[NIFL-ASSESSMENT:921] RE: Voice in writing

From: George Demetrion (george.demetrion@lvgh.org)
Date: Thu Feb 17 2005 - 15:30:03 EST


The CASAS writing assessment is valuable in assessing independent
writing skills.  I would question its value in evaluating "voice" in
that the writing prompts are highly selective in asking students to
respond to one of several descriptive scenarios.

In measuring accuracy of response based on its four to five rubric
categories, it is not particularly supportive of process approaches to
writing, which often provide the idiosyncratic formats wherein "voice"
might flourish.

This is not to take away from what CASAS does measure--accuracy and
fullness of response to a specific prompt--and there is much merit to
that kind of measurement. Voice, in my view, requires a different sort
of measurement.  For example, one might get at that by evaluating a
collection of student writing in a given program according to the
literary quality of the expression.  

I'm not sure a rubric would be the best form of measurement for that,
though I would not rule that out.  Also, on the CASAS writing
assessment, the resulting essay might be viewed as a manifestation of
authorial voice, but that's not what I would be primarily looking for in
such an "artificially" constructed essay.

While there may be (and ideally should be) convergences in the
pedagogical assumptions undergirding the type of writing fostered by the
CASAS writing prompts and the more free-flowing "existential" narrative
encouraged in process writing schools of thought, the differences may be
even more critically important.

Having said this, I believe a worthy discussion could ensue here on the
multiple purposes of a writing program in adult literacy education below
the GED level--a discussion that could be stimulated by reflecting on
the differences between the types of writing that CASAS prompts and
process writing orientations elicit.

What also would be of interest are the ways in which the REEP rubric
relates to the two types of writing.

George Demetrion

-----Original Message-----
From: nifl-assessment@nifl.gov [mailto:nifl-assessment@nifl.gov] On
Behalf Of Dianne Glass
Sent: Thursday, February 17, 2005 2:55 PM
To: Multiple recipients of list
Subject: [NIFL-ASSESSMENT:920] RE: Voice in writing

Kansas has used the CASAS Functional Writing Assessment (FWA) for almost
10 years.  While it requires an enormous commitment of time and energy
to ensure that the scoring of a performance-based assessment is
standardized, Kansas adult educators have responded positively to the
lengthy process of being "certified" to use the FWA and to maintaining
certification.  They report that the process has helped them become much
better teachers of writing.  
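
Purely to illustrate what that kind of standardization checks, here is a
minimal Python sketch of an inter-rater agreement calculation of the sort
a certification process might use: a trainee's scores on anchor papers
are compared against the consensus scores, and exact and adjacent
agreement rates are computed. All scores and the pass threshold below are
hypothetical; this is not the actual FWA certification procedure.

    def agreement_rates(trainee, anchors):
        """Return (exact, adjacent) agreement between a trainee rater's
        scores and the consensus anchor scores on the same papers."""
        assert len(trainee) == len(anchors)
        n = len(anchors)
        exact = sum(t == a for t, a in zip(trainee, anchors)) / n
        adjacent = sum(abs(t - a) <= 1 for t, a in zip(trainee, anchors)) / n
        return exact, adjacent

    # Hypothetical consensus scores on ten anchor papers (1-5 scale)
    anchors = [3, 4, 2, 5, 3, 1, 4, 3, 2, 4]
    trainee = [3, 4, 3, 5, 3, 2, 4, 3, 2, 5]
    exact, adjacent = agreement_rates(trainee, anchors)
    print(f"exact agreement: {exact:.0%}, adjacent (within 1): {adjacent:.0%}")
    # A certification rule might, hypothetically, require 80% exact agreement.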


Dianne S. Glass
Director of Adult Education
Kansas Board of Regents
1000 SW  Jackson Street, Suite 520
Topeka, KS  66612-1368
Phone:  785.296.7159
FAX:  785.296.0983
dglass@ksbor.org



>>> ltaylor@casas.org 2/17/2005 12:18:12 PM >>>
Marie,

Howard has articulated the main reason that the CASAS rubric is for both
ABE and ESL learners. He said, "We don't hold learners to different
standards. Our instructors see 'good writing' as 'good writing' whoever
is doing the writing."

We would add that employers and others on the receiving end of our
students' writing don't have different standards, either.

We would recommend placing ESL and ABE students in different classes,
since instruction and the kinds of strengths and errors will be very
different for the two groups, but the general characteristics of writing
for both groups can be described within a single rubric. We have been
working with this for nearly ten years and have become very comfortable
with scoring both types of learners on the same rubric, though it is
often necessary to be careful not to over-reward ESL learners for
"trying" when they haven't quite succeeded in writing at a certain
level.

In answer to your earlier questions about writing prompts, I can respond
with respect to the CASAS Functional Writing Assessment Picture Task,
which is currently being used for accountability reporting in Kansas,
Iowa, Connecticut, Oregon, Indiana, Vermont, and New York Even Start.
Prompts for this task are line drawings showing a scene with a central
critical incident as well as a number of other things happening in the
picture. This type of prompt can be answered by students from beginning
to advanced levels in ABE, ASE, and ESL programs.

It takes a long time to develop a viable prompt, with many rounds of
revisions based on field-testing input from teachers and students and
back-and-forth work with an artist. The prompts are written by a small
team of test developers who have extensive experience as adult ed.
teachers. Topics for the prompts come from needs assessments from adult
ed. programs and workplace surveys. We currently have seven prompts:
four on general life skills topics (a car accident scene, a grocery
store check-out scene, a park scene, and a department store scene) and
three with a workplace focus (a restaurant kitchen scene, a hotel scene,
and a warehouse scene).

Like the REEP, these prompts are scored with an analytic rubric, but
with slightly different categories: Content; Organization; Word Choice;
Grammar and Sentence Structure; and Spelling, Capitalization and
Punctuation. The categories are weighted, with more importance given to
the first three to emphasize the communication of ideas in writing. We
have recently completed a study to convert the rubric scores to a common
IRT scale, which provides a more accurate means of reporting results
across prompts. We have also just completed a cut score study to refine
the relationship of the CASAS Picture Task writing scores to the NRS
levels.
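
To make the weighting concrete, here is a minimal Python sketch of a
weighted analytic rubric score and a cut-score lookup. The category
names come from the description above; every number (weights, scores,
cut points) is a hypothetical illustration, not an actual CASAS weight
or NRS cut score.

    # All weights and cut points below are invented for illustration.
    WEIGHTS = {
        "content": 3,           # first three categories weighted more
        "organization": 3,      # heavily, to emphasize communication
        "word_choice": 2,       # of ideas
        "grammar_sentence_structure": 1,
        "spelling_capitalization_punctuation": 1,
    }

    def composite_score(scores):
        """Weighted sum of per-category rubric scores (say, each 1-5)."""
        return sum(WEIGHTS[cat] * s for cat, s in scores.items())

    def level(composite, cuts=((14, "Beginning"), (28, "Intermediate"))):
        """Map a composite score to a reporting level via cut scores."""
        for cut, label in cuts:
            if composite <= cut:
                return label
        return "Advanced"

    scores = {"content": 4, "organization": 3, "word_choice": 4,
              "grammar_sentence_structure": 2,
              "spelling_capitalization_punctuation": 3}
    c = composite_score(scores)
    print(c, level(c))   # -> 34 Advanced

An IRT-based conversion, as described above, would replace the simple
cut-score lookup with a calibrated scale, but the weighting logic would
be the same.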

With all of the work that goes into developing and standardizing a test
prompt, it is not made available for classroom practice. However, we
have found several published materials that contain similar types of
pictures that can be used for classroom practice.

We encourage programs to share the rubric with students for instruction,
in addition to using it to communicate test results to teachers and
learners. Many teachers tell us that completing the training for the
writing assessment, which focuses on the scoring rubric, has given them
a better understanding of how to approach the teaching of writing. The
analytic rubric provides clear diagnostic information about students'
strengths and weaknesses in the different rubric categories.

I am very pleased that some states are choosing to include writing in
the mix of assessments that can be reported for accountability purposes.
It is more work to include performance assessment in a state's
accountability system, due to the additional training and scoring
demands, but the states that are doing it have found it to be worth the
extra effort.

Linda Taylor, CASAS
(800) 255-1036, ext. 186


-----Original Message-----
From: nifl-assessment@nifl.gov [mailto:nifl-assessment@nifl.gov] On
Behalf
Of Marie Cora
Sent: Thursday, February 17, 2005 9:54 AM
To: Multiple recipients of list
Subject: [NIFL-ASSESSMENT:914] RE: Voice in writing

Hi Bonnie, thanks for this.

Yes, I think that it would have been really tricky for me to have a
rubric that didn't distinguish between ESOL/ABE students--unless they
are transitioning from ESOL to ABE, perhaps. It's tricky enough, as you
note, to adhere to rubric anchors and so forth, so working with
different populations on the same assessment would add a layer that I
would also find difficult.

CASAS folks: can you tell us why the writing rubric is not separate?
What's the rationale there? It seems like the needs, esp. at the lower
levels, would be very different.

REEP folks: what do you think about that? Perhaps that was never a
consideration for you, though, since REEP serves the ESOL population (is
that right?).

Thanks,
marie

-----Original Message-----
From: nifl-assessment@nifl.gov [mailto:nifl-assessment@nifl.gov] On
Behalf Of bonniesophia@adelphia.net 
Sent: Tuesday, February 15, 2005 1:21 PM
To: Multiple recipients of list
Subject: [NIFL-ASSESSMENT:908] RE: Voice in writing

I, too, have been intrigued by the idea of "voice" in the rubric, and
while I intuitively "know" what it means, I'm interested, as an emerging
writing specialist, in what elements would constitute voice beyond more
traditional "academic" ways of "measuring" it. I think of the clarity or
persuasiveness of a point of view supported with meaningful examples,
the personal voice of a narrator struggling with complex questions,
forthright emotion strikingly articulated with imagery or other means,
an attempt at critical thinking or "learning to learn,"
self-reflectiveness... I'd be interested in hearing from others.
Another point, which I encountered when I was involved with
Connecticut's work with the CASAS writing assessments: the rubric was
not meant to distinguish between ABE and ESL students. As an evaluator
and ESL specialist, I was at a disadvantage: having attained a certain
level of skill in "translating" English learners' language into
meaningful utterances, I'd automatically bring that to my evaluation. It
was extremely difficult to adhere to the rubric controls and anchors and
not want to commend an ESL learner who was attempting, with limited
language ability, to voice something difficult to articulate in another
language, as having communicated more than in fact they did.
Best,
Bonnie Odiorne, Ph.D.
Writing Center, English Language Institute
Post University, Waterbury, CT

Original Message:
-----------------
From: Marie Cora marie.cora@hotspurpartners.com 
Date: Tue, 15 Feb 2005 12:02:28 -0500 (EST)
To: nifl-assessment@literacy.nifl.gov 
Subject: [NIFL-ASSESSMENT:906] RE: Voice in writing


Hi everyone,

A couple of observations:  

First, please do note that this assessment is a fine example of a
performance-based assessment that has been standardized. So if anyone
still thinks that standardized assessments all look like TABE, consider
your myth debunked.

I think that capturing voice in writing is quite important, and I'm glad
that the REEP rubric includes this area. Without voice, the examination
of the writing rests entirely on its 'academics'--and I feel that leaves
out the writer's (emerging) personality. Looking around a little, I note
that not many other writing assessments take voice into account (the GED
does not, for example). I also think that because voice is a dimension
of the rubric, students will pay more attention to that area and view it
as equally important as the other dimensions. (A bit of "what counts
gets counted" there.)

What do others think about voice and the other dimensions?

marie

