National Institute for Literacy
 

[Assessment] "Alternative" assessment

David Rosen djrosen at comcast.net
Sun Feb 12 15:24:02 EST 2006


Marie, Ajit and others,

Two terms evaluators often use that could be introduced in this
discussion are "direct" and "indirect" measures. A "direct" measure
is one which measures an actual performance of a specified task. An
"indirect" measure (multiple-choice paper-and-pencil tests are the
best-known example) is one which "stands for" the performance. When
the GED Testing Service, several years ago, changed the GED writing
test from a multiple-choice assessment _about_ writing to a
performance test of essay writing, it moved from an indirect
("mediated") assessment to a direct assessment (of essay writing).
Most evaluators would agree that, given a world of limitless money
and time, direct assessments are better -- that is, they more
accurately measure the specified and desired performance of a task --
than indirect ones.

"Authentic" and "performance-based" assessments are synonymous (for
me) with "direct" assessments. Paper and pencil multiple choice tests
(whether standardized or not) are the best-known and most widely used
example of "indirect" or "mediated" assessments.

Two other terms I find useful when categorizing assessments,
measures, or observations are "obtrusive" and "unobtrusive". The
assessments we talk about on this list are usually, if not always,
"obtrusive," that is, the student knows s/he is being tested. This
can be good ("positively obtrusive"), where the assessment itself
causes some additional positive learning, or bad ("negatively
obtrusive"), where the assessment interferes with or prevents
learning. One of the biggest complaints about the obtrusive
assessments mandated by NCLB is that they are negatively obtrusive:
they, and some would argue the preparation for them, take away a lot
of valuable learning time. There are other ways in which assessments
can be negatively obtrusive, too, producing false results for those
for whom the testing situation creates fear that impedes normal or
ordinary performance.

I find "unobtrusive" assessments the most interesting, where the
learner is assessed but doesn't know it, and just sees it as part of
the learning, indistinguishable from the rest. Portfolio assessment
can be unobtrusive, as can journal writing, or a weekly set of
problems or exercises which the student regards as regular classwork
or homework. Teachers ask students questions all the time many of
which are used as unobtrusive individual or group assessment. And
some teachers do this systematically. In theory, unobtrusive
assessments could be standardized, although I know of no example of
this.( I wonder if large college lecture classes where students
sometimes have assessment consoles that enable them to respond
immediately and have their responses immediately tabulated and
graphed for the instructor might by now have evolved some standard
procedures which make them valid and reliable. Anyone know?)
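
The tabulation such consoles perform is, in itself, simple
aggregation. As a purely illustrative sketch -- the responses and
answer key below are invented, not taken from any real console
system -- a tally in Python might look like this:

    from collections import Counter

    # Hypothetical clicker responses to one multiple-choice question.
    responses = ["B", "A", "B", "C", "B", "D", "B", "A", "B", "C"]
    correct = "B"  # invented answer key for this sketch

    tally = Counter(responses)
    total = len(responses)
    for choice in sorted(tally):
        # Crude text "graph": one mark per response, plus a percentage.
        print(f"{choice}: {'#' * tally[choice]} ({tally[choice] / total:.0%})")
    print(f"Correct ({correct}): {tally[correct] / total:.0%}")

The open question above is about procedure -- item quality,
administration conditions, scoring rules -- not about this
arithmetic, which is the easy part.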

I think "alternative" is a vague word which doesn't help us to think
differently about kinds and purposes of assessment, whereas some of
these words raise some important and interesting differences in kinds
of assessment.

David J. Rosen
djrosen at comcast.net



On Feb 4, 2006, at 12:42 PM, Gopalakrishnan, Ajit wrote:


> Marie, et al,
>
> By "alternative", I presume you mean that these assessment options
> are an alternative to multiple-choice assessments. Is that a fair
> inference? I sometimes refer to alternative assessments as
> non-multiple-choice assessments, just to make clear what I am
> talking about.
>
> From my perspective, referring to them as authentic seems to muddy
> this discussion. Webster provides the following two definitions of
> authentic, which may help to illustrate my thinking:
> a) worthy of acceptance or belief as conforming to or based on fact
> <paints an authentic picture of our society>
> b) true to one's own personality, spirit, or character
>

> So for example, a student's CASAS scale score in math (say 212)
> from a multiple-choice test may be worthy of acceptance as an
> indication of a person's math ability. An analysis of the test item
> responses may even provide greater information about a person's
> strengths and weaknesses. However, they cannot say much about how
> the student perceives the relation of "math" to his/her own
> personality and life. Two students at entry might both achieve a
> score of 207 in math for very different reasons. One student might
> have liked math, viewed herself as being capable of learning math,
> but just not used it for many years. The other student might have
> never liked math, generally seen herself as having other strengths,
> but been forced to use math as part of her job. To ascertain this
> type of information, the teacher might have to talk to the student
> and find out the student's past experiences with math, the
> student's perceptions of its importance in his/her life, etc. Then,
> a custom assessment/project can be designed that is meaningful and
> authentic to that particular student.
>

> From my perspective, all standardization (whether of multiple-choice
> or non-multiple-choice assessments) will to some extent reduce the
> authenticity for the student. The CASAS system attempts to address
> this by providing assessments that are relevant to adults and based
> in various contexts (life skills, employability skills, workforce
> learning, citizenship, etc.) so that the student can be assessed in
> contexts that are somewhat authentic to their experiences and goals.
>
> Therefore, I prefer the term alternative assessments, because then
> we can focus our discussion on the differences between
> multiple-choice assessments and non-multiple-choice assessments.
>

> There is no question that non-multiple-choice assessments can be
> legitimate and have many strengths. For example, Connecticut is
> currently piloting a CASAS workplace speaking assessment. This is a
> standardized assessment designed for ESL learners who are currently
> working, to demonstrate their listening and speaking abilities in a
> workplace context. Compared to the CASAS listening multiple-choice
> assessments, which we have used over the years, the speaking
> assessment has the potential to give the instructor a greater
> understanding of a student's strengths and weaknesses. Students
> also seem to enjoy taking the assessment. However, it needs to be
> administered one-on-one, unlike the listening assessment, which can
> be group-administered. The speaking assessment also places a
> greater training and certification burden on the test administrator
> and scorer. We have experienced many of these challenges with our
> statewide implementation of the CASAS Functional Writing Assessment
> over the past few years. Kevin alluded to some of those challenges,
> such as maintaining scorer certification and interrater
> reliability. The scoring rubrics used in the writing and the
> speaking assessments can be valuable tools for classroom
> instruction.
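
The interrater reliability mentioned above is typically quantified
with an agreement statistic such as Cohen's kappa, which corrects raw
scorer agreement for the agreement expected by chance. A minimal
sketch in Python -- the rubric scores below are invented for
illustration, not CASAS data:

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters' categorical scores."""
        n = len(rater_a)
        # Observed agreement: fraction of items scored identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement, from each rater's marginal distribution.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        categories = set(rater_a) | set(rater_b)
        p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
                  for c in categories)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical 1-4 rubric scores from two scorers on ten samples.
    scorer_1 = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
    scorer_2 = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]
    print(f"kappa = {cohens_kappa(scorer_1, scorer_2):.2f}")  # 0.71

Maintaining scorer certification is, in large part, a matter of
keeping an agreement statistic like this acceptably high over time.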

> In my opinion, at least some non-multiple-choice assessments should
> be standardized so that they can be used to broaden the array of
> assessments available for state-level reporting/accountability.
>
> Thanks.
> Ajit
>

> Ajit Gopalakrishnan
> Education Consultant
> Connecticut Department of Education
> 25 Industrial Park Road
> Middletown, CT 06457
> Tel: (860) 807-2125
> Fax: (860) 807-2062
> ajit.gopalakrishnan at po.state.ct.us
>

> ________________________________
>
> From: assessment-bounces at nifl.gov [mailto:assessment-
> bounces at nifl.gov] On Behalf Of Marie Cora
> Sent: Thursday, February 02, 2006 11:52 AM
> To: Assessment Discussion List
> Subject: [Assessment] Legitimacy of alternative tools
>
> Hi Bruce and everyone,
>
> Bruce, you said:
>
> "I think putting forth the strengths and legitimacy of tools such
> as portfolios, outcome checklists, holistically scored writing
> samples, etc. is a good way to go."
>
> This sounds like a very good path to go down to me. I think people
> would have a lot to say and share about alternative tools, their
> uses, and their strengths. It would be a great exercise to list
> them all out and discuss the strengths, uses, and limitations of
> each one.
>
> What questions do folks have about alternative assessments: using
> them, seeking them out, developing them, whatever area most
> intrigues you?
>
> What can folks share with the rest of us in terms of "the strengths
> and legitimacy" of alternative tools such as portfolios,
> checklists, analytic/holistic scoring, rubric use, writing samples,
> in-take/placement processes?
>
> Are any of the tools you use standardized? Not standardized? Do
> you think that this is important? Why or why not?
>
> Are any of the tools used for both classroom and program purposes?
>
> I have other questions for you, but let's leave it at that for
> right now. Let us hear what your thoughts are. We're looking
> forward to it.
>
> Thanks,
>
> marie cora
> Assessment Discussion List Moderator
