ERIC: Education Resources Information Center

Your search found 139 results.

Search Criteria

  • (Thesaurus Descriptors:"Test Scoring Machines")

Search Results

Now showing results 1-10 of 139.

1. How Important Is Content in the Ratings of Essay Assessments? (EJ785796)

Author(s): Shermis, Mark D.; Shneyderman, Aleksandr; Attali, Yigal
Source: Assessment in Education: Principles, Policy & Practice, v15 n1 p91-105 Mar 2008
Pub Date: 2008-03
Pub Type(s): Journal Articles; Reports - Evaluative
Peer-Reviewed: Yes
Descriptors: Predictor Variables; Test Scoring Machines; Essays; Grade 8; Grade 6; Content Analysis; Literary Genres; Prompting; Word Processing; Scoring; Achievement Rating; Value Judgment
Abstract: This study was designed to examine the extent to which "content" accounts for variance in scores assigned in automated essay scoring protocols. Specifically, it was hypothesised that certain writing genres would emphasise content more than others. Data were drawn from 1668 essays calibrated at two grade levels (6 and 8) using "e-rater[TM]", an automated essay scoring engine with established validity...

Full-Text Availability Options: Find in a Library | Publisher's Web Site

2. From #2 Pencils to the World Wide Web: A History of Test Scoring (EJ811614)

Author(s): Zytowski, Donald G.
Source: Journal of Career Assessment, v16 n4 p502-511 2008
Pub Date: 2008
Pub Type(s): Journal Articles; Reports - Descriptive
Peer-Reviewed: No
Descriptors: Educational Testing; Achievement Tests; Computers; Scoring; Academic Aptitude; Internet; Computer Assisted Testing; Psychological Evaluation; Evaluation Methods; Standardized Tests; Student Interests; Test Scoring Machines
Abstract: The present highly developed status of psychological and educational testing in the United States is in part the result of many efforts over the past 100 years to develop economical and reliable methods of scoring. The present article traces a number of methods, ranging from hand scoring to present-day computer applications, stimulated by the need to economically score large-scale scholastic aptitude...

Full-Text Availability Options:

More Info:
Help | Tutorial
Help Finding Full Text
More Info:
Help
Find in a Library
Publisher's Web Site

3. An Evaluation of Computerised Essay Marking for National Curriculum Assessment in the UK for 11-Year-Olds (EJ776010)

Author(s): Hutchison, Dougal
Source: British Journal of Educational Technology, v38 n6 p977-989 Nov 2007
Pub Date: 2007-11
Pub Type(s): Journal Articles; Reports - Evaluative
Peer-Reviewed: Yes
Descriptors: Essays; Computer Uses in Education; Scoring; Comparative Analysis; Foreign Countries; Scores; Test Scoring Machines; Writing (Composition); Elementary School Students
Abstract: This paper reports a comparison of human and computer marking of approximately 600 essays produced by 11-year-olds in the UK. Each essay script was scored by three human markers. Scripts were also scored by the "e-rater" program. There was good agreement between human and machine marking. Scripts with highly discrepant scores were flagged and assessed blind by expert markers for characteristics...
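
The flagging step this abstract describes — pooling the human marks, comparing them with the machine mark, and sending large discrepancies for blind re-marking — can be sketched in a few lines. The sketch below uses entirely synthetic scores, a simple Pearson correlation as the agreement measure, and an assumed two-standard-deviation flagging threshold; none of these choices are taken from the paper itself.

# Hedged illustration of human-machine agreement checking and discrepancy
# flagging; all scores are synthetic and the threshold is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n_scripts = 600
true_quality = rng.normal(0, 1, n_scripts)

# Three synthetic human markers and one synthetic machine score per script.
human = true_quality[:, None] + rng.normal(0, 0.5, (n_scripts, 3))
machine = true_quality + rng.normal(0, 0.5, n_scripts)

human_mean = human.mean(axis=1)
agreement = np.corrcoef(human_mean, machine)[0, 1]  # Pearson agreement

# Flag scripts whose human-machine gap is more than 2 SDs from the mean gap.
gap = human_mean - machine
flagged = np.flatnonzero(np.abs(gap - gap.mean()) > 2 * gap.std())
print(f"correlation: {agreement:.2f}, scripts flagged for re-marking: {len(flagged)}")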

Full-Text Availability Options: Find in a Library | Publisher's Web Site

4. Improving Content Validation Studies Using an Asymmetric Confidence Interval for the Mean of Expert Ratings (EJ682735)

Author(s): Penfield, Randall D.; Miller, Jeffrey M.
Source: Applied Measurement in Education, v17 n4 p359-370 Oct 2004
Pub Date: 2004-10-01
Pub Type(s): Journal Articles; Reports - General
Peer-Reviewed: Yes
Descriptors: Student Evaluation; Evaluation Methods; Content Validity; Scoring; Scores; Automation; Test Scoring Machines

Full-Text Availability Options: Find in a Library | Publisher's Web Site

5. Automated Tools for Subject Matter Expert Evaluation of Automated Scoring (EJ682734)

Author(s): Williamson, David M.; Bejar, Isaac I.; Sax, Anne
Source: Applied Measurement in Education, v17 n4 p323-357 Oct 2004
Pub Date: 2004-10-01
Pub Type(s): Reports - Evaluative; Journal Articles
Peer-Reviewed: Yes
Descriptors: Validity; Scoring; Scores; Evaluation Methods; Quality Control; Test Scoring Machines; Automation
Abstract: As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this article explores the potential utility of Classification and Regression...
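
The article names Classification and Regression Trees (CART) as a tool for subject-matter-expert evaluation of automated scores. As a hedged illustration of that general technique only — the features, data, and depth limit below are invented, not drawn from the ARE vignette — a regression tree can be fit to human-machine disagreement and its learned splits inspected:

# Illustrative CART sketch: model score disagreement as a function of
# invented response features, then print the learned decision rules.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(2)
n = 300
features = rng.normal(size=(n, 2))  # hypothetical: response length, complexity
# Synthetic rule: longer responses tend to produce larger disagreement.
disagreement = (features[:, 0] > 1.0).astype(float) + rng.normal(0, 0.1, n)

tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(features, disagreement)
print(export_text(tree, feature_names=["length", "complexity"]))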

Full-Text Availability Options: Find in a Library | Publisher's Web Site

6. Automated Scoring Technologies and the Rising Influence of Error (EJ716796)

Author(s): Cheville, Julie
Source: English Journal, v93 n4 p47 Mar 2004
Pub Date: 2004-03-01
Pub Type(s): Journal Articles; Reports - Descriptive
Peer-Reviewed: Yes
Descriptors: Scoring; Test Scoring Machines; Writing Exercises; Educational Policy; Private Sector; Error Patterns
Abstract: Professional development organizations can help local decision-makers understand the risks that automated scoring technologies pose to language and writing practices. These automated assessments drive changes that benefit private industry but conflict with research on writing and language.

Full-Text Availability Options: Find in a Library

7. Beyond Essay Length: Evaluating e-rater[R]'s Performance on TOEFL[R] Essays. Research Reports. Report 73. RR-04-04 (ED492918)

Author(s): Chodorow, Martin; Burstein, Jill
Source: Educational Testing Service
Pub Date: 2004-02
Pub Type(s): Numerical/Quantitative Data; Reports - Research; Tests/Questionnaires
Peer-Reviewed: N/A
Descriptors: Essays; Test Scoring Machines; English (Second Language); Student Evaluation; Scores; Spanish; Semitic Languages; Japanese; Writing Evaluation
Abstract: This study examines the relation between essay length and holistic scores assigned to Test of English as a Foreign Language[TM] (TOEFL[R]) essays by e-rater[R], the automated essay scoring system developed by ETS. Results show that an early version of the system, e-rater99, accounted for little variance in human reader scores beyond that which could be predicted by essay length. A later version of...
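
The study's central question — how much variance in human scores a machine score explains beyond essay length alone — is an incremental-R² comparison between nested regression models. A minimal sketch follows; the data, coefficients, and noise levels are entirely synthetic and imply nothing about e-rater's actual behaviour.

# Hedged sketch of an incremental-variance (nested OLS) comparison
# with synthetic data: does "machine" add predictive value over "length"?
import numpy as np

rng = np.random.default_rng(0)
n = 200
length = rng.normal(300, 80, n)                   # synthetic essay lengths
human = 0.01 * length + rng.normal(0, 1, n)       # synthetic human scores
machine = 0.008 * length + 0.5 * human + rng.normal(0, 1, n)

def r_squared(y, *predictors):
    """R^2 from an OLS fit of y on an intercept plus the given predictors."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_length = r_squared(human, length)
r2_both = r_squared(human, length, machine)
print(f"length alone: {r2_length:.3f}, plus machine: {r2_both:.3f}, "
      f"increment: {r2_both - r2_length:.3f}")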

Full-Text Availability Options: PDF ERIC Full Text (354K)

8. The Effect of Specific Language Features on the Complexity of Systems for Automated Essay Scoring. (ED482933)

Add this record to My Clipboard for printing, emailing, exporting, and saving.  

Author(s): Cohen, Yoav; Ben-Simon, Anat; Hovav, Myra
Source: N/A
Pub Date: 2003-10
Pub Type(s): Information Analyses; Speeches/Meeting Papers
Peer-Reviewed: N/A
Descriptors: Essays; Language Patterns; Language Variation; Scoring; Test Scoring Machines
Abstract: This paper focuses on the relationship between different aspects of the linguistic structure of a given language and the complexity of the computer program, whether existing or prospective, that is to be used for the scoring of essays in that language. The first part of the paper discusses common scales used to assess writing products, then briefly describes various methods of Automated Essay Scoring...

Full-Text Availability Options: PDF ERIC Full Text (420K)

9. Essay Assessment with Latent Semantic Analysis (EJ773582)

Author(s): Miller, Tristan
Source: Journal of Educational Computing Research, v29 n4 p495-512 2003
Pub Date: 2003
Pub Type(s): Journal Articles; Reports - Evaluative
Peer-Reviewed: Yes
Descriptors: Semantics; Test Scoring Machines; Essays; Semantic Differential; Comparative Analysis; Methods Research; Evaluation Methods; Writing Evaluation; Writing Research; Computer Assisted Testing; Program Descriptions; Program Implementation
Abstract: Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems built on LSA, including the "Intelligent Essay Assessor"...
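
At its core, LSA builds a term-document matrix, projects it into a low-rank latent space with a truncated singular value decomposition, and compares documents by cosine similarity in that space. The toy sketch below shows only that mechanism; the counts are invented, and production essay-scoring systems train on large corpora with weighting schemes such as TF-IDF.

# Toy LSA sketch: truncated SVD of a term-document matrix, then cosine
# similarity between documents in the reduced space. Counts are invented.
import numpy as np

# Rows = terms, columns = documents (toy raw counts).
X = np.array([
    [2, 0, 1],   # "essay"
    [1, 1, 0],   # "score"
    [0, 2, 1],   # "semantic"
    [1, 0, 2],   # "writing"
], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                     # latent dimensions to keep
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T    # one row per document

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(doc_vecs[0], doc_vecs[2]))   # similarity of documents 0 and 2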

Full-Text Availability Options: Find in a Library | Publisher's Web Site

10. Assessing Writing through the Curriculum with Automated Essay Scoring. (ED477929)

Author(s): Shermis, Mark D.; Raymat, Marylou Vallina; Barrera, Felicia
Source: N/A
Pub Date: 2003-04
Pub Type(s): Reports - Descriptive; Speeches/Meeting Papers
Peer-Reviewed: N/A
Descriptors: College Students; Essays; Higher Education; Portfolio Assessment; Portfolios (Background Materials); Scoring; Test Scoring Machines; Writing Evaluation; Writing Improvement
Abstract: This paper provides an overview of some recent work in automated essay scoring that focuses on writing improvement at the postsecondary level. The paper illustrates the Vantage Intellimetric[TM] automated essay scorer that is being used as part of a Fund for the Improvement of Postsecondary Education (FIPSE) project that uses technology to grade electronic portfolios. The purpose of the electronic...

Full-Text Availability Options: PDF ERIC Full Text (468K)
