Performance of a Generic Approach in Automated Essay Scoring

Authors

  • Yigal Attali, Educational Testing Service
  • Brent Bridgeman, Educational Testing Service
  • Catherine Trapani, Educational Testing Service

Keywords:

essay writing assessment, automated scoring

Abstract

A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not based on prompt-specific information or vocabulary. This approach has both logistical and validity-related advantages. This paper evaluates the performance of generic scores in the context of the e-rater® automated essay scoring system. Generic scores were compared with prompt-specific scores and scores that included prompt-specific vocabulary features. These comparisons were performed with large samples of essays written to three writing assessments: the GRE General Test argument and issue tasks and the TOEFL independent task. Criteria for evaluation included level of agreement with human scores, discrepancy from human scores across prompts, and correlations with other available scores. Results showed small differences between generic and prompt-specific scores and adequate performance of both types of scores compared to human performance.
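To make the contrast between generic and prompt-specific scoring concrete, the sketch below illustrates the basic idea: one fixed set of feature weights is applied to essays from every prompt, rather than refitting weights for each prompt. The feature names and weights are hypothetical illustrations only and do not represent e-rater's actual model.

```python
# Minimal sketch of generic weighted-feature scoring, under the assumption
# that features are standardized and combined linearly. Feature names and
# weights are hypothetical, not e-rater's actual feature set or weights.

from typing import Dict

# One prompt-independent weight set, reused for all prompts.
GENERIC_WEIGHTS: Dict[str, float] = {
    "grammar": 0.15,
    "usage": 0.10,
    "mechanics": 0.10,
    "organization": 0.25,
    "development": 0.25,
    "word_choice": 0.15,
}

def generic_score(features: Dict[str, float]) -> float:
    """Combine standardized feature values with a single fixed weight set.

    Because the same weights apply to every prompt, the resulting score
    carries the same meaning across existing and new prompts.
    """
    return sum(GENERIC_WEIGHTS[name] * value for name, value in features.items())

# Example: essays from two different prompts are scored with identical weights.
essay_from_prompt_a = {"grammar": 0.8, "usage": 0.7, "mechanics": 0.9,
                       "organization": 0.6, "development": 0.5, "word_choice": 0.7}
essay_from_prompt_b = {"grammar": 0.5, "usage": 0.6, "mechanics": 0.7,
                       "organization": 0.8, "development": 0.7, "word_choice": 0.6}
print(round(generic_score(essay_from_prompt_a), 3))
print(round(generic_score(essay_from_prompt_b), 3))
```

A prompt-specific model would instead estimate a separate weight set (and possibly prompt-specific vocabulary features) for each prompt, which is what the paper compares against this generic scheme.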

Published

2010-08-25

How to Cite

Attali, Y., Bridgeman, B., & Trapani, C. (2010). Performance of a Generic Approach in Automated Essay Scoring. The Journal of Technology, Learning and Assessment, 10(3). Retrieved from https://ejournals.bc.edu/index.php/jtla/article/view/1603

Section

Articles