Machine Scoring of Student Writing
Programs that purport to read and evaluate student writing are being heavily marketed to K-12 and college administrators and teachers; Criterion, an ETS product, is also marketed directly to students through college bookstores. The marketing muscle that companies like ETS, Vantage, and Intellimetric (now Pearson) put behind these programs gives us great concern. We have assembled this collection so that teachers will know what is out there and, should their school or school system consider adopting one of these machine-scoring services, will be able to act appropriately.
In this collection we argue, paraphrasing Ed White, this country's dean of writing assessment research, that writing to a machine is not writing at all: we write to human beings, for human purposes. Building a text for a machine-scoring program is not writing but some other activity, perhaps more closely related to game-playing than to human communication.
We begin with our own essay, in which we make our argument and support it by creating a text for Criterion and then reporting and analyzing its response. Not surprisingly, Criterion's responses are either vague and misleading or entirely wrong. In the second resource, Carl Whithaus reminds us that students are already writing in word-processing programs whose spell- and grammar-checkers respond to their writing. Instead of condemning these computer-generated responses out of hand, Whithaus argues, we should teach students how to understand machine-generated feedback. Our third resource is Beth Ann Rothermel's description of the ways machine-scoring programs are being marketed to K-12 schools and teachers, even supplied to schools already installed on computers bought in large batches. Rothermel describes the effects of these programs on the teacher, who must now deal not only with students' writing but also with the machine's responses to that writing. If our own experiences with Criterion are any indication, those responses will require a great deal of explanation.
Our fourth resource is a study by Anne Herrington and Sarah Stanley of Criterion's bias toward a normalized English, which, coupled with the program's focus on word-level error, is harmful to writers generally but particularly so to English Language Learners. Our fifth and final resource is a position statement from the Conference on College Composition and Communication (CCCC), a constituent group of the National Council of Teachers of English (NCTE). This position statement is unequivocal: "Writing-to-a-machine violates the essential nature of writing."