Tuesday, May 01, 2012

Robograding Essays

The mighty Kevin Drum has this on the robograding of essays.

Two points:

1. The discussion seems to be focused on the question of using robograders for exams. But what I thought was: this'd be a great way for students to get feedback on their writing. Not such a hot idea for graded work, maybe, but one reason students can't write is that commenting is so labor-intensive. They get so little feedback that they can't learn. Note: scrawling 'vague' in the margin doesn't count as commenting.

(Note on my great virtue: I spend hours on each 5-page paper in my upper division courses, and my comments are often longer than the papers. Note one of my many vices: I don't give papers at all in my lower-division courses.)

2. OTOH, my students actually tend to produce work that is, syntactically speaking, a fair approximation of English. It's the semantic stuff they can't handle. It's not gibberish, but it doesn't make all that much sense. The problem isn't so much outright falsehood as it is that they really just can't think very clearly. They can produce a bunch of grammatical sentences that would seem to make something like sense if you weren't familiar with the topic, and if you read it fast. But they just can't seem to get the parts to fit together. Worse, they can't seem to grasp the principle that if the parts don't fit together, you need to acknowledge that and try again. I wish I could produce some examples for you, but I've found in the past that I'm simply incapable of doing that. I can produce something that makes fair sense, and I can produce gibberish if I want, but I can't produce this weird stuff my students produce.

Anyway, point 2 makes me worry that point 1 is false, and that robograding might just make things worse.

