This post was published on the now-closed HuffPost Contributor platform.

I ain't letting no computer grade this column. No way.

For one thing, essay-grading software can't tell the difference between the intentional errors I just made and the kind of mistakes that would torpedo a college or job application. For another, who needs a machine to tell me I'm a hack when people are standing in line to do that already?

If big brands like Harvard, M.I.T. and Stanford weren't lending their haloes to computer programs that grade written answers to essay questions, there probably wouldn't have been a recent front-page New York Times story about it, let alone a petition by a group called Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment, which that story says had collected the signatures of nearly 2,000 educators.

There are good reasons to be skeptical of machines grading essays. Their algorithms can't distinguish between arguments supported by factual evidence and cases built on canards. They're reliable enough to dock points for clichés, but they're not subtle enough to reward subtlety. They're fans of the kind of five-paragraph structure (intro, three examples, conclusion) that can be taught, but that can also be a vehicle for nonsense. Michele Bachmann's policy statements may hew to the traditional essay template, but the "thinking" that goes into them deserves those scare quotes. As that petition by Professionals Against Machine Scoring says, computers can't "measure the essentials of effective written communication: accuracy, reasoning, adequacy of evidence, good sense, ethical stance, convincing argument, meaningful organization, clarity, and veracity, among others."

But it'd be a mistake to brush off this development as a naïve attempt to automate what can't ever be automated. Artificial intelligence is not like jetpacks; this stuff really is heading our way. When people give computers feedback about what they're doing wrong, computers can learn from their mistakes. Maybe machines will never be able to give a David Foster Wallace essay an appropriate grade, as one of the commenters on the Times story said, but this software is already helping tens of thousands of students taking free online courses to write better. By instantly boosting students' scores when they improve their essays, by getting their competitive juices flowing the way that games do, these programs may be teaching the most important lesson about composition that there is: writing is rewriting.

The objection to automated essay grading, like the objection to the proliferation of MOOCs -- massive open online courses, many of them free, and many offered by some of the top universities in the country -- seems a tad aristocratic to me. There's some elitism lurking in the criticism that students who take these courses and have their essays graded by machine are missing out on a real education. It's as though a monopoly on the sorting and credentialing apparatus were being busted. Who's going to pay a gazillion dollars for college tuition, if you can learn to write (or special relativity, macroeconomics, digital sound design or "The Ancient Greek Hero") for free? There's no denying the thrill of learning, in person, from a great teacher; and the value of learning from peers; and the lifelong importance of the friends you make and the networks you develop in college. But the reality is that this experience is only available to a tiny fraction of the people who could benefit from it. Even if cost weren't an obstacle, we're nowhere near having enough places in high-end higher education to meet the demand. Automated grading and massive online learning may not deliver all that elite institutions provide, but that shouldn't stop us from using computers to improve what most learners can access.

Sure, it's unsettling to think that machines can accomplish things that once seemed impossible to automate. Computers are writing articles that do a decent job of delivering news. They recommend books, movies and music with the reliability of longtime pals. They drive cars, data-mine Shakespeare, recognize faces, read emotions, fix us up on dates. If a computer can grade an essay, why can't it conduct a job interview, evaluate the testimony in a trial, reply to your parents' texts, teach your puppy tricks?

When I was a student at Cambridge University, I sometimes heard faculty bemoan the change from the one-on-one student-teacher setup back in the day to the two- or three-to-one ratio we had to endure. That's right: the old Oxbridge normal, just you and your tutor reviewing your weekly essay over tea, had given way, under economic and social pressure, to -- the horror! -- mass education, meaning three or even four people around the fire. Somehow we survived.

Slurping ramen at a laptop is a world away from having your own mentor reduce your argument to rubble. So it's a small miracle that technology promises to make at least some online learning intellectually nourishing as well as, finally, widely available.

This is my column from The Jewish Journal of Greater Los Angeles.
