Role of Robo-Readers


I have increased the amount of writing in my high school World History classes over the last five years. At first, I required two DBQs per semester; then I increased that to four DBQs per semester. Next, I added a five-page research paper at the end of the school year. Now, I assign a research paper each semester. If I were to allot ten minutes of reading/grading time for each DBQ, that would be 80 minutes of grading per student, multiplied by last year’s student load of 197 for a total of 263 hours of reading and grading. Assuming I spent 30 minutes correcting each research paper, an additional 197 hours of grading would be added to my workload. Where do I find those extra 460 hours per year? Do I neglect my family and grade non-stop every weekend? No. I use a combination of robo-readers, or automated essay scoring (AES) tools, and structured peer-review protocols to help my students improve their writing.
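For readers who want to check the arithmetic, here is a quick sketch of the workload calculation using the numbers above (eight DBQs per year at ten minutes each, two research papers per year at thirty minutes each, 197 students):

```python
# Grading-time arithmetic from the paragraph above.
DBQS_PER_YEAR = 8        # four DBQs per semester, two semesters
MIN_PER_DBQ = 10         # minutes of reading/grading per DBQ
PAPERS_PER_YEAR = 2      # one research paper per semester
MIN_PER_PAPER = 30       # minutes of grading per paper
STUDENTS = 197           # last year's total student load

dbq_hours = DBQS_PER_YEAR * MIN_PER_DBQ * STUDENTS / 60      # ~263 hours
paper_hours = PAPERS_PER_YEAR * MIN_PER_PAPER * STUDENTS / 60  # 197 hours
total_hours = dbq_hours + paper_hours                          # ~460 hours

print(round(dbq_hours), round(paper_hours), round(total_hours))
```

Running this confirms the roughly 460 extra hours of grading per year that the post cites.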

Hemingway App

As AES has matured, myriad programs that are free to educators have proliferated. Grammarly claims to find and correct ten times more mistakes than a word processor. The Hemingway App makes writing bold and clear. PaperRater offers feedback by comparing a writer’s work to others at their grade level. It ranks each paper on a percentile scale examining originality, grammar, spelling, phrasing, transitions, academic vocabulary, voice, and style. Then it provides students with an overall grade. My students use this trio of tools to improve their writing before I ever look at it.


David Salmanson, a fellow history teacher and scholar, questioned my reliance on technology. The purpose of these back-and-forth posts is to elaborate on the continuum of use that robo-readers may develop in the K-12 ecosystem. Murphy Paul argues that a non-judgmental computer may motivate students to try, to fail, and to improve more than almost any human. Research on a program called e-rater confirmed this, finding that students using it wrote almost three times as many words as their peers who did not. Perelman rebuts this by pointing out that robo-graders do not score by understanding meaning but by gross measures, especially length and pretentious language. He feels students should not be graded by machines making faulty assumptions with proprietary algorithms.

Both of these writers make excellent points. However, classroom teachers, especially those of us in low-SES public schools, are going to have a difficult time improving our discipline-specific writing instruction, increasing the amount of writing we assign, and providing feedback that motivates students to revise their work prior to a final evaluation. We will need to find an appropriate balance between computerized and human feedback for our students.

Mayfield maintains that automated assessment changes the locus of control, leading students to enlist the teacher as an ally in addressing the feedback from the computer. I have found that students in my class reluctantly revise their writing per the advice of a robo-reader, but real growth happens when students discuss in small groups what works and what doesn’t. Asking students to write a revision memo detailing the changes they have made in each new draft helps them see writing as an iterative process instead of a one-and-done assignment.

Read David’s post and participate in our #sschat on this topic on July 13th at 7pm EST/4pm PST.