#sschat on Robo-Readers

Rise of The Robo-Readers

July 13 @ 4pm PST/7pm EST #sschat
co-mods: @scottmpetri & @DavidSalmanson

A primer on auto essay scoring

https://historyrewriter.com/2015/07/03/role-of-robo-readers/

Q1 What is your definition of AES, robo-reading, or robo-grading? #sschat

Q2 What is your greatest hope and/or worst fear about technology-assisted grading? #sschat

Q3 When is it OK for a computer to assign grades to student work? #sschat

Q4 How can classroom teachers test & evaluate a robograder without disrupting learning? #sschat

Q5 What would parents think if Ts required Ss to use robo-graders before submitting work? #sschat

Q6 What would school admins say if you used a robograder in your classes? #sschat

Q7 How would you use a robograder in your History-Social Science class? #sschat

Q8 How could robo-readers help teachers gamify the art and process of writing? #sschat

Shameless plug: https://www.canvas.net/browse/ncss/courses/improving-historical-writing has a module on writing feedback & AES. The course is free and open until Sept. 22. #sschat

Teaser Tweets (to promote the chat after Monday – 7/6).

Are robo-graders the future of assessment or worse than useless? http://wp.me/4SfVS #sschat

Robo-readers are called Automated Essay Scorers (AES) in education research. http://wp.me/4SfVS  #sschat

In one study, Ss using a robo-reader wrote 3X as many words as Ss not using the RR. http://wp.me/4SfVS #sschat

Robo-readers produce a change in Ss behavior from never revising to 100% revising. http://wp.me/4SfVS #sschat

Criticism from a human instructor has a negative effect on students’ attitudes about revisions. http://wp.me/4SfVS #sschat

Comments from the robo-reader produced overwhelmingly positive feelings for student writers. http://wp.me/4SfVS #sschat

Computer feedback stimulates reflectiveness in students, something instructors don’t always do. http://wp.me/4SfVS #sschat

Robo-graders are able to match human scores simply by over-valuing length compared to human readers. http://wp.me/4SfVS #sschat

None of the major testing companies allow open-ended demonstrations of their robo-graders. http://wp.me/4SfVS #sschat

Toasters sold at Walmart have more gov. oversight than robo-readers grading high stakes tests. http://wp.me/4SfVS #sschat

What is the difference between a robo-reader & a robo-grader? http://wp.me/4SfVS #sschat

To join the video chat, follow @ImpHRW and sign in at www.Nurph.com, then enter the #ImpHRW channel. Note that you will still need to add #sschat to your tweets.

Resources

https://www.grammarly.com/1

http://www.hemingwayapp.com/

http://paperrater.com/

http://elearningindustry.com/top-10-free-plagiarism-detection-tools-for-teachers

http://hechingerreport.org/content/robo-readers-arent-good-human-readers-theyre-better_17021/

http://www.bostonglobe.com/opinion/2014/04/30/standardized-test-robo-graders-flunk/xYxc4fJPzDr42wlK6HETpO/story.html#

http://www.newscientist.com/article/mg21128285.200-automated-marking-takes-teachers-out-of-the-loop.html#.VZYoNEZZVed

Promo Video for a forthcoming Turnitin.com product

https://www.youtube.com/watch?v=aMiB4TApZa8

A longer paper by Shermis & Hamner

www.scoreright.org/NCME_2012_Paper3_29_12.pdf

Perelman’s full-length critique of Shermis & Hamner

http://www.journalofwritingassessment.org/article.php?article=69

If you are really a hard-core stats & edu-research nerd

http://www.journalofwritingassessment.org/article.php?article=65

https://www.ets.org/research/policy_research_reports/publications/periodical/2013/jpdd

http://eric.ed.gov/?q=source%3a%22Applied+Measurement+in+Education%22&id=EJ1056804

National Council of Teachers of English Statement

http://www.ncte.org/positions/statements/machine_scoring

For Further Research

Williamson, D. M., Xi, X., & Breyer, F. J. (2012). A framework for evaluation and use of automated scoring. Educational Measurement: Issues and Practice, 31(1), 2-13.

Role of Robo-Readers

Grammarly

I have increased the amount of writing in my high school World History classes over the last five years. At first, I required two DBQs per semester; then I increased that to four DBQs per semester. Next, I added a five-page research paper at the end of the school year. Now, I assign a research paper each semester. If I were to allot ten minutes of reading/grading time for each DBQ, that would be 80 minutes of grading per student; multiplied by last year’s total student load of 197, that comes to 263 hours of reading and grading. Assuming I spent 30 minutes correcting each research paper, an additional 197 hours of grading would be added to my workload. Where do I find those extra 460 hours per year? Do I neglect my family and grade non-stop every weekend? No. I use a combination of robo-readers, or automated essay scoring (AES) tools, and structured peer-review protocols to help my students improve their writing.
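For readers who want to check the math, here is a minimal back-of-the-envelope sketch in Python (the variable names are mine; the per-essay time estimates are the assumptions stated above):

```python
# Back-of-the-envelope grading workload, using the figures in the paragraph above.
students = 197           # last year's total student load
dbqs_per_year = 8        # four DBQs per semester, two semesters
min_per_dbq = 10         # estimated reading/grading time per DBQ
papers_per_year = 2      # one research paper per semester
min_per_paper = 30       # estimated grading time per research paper

dbq_hours = students * dbqs_per_year * min_per_dbq / 60        # ~263 hours
paper_hours = students * papers_per_year * min_per_paper / 60  # 197 hours
print(f"DBQs: {dbq_hours:.0f} h, papers: {paper_hours:.0f} h, "
      f"total: {dbq_hours + paper_hours:.0f} h")
```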

Hemingway App

As AES has matured, a myriad of free programs has become available to educators. Grammarly claims to find and correct ten times more mistakes than a word processor. The Hemingway App makes writing bold and clear. PaperRater offers feedback by comparing a writer’s work to that of others at the same grade level; it ranks each paper on a percentile scale, examining originality, grammar, spelling, phrasing, transitions, academic vocabulary, voice, and style, and then provides students with an overall grade. My students use this trio of tools to improve their writing before I ever look at it.

PaperRater

David Salmanson, a fellow history teacher and scholar, questioned my reliance on technology. The purpose of these back-and-forth posts is to elaborate on the continuum of use that robo-readers may develop in the K-12 ecosystem. Murphy-Paul argues that a non-judgmental computer may motivate students to try, to fail, and to improve more than almost any human could. Research on a program called e-rater supports this claim: students using it wrote almost three times as many words as their peers who did not. Perelman rebuts this by pointing out that robo-graders do not score by understanding meaning but by gross measures, especially length and pretentious vocabulary. He feels students should not be graded by machines making faulty assumptions with proprietary algorithms.

Both of these writers make excellent points. However, classroom teachers, especially those of us in low-SES public schools, will have a difficult time improving discipline-specific writing instruction, increasing the amount of writing assigned, and providing feedback that motivates students to revise their work prior to a final evaluation. We will need to find an appropriate balance between computerized and human feedback for our students.

Mayfield maintains that automated assessment changes the locus of control, prompting students to enlist the teacher as an ally in addressing the computer’s feedback. I have found that students in my class reluctantly revise their writing per the advice of a robo-reader, but real growth happens when students discuss in small groups what works and what doesn’t. Asking students to write a revision memo detailing the changes they have made in each new draft helps them see writing as an iterative process instead of a one-and-done assignment.

Read David’s post and participate in our #sschat on this topic on July 13th at 7pm EST/4pm PST.
