Category Archives: Peer Review

Peer Review Reinforces Writing Instruction

For the past few years, I have been experimenting with peer review in my high school History classes. The California Council for the Social Studies recently published my article about this just as I was conducting a peer review activity with my 10th grade World History students. In the article, I detail using a computer program called PeerGrade, which is great, but can add several days to a lesson because students have to type their work, submit it, and then conduct several peer reviews.

This post will showcase some student work from a paper-based peer review activity completed in one class period. The essay was an argumentative task in which students had to state a position about eugenics and support it with evidence from 15 Minute History and the Eugenics Archive. Before writing the essay, students shared the evidence they had categorized on a Vee Diagram. The peer review worksheet I created can be accessed in this Google Doc.

In two 50-minute class periods of writing, my 10th-grade students produced an average of 361 words with six explanations of their evidence.

[Student work samples: "Eugenics - positive or negative" (four essays)]

I stole this list from one of my awesome ELA teachers, Mandy Arentoft, and will project it when giving students time to practice using transitions in their historical writing. Another great teacher, Keith Hart of Brunswick High School in Maine, has a helpful blog post on teaching students how to use transitional phrases to present their evidence.

A benefit of using peer review is that I get immediate feedback from my students that tells me who is applying the skills from my mini writing lessons. In this case, I clearly need to go back and re-teach the importance of including a creative title, writing a complex thesis, and using transitions. Fortunately, not everyone will need this instruction, and I can create more advanced writing lessons for them in my next station rotation activity. For more information about peer review, please look at this #TeachWriting Twitter archive on the topic. It has a wealth of resources for teachers looking to implement peer review in their classroom writing instruction.

Summarization Strategy w/Peer Review

It has been a few years since I wrote this primer, and I have moved from using SurveyMonkey, PollEverywhere, and Google Forms to Turnitin and PeerGrade. When I speak at conferences and talk to other high school history teachers about using peer review and conferencing with students about their writing, they look at me like I have three heads. On Tuesday night, May 15th at 6 p.m. PT, I am hosting a #TeachWriting chat on using peer review. I’m hoping to learn how comfortable secondary teachers are using peer review so their students can learn from evaluation.

This post will describe a timed summarization strategy that adapts what John Collins calls the 10% summary. In this activity, I give students a reading on a historical topic. They have 20-25 minutes to summarize it. Then they swap papers with an elbow partner and have ten minutes to read the partner’s summary and grade it according to a criteria chart. I have found it helpful to include exemplar summaries, or mentor texts, that demonstrate superior work.

During the first pass, my 10th-grade students wrote an average of 173 words. Their feedback was perfunctory and not helpful. See Figure 1.

[Figure 1: Summarization Strategies]

After some direct instruction and modeling, this student was able to improve their feedback using specific language from the criteria chart.  See Figure 2.

[Figure 2: Summarization Strategies]

These students need guided support when evaluating each other’s summaries. Focusing on simple-to-evaluate factors helps students become more successful. Since I know the word count of the original text I asked them to summarize, students can count the number of words they wrote and tell me whether or not they met the 10% rule. Next, I ask students to evaluate how well the author used their own words instead of copying directly from the text.


Lastly, we discuss the main ideas from the passage to determine whether or not the author was successful in listing and explaining them. This process can help students engage in content reading, build background knowledge, and learn from each other. It is an easy way for secondary Social Studies teachers to incorporate peer review into their everyday classroom instruction.
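The word-count check in this protocol is simple enough to sketch in a few lines of Python. This is only an illustration of the 10% rule as described above; the `tolerance` parameter and the function names are my own assumptions, not part of the activity.

```python
def words(text: str) -> int:
    """Count words the way students do on paper: split on whitespace."""
    return len(text.split())

def meets_ten_percent_rule(source_word_count: int, summary: str,
                           tolerance: float = 0.2) -> bool:
    """Return True if the summary's length falls within `tolerance` of
    10% of the original text's word count."""
    target = source_word_count * 0.10
    return abs(words(summary) - target) <= target * tolerance
```

For a 1,000-word reading, this accepts summaries of roughly 80-120 words; students can run the same check by hand with their own word counts.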

Measures of Effective Listening

Thirty-two years ago, Donald E. Powers wrote Considerations for Developing Measures of Speaking & Listening. It was published by the College Board, which acknowledges how important these measures are to a student’s academic success, particularly in its Advanced Placement programs, yet has not validated any standardized tests to measure these skills. This synthesis of some of the research on listening offers advice to teachers enrolled in our MOOC, Teaching Speaking & Listening Skills.

Research shows that students can listen two to three grade levels above what they can read. Listening while reading helps people have successful reading events, where they read with enjoyment and accuracy. Listening while reading has also been shown to help with decoding, a fundamental part of reading. The average person talks at a rate of about 125 to 175 words per minute, while we can listen to and comprehend up to 450 words per minute (Carver, Johnson, & Friedman, 1970).

Listening has been identified as one of the top skills employers seek in entry-level employees as well as in those being promoted. Even though most of us spend the majority of our day listening, it is the communication activity that receives the least instruction in school (Coakley & Wolvin, 1997). On average, viewers who had just watched and listened to the evening news could recall only 17.2% of the content.

Listening is critical to academic success. Conaway (1982) examined an entire freshman class of over 400 students, who were given a listening test at the beginning of their first semester. After their first year of college, 49% of the students who scored low on the listening test were on academic probation, compared with only 4.42% of those who scored high. Conversely, 68.5% of high scorers were considered honors students after the first year, while only 4.17% of low scorers attained the same success.

Students do not have a clear concept of listening as an active process that they can control. Students find it easier to criticize the speaker as opposed to the speaker’s message (Imhof, 1998). Students report greater listening comprehension when they use the metacognitive strategies of asking pre-questions, interest management, and elaboration strategies (Imhof, 2001). Listening and nonverbal communication training significantly influences multicultural sensitivity (Timm & Schroeder, 2000).

Understanding is the goal of listening. Our friend Erik Palmer suggests that before students engage in purposeful listening, their teachers should tell them what to attend to. We need to teach students what to respond to, how to respond, and when to respond. For example: “Today we are going to listen to five speeches. For each speech, we are only listening for LIFE. After each speaker finishes, clap, then take a minute to evaluate the level of passion they put into their speech. After that, write down three suggestions on how they could improve the LIFE in their speech (for instance, shifting the emphasis in ‘you stole my red hat’ from one word to another changes the feeling of the sentence).”

A classroom teacher who reads Powers’s (1984) College Board study will understand that speaking, listening, reading, and writing are all tightly correlated. Empirically measuring oral communication skills requires many hours of assessment on small, controlled populations. This is the opposite of what we experience in public schools, where it is not feasible for us to precisely measure each skill. The important takeaway is that teachers need to prepare their students to listen actively and avoid distractions, and to teach listening and speaking alongside core academic content by training students to evaluate how well their classmates accomplish various speaking functions. While there are reliability issues with classroom peer review models, the benefits of “learning by evaluation” far outweigh the negatives.


#TeachWriting Coaching Student Writers

Corbin Moore and I taught an online class called Improving Historical Reading and Writing over the summer. We learned that one of the major barriers to non-ELA teachers assigning writing in their classes is simply that they don’t feel comfortable providing feedback on that writing. They are also concerned about increasing their workload. Our experiences as classroom teachers have led us to include more writing in our daily practices. We hope this chat encourages other teachers to do the same.

Q1 With the recent emphasis on increasing writing in all subjects, how has your job as a teacher changed?
Goal-setting strategies are terrific. Here is a longer paper Scott wrote about using goal-setting strategies as formative assessment.
Shorter, more frequent, focused skill-building writing tasks show great promise in increasing positive attitudes toward writing. They can be graded quickly or used for peer review.
Q2 What is your definition of effective feedback?
This John Hattie article demonstrates that feedback has a strong effect on student learning. Unfortunately, that effect is not always positive.
Turnitin has done some extensive research on feedback and discovered a gap between teacher and student perceptions about what constitutes effective feedback.
Q3 What strategies/tools have you found valuable in providing feedback and/or peer review?
Google Docs
Rubrics/Criteria Charts
Q4 How is coaching student writers different from teaching writing? What are the advantages to coaching versus teaching writing?
Q5  What are the best writing tools, strategies, and frameworks for teaching writing and coaching students through the writing process?
Q6 What would happen if you stopped evaluating writing and switched to coaching?
Q7 How can teaching speaking and listening skills help improve student writing?
Extra Credit

Role of Robo-Readers


I have increased the amount of writing in my high school World History classes over the last five years. At first, I required two DBQs per semester; then I increased that to four DBQs per semester. Next, I added a five-page research paper at the end of the school year. Now, I assign research papers during each semester. If I were to allot ten minutes of reading/grading time for each DBQ, that would be 80 minutes of grading per student; multiplied by last year’s total student load of 197, that comes to 263 hours of reading and grading. Assuming I spent 30 minutes correcting each research paper, an additional 197 hours of grading would be added to my workload. Where do I find those extra 460 hours per year? Do I neglect my family and grade non-stop every weekend? No. I use a combination of robo-readers, or automated essay scoring (AES) tools, and structured peer review protocols to help my students improve their writing.
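The workload arithmetic above can be double-checked with a quick sketch. The figures come straight from the paragraph; the variable names are mine.

```python
students = 197          # last year's total student load
dbqs_per_student = 8    # four DBQs per semester, two semesters
minutes_per_dbq = 10    # reading/grading time per DBQ
papers_per_student = 2  # one research paper each semester
minutes_per_paper = 30  # correction time per research paper

# Convert per-student minutes into total hours of grading per year.
dbq_hours = students * dbqs_per_student * minutes_per_dbq / 60
paper_hours = students * papers_per_student * minutes_per_paper / 60
total_hours = dbq_hours + paper_hours

print(round(dbq_hours), round(paper_hours), round(total_hours))  # 263 197 460
```

Eight DBQs and two papers per student per year is the assumption that makes the 263-, 197-, and 460-hour figures line up.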

Hemingway App

As AES has matured, myriad programs that are free to educators have proliferated. Grammarly claims to find and correct ten times more mistakes than a word processor. The Hemingway App makes writing bold and clear. PaperRater offers feedback by comparing a writer’s work to others at their grade level. It ranks each paper on a percentile scale examining originality, grammar, spelling, phrasing, transitions, academic vocabulary, voice, and style. Then it provides students with an overall grade. My students use this trio of tools to improve their writing before I ever look at it.


David Salmanson, a fellow history teacher and scholar, questioned my reliance on technology. The purpose of these back-and-forth posts is to elaborate on the continuum of use that robo-readers may develop in the K-12 ecosystem. Murphy-Paul argues that a non-judgmental computer may motivate students to try, to fail, and to improve more than almost any human. Research on a program called e-rater confirmed this, finding that students using it wrote almost three times as many words as their peers who did not. Perelman rebuts this by pointing out that robo-graders do not score by understanding meaning but by gross measures, especially length and pretentious language. He feels students should not be graded by machines making faulty assumptions with proprietary algorithms.

Both of these writers make excellent points. However, classroom teachers, especially those of us in low-SES public schools, are going to have a difficult time improving our discipline-specific writing instruction and increasing the amount of writing assigned, not to mention providing feedback that motivates students to revise their work prior to a final evaluation. We will need to find an appropriate balance between computerized and human feedback for our students.

Mayfield maintains that automated assessment changes the locus of control, making students enlist the teacher as an ally to help them address the feedback from the computer. I have found that students in my class reluctantly revise their writing per the advice of a robo-reader, but real growth happens when students discuss in small groups what works and what doesn’t. Asking students to write a revision memo detailing the changes they have made in each new draft helps them see writing as an iterative process instead of a one-and-done assignment.

Read David’s post and participate in our #sschat on this topic on July 13th at 7pm ET/4pm PT.


Student Perceptions of Writing Feedback

What Students Say About Instructor Feedback was a 2013 study that examined student perceptions of instructor feedback delivered through the Turnitin platform. Students wanted timely feedback but rarely received it: 28 percent of respondents reported that their instructors took 13 or more days to provide feedback on their papers. Students preferred feedback about grammar, mechanics, composition, and structure, and they found feedback on thesis development valuable. Despite high rates of electronic submission, students did not report receiving electronic feedback at nearly the same rate.

[Image: QuickMark Categories]

From The Margins analyzed nearly 30 million marks left on student papers submitted to Turnitin’s service between January 2010 and May 2012. QuickMark comments are a preloaded set of 76 comments covering four categories that instructors can drag and drop onto students’ papers within the Turnitin online grading platform.

This study looked specifically at frequencies and trends teachers employed when providing margin comments. The top 15 are listed below.

[Chart: Top 10 QuickMarks]

This 2014 follow-up study discovered that students found face-to-face feedback highly effective: 77 percent of students viewed face-to-face comments as “very” or “extremely effective,” but only 30 percent received face-to-face feedback “very” or “extremely often.”

Students perceived general comments on their writing to be “very” or “extremely effective.” However, a smaller percentage of educators felt the same. Even though 68 percent of students reported receiving general comments “very” or “extremely often,” and 67 percent of students said this feedback type was “very” or “extremely effective,” only one-third of educators viewed general comments as “very” or “extremely effective.”

Students preferred “suggestions for improvement” over “praise or discouragement.” The greatest percentage of students found suggestions for improvement “very” or “extremely effective,” while the smallest percentage said the same for “praise or discouragement.”

Students and educators differed on what constituted effective feedback. The gap between educators and students was greater than 15 percent in the majority of areas examined. The biggest differences between educator and student responses occurred with “general, overall comments about the paper” and “specific notes written in the margins.”


Comments recorded as voice or video may be a time-saving substitute for face-to-face feedback. Only five percent of student respondents reported receiving voice or video comments “very” or “extremely often,” compared with the 30 percent who reported receiving face-to-face feedback at that frequency. As a way to negotiate time pressures and still provide more personalized feedback, educators might consider leveraging recorded voice or video comments on student work. Many grading platforms and learning management systems (LMS) offer this feature as part of their services.

This study identified a clear relationship between exposure to feedback and perceived effectiveness of feedback: the more often students receive a type of feedback, the more valuable they perceive it to be. Thus, it is imperative to provide students with different types of feedback and evaluate what is helpful for them. Teachers should discuss the types of feedback they typically provide to their classes, then ask students to share what types they have found most helpful.

The definition of “effective feedback” will vary within your course. Poll your class to find out what types of feedback students think would improve their writing.



CCSS Presentation Materials

Goal-setting approaches to student writing

I wrote a paper on the results of implementing this program at two high schools. I give this presentation to inspire teachers to consider alternative grading methods and to increase the number of writing assignments they require of their students. I have found that over the course of the year my students can double, if not triple, the number of words they put on a page in one class period. The next trick is to partner with an English teacher, who can help them turn the quantity they are now proficient in into quality writing. I have found that this level of competition really motivates students. This work has borrowed heavily from Chip Brady and the excellent curriculum at The DBQ Project, who provided inspiring professional development and encouraged me along the way.

Peer review with tech

Many high-quality studies influenced my decision to start evaluating student writing quantitatively: De La Paz (2005), De La Paz and Felton (2010), Monte-Sano (2008, 2011), and Monte-Sano and De La Paz (2012). I strongly feel that History/Social Science departments should report descriptive statistics about their students’ writing in order to derive a common set of writing expectations by age and grade level. Further, recent advances in automated essay scoring may make it possible for students to receive feedback from a computer before approaching the teacher to partner in improving the writing together. See this Lightside Labs Revision Assistant video, and feel free to expand on this annotated bibliography tracking the major players in the automated essay scoring market. K12 teachers should provide input to the companies developing these products, and the lefty-Liberal in me hopes all of these products will eventually be open source, like the PaperRater product that my students recently used on a speech project.

Peer review without tech

Most of the work I reference here came from O’Toole (2013), Brookhart (2013), and Bardine and Fulton (2008). Learning by evaluation has long been used by English teachers; it is time for history teachers to embrace the practice. If the CCSS are truly able to get us off the breadth-versus-depth historical coverage treadmill, History/Social Studies teachers are going to need tools and strategies to assess the writing they assign. Having students read each other’s writing gives them much-needed context. Before I wrote my dissertation, I read dozens of others on the same subject. History teachers will need to learn how to use mentor texts and provide general feedback instead of making margin notations on every paper they receive. English teachers have used peer rubrics and criteria charts to help students with their writing. It is time for history teachers to start incorporating those tools into their classrooms.

CCSS Presentation Resources

Free Plagiarism Tools

Robo-Readers are Better than Human Readers

Flunk the Robo-Readers

Where Does Automated Essay Scoring Belong in K12 Education

Free Robo-Graders

Hemingway App

Revision Assistant

Paper Rater

Peer Review Tools

LDC Rubrics

Sample Online Rubric

Criteria Chart

Economic Systems Criteria Chart

Peer Review Discussion Guide

Economic Systems Argumentative Rubric

Providing Effective Feedback

Video Tutorials

Changing Weak Writing into Strong Writing