Susan Lang and Scott DeWitt

Exploratory Text Mining of Student Peer Reviews

Abstract: Despite its ongoing use in writing courses, peer review remains understudied and, perhaps, undervalued as a course component. Empirical studies in the late twentieth century focused on a single question: does peer review improve student writing? Researchers examined improved fluency and attitudes toward writing (Katstra, Tollefson, Gilbert; Karegianes, Pascarella, Pflaum; Beaven; Kantor); the role of anonymity in improving feedback (Graner; Brown); the influence of secondary-source citation and sophisticated prose style on how students read peers’ texts (Flynn); the role of teacher feedback strategies and persona in shaping students’ approach to peer review (DeWitt); and the effectiveness of training in improving students’ peer review abilities (Zhu). By the early twenty-first century, however, composition scholars had moved on to other topics, and a dearth of research on peer review prevails in the field today.

This presentation will discuss the use of various digital methods to examine two sets of students’ reviews of their peers’ writing, drawn from two different first-year writing curricula, in order to better understand 1) how these students responded to two different prompt scenarios, and 2) whether and how those responses demonstrate students’ perceived valuing of the peer review process. The first set of responses comes from a class-focused review assignment: students completed these reviews between submission of the preliminary and final drafts of two separate assignments. In the second set, students’ written assessments of their peers’ writing also came in the context of a class, but here students worked in teams to anonymously review the work of students in other sections for inclusion in a Webzine. The review assignments in both curricula were evaluated, and students were assigned a grade for each.

Using Provalis Research’s WordStat and QDA Miner, we’ll explore these student texts to consider such questions as:
• Do students differentiate between higher-order concerns (the production of a central argument) and surface-level concerns (craft and correctness) in the texts they are reading? If so, how?
• Do students prioritize higher-order or surface-level concerns in their reviews? If so, how?
• Do the individually written reviews differ significantly in content or craft from the collaboratively authored memos that reflect a group publication decision?

Finally, this presentation will demonstrate and discuss how researchers might approach similar corpora of materials.