Making Text Analytics Accessible to Writing Faculty
William Marcellino (Frederick S. Pardee RAND Graduate School, US)
Abstract:
Ever-cheaper computing power, along with increasingly sophisticated statistical/machine learning approaches to text, offers a potential revolution in writing instruction and assessment. We can efficiently mine large corpora of genre and disciplinary examples to extract their defining content and functional features, and then concretely visualize those features in student writing moves. Writing instruction and assessment remain primarily a human-only art, but they could be transformed into a more data-driven practice, with context-rich human analytical attention leveraged by machine means. Enabling this jump requires three things.
First, writing instruction as a field needs a workable consensus on the relationship between humans and machines: beyond machines-as-labor-threats, what does a student-centered, fruitful union of human and machine analysis look like? Second, effective practice requires a synthesis of disciplinary approaches: writing instruction/assessment must borrow methods and technology from corpus linguistics, digital rhetorics, computer science, and machine learning. Finally, these methods and technologies must be made broadly accessible: analytics and machine learning need to be accessible to the predominantly humanities-trained base of writing instructors, not just to a few cross-trained practitioners with a foot in another discipline.
I’ll illustrate using RAND-Lex, a text analytics and machine learning tool suite developed at the RAND Corporation. Through a pilot effort at the University of South Florida, RAND-Lex is making scalable analytics accessible for both writing instruction and digital humanities. Of particular interest to this audience may be “stance comparison”: the use of corpus-based analytics to detect the lexicogrammatical (style and stance) features that characterize genre and disciplinary writing, in order to relate those features to student writing.
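What such corpus-based comparison can look like in practice is sketched below. This is an illustrative example only, not RAND-Lex’s actual implementation: it computes log-likelihood keyness, a standard corpus-linguistics measure, to surface the words that most strongly distinguish a target corpus (e.g., disciplinary exemplars) from a reference corpus (e.g., student drafts). The toy corpora are invented.

```python
# Illustrative sketch only -- not RAND-Lex's implementation.
# Log-likelihood keyness: which words are over-represented in a target
# corpus relative to a reference corpus?
import math
from collections import Counter

def keyness(target_tokens, reference_tokens):
    t_counts, r_counts = Counter(target_tokens), Counter(reference_tokens)
    t_total, r_total = sum(t_counts.values()), sum(r_counts.values())
    scores = {}
    for word, a in t_counts.items():
        b = r_counts.get(word, 0)
        # Expected frequencies under the null hypothesis of equal use
        e1 = t_total * (a + b) / (t_total + r_total)
        e2 = r_total * (a + b) / (t_total + r_total)
        ll = 2 * (a * math.log(a / e1) + (b * math.log(b / e2) if b else 0))
        scores[word] = ll
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage with toy "corpora" (in practice: tokenized genre
# exemplars versus student writing).
disciplinary = "we argue that the data suggest a robust effect".split()
student = "i think that maybe the results are kind of true".split()
print(keyness(disciplinary, student)[:5])
```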
Between Frontend and Backend Challenges: Connecting Tool Development with Writing Analytics
Otto Kruse (Zurich University of Applied Sciences, Switzerland) and Christian Rapp (Zurich University of Applied Sciences, Switzerland)
Abstract:
Since the introduction of the first word processors in the 1980s, writing technology has developed rapidly, absorbing several generations of subsequent innovations such as networks, mobile computing, the internet, and social media, each of them posing new challenges for the teaching of writing. Cloud computing has recently given rise to a new generation of writing tools and writing/learning environments that are scalable, allow fine-grained tracking of user data, and integrate technologies from computational and corpus linguistics. We will look briefly at the significance of writing analytics for the teaching of writing, and then demonstrate, from our own work, what this means for tool development. Ensuring that such data can be safely acquired and evaluated is already a matter of planning and construction. But for what purpose? Data may provide feedback for the writer, the instructor, the institution, or the tool’s developers, or may be used purely for academic research. Before data is collected, issues of data protection and privacy have to be resolved, a subject that requires different solutions in different countries. Is it permissible for developers to access and read the papers authored by their users? And when data is used for purposes such as single-case evaluation, usability research, or cross-sectional study, which code of academic ethics applies? Legal issues may be secondary in the early stages of tool development, but they are of vital importance for dissemination studies and for the practical implications of commercial application.
This presentation aims to illustrate the connection between frontend and backend development through experiences gained in developing Thesis Writer (TW), a writing environment we created to support writers, supervisors, and institutions in higher education (see https://thesiswriter.zhaw.ch/de/). We offer a brief look at the frontend functionality as well as a more comprehensive overview of how data is collected and subsequently processed at the backend. We will describe preparations for disseminating TW at universities across three countries and discuss attempts to resolve the issues of data protection and privacy. Furthermore, we will outline solutions for offering evaluative data to the benefit of the users. Even though TW is already live and in use, research and development are ongoing; rather than reporting established results, we will offer insights into present-day design and decision processes.
Papers
What is in the assessor’s mind? The merits of Machine Learning to code decision statements when doing pairwise comparisons
Sven De Maeyer (University of Antwerp, Belgium), Renske Bouwer (University of Antwerp, Belgium) and Marije Lesterhuis (University of Antwerp, Belgium)
Abstract:
Comparative judgement (CJ) is increasingly used as a methodology to assess text quality. In CJ, assessors receive randomly composed pairs of texts and have to indicate which one is better. This method is a promising alternative to analytic rubrics, yielding highly reliable scores (Pollitt, 2012). But our understanding of the validity of the results remains limited.
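How pairwise “which text is better?” judgements are turned into scores is not spelled out here, but CJ studies in this tradition typically fit a Bradley-Terry-style model in which each text receives a quality parameter and the probability that one text beats another depends on the difference between their parameters. The sketch below is a minimal, hypothetical illustration under that assumption (the comparison data and the mean-centering penalty are invented), not the assessment software used in the study.

```python
# Minimal, hypothetical Bradley-Terry sketch: estimate a quality score
# per text from pairwise "winner beats loser" judgements.
import numpy as np
from scipy.optimize import minimize

# Invented judgements over 4 texts: (winner_index, loser_index)
comparisons = [(0, 1), (1, 0), (0, 2), (2, 1), (3, 0), (1, 3), (3, 2), (0, 3)]
n_texts = 4

def neg_log_likelihood(theta):
    # P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j))
    ll = sum(theta[w] - np.logaddexp(theta[w], theta[l]) for w, l in comparisons)
    return -ll

# Penalise the mean to fix the scale's origin (the model is only
# identified up to an additive constant), then optimise.
fit = minimize(lambda t: neg_log_likelihood(t) + t.mean() ** 2,
               x0=np.zeros(n_texts))
print(fit.x)  # one estimated quality parameter per text
```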
Rich information on the validity of CJ can be obtained by analysing assessors’ statements on why they chose one text over the other. These decision statements offer opportunities to generate feedback for the writers of the texts (which aspects have assessors taken into account?), but also for the assessors themselves (which aspects did you take into account when judging?). The main bottleneck to using these decision statements in CJ implementations is their manual coding, a tedious and time-consuming task.
In this study we explore how well different machine learning (ML) algorithms reproduce the coding of decision statements. We used 2,599 decision statements from a CJ assessment in which 64 assessors assessed 405 argumentative texts. These decision statements were manually coded on 7 aspects of text quality. We compared three types of ML algorithm (‘k-nearest neighbours’, ‘decision tree’ and ‘support vector machine’) on the accuracy with which they replicated the manual coding. The results are very promising: a ‘support vector machine’ algorithm results in accuracy measures ranging from .95 to .99. In this presentation we will discuss the opportunities and caveats of using machine learning in this context.
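As a concrete illustration of the kind of comparison reported here (the authors’ actual features, preprocessing, and data are not described in the abstract), the sketch below trains the same three classifier families on TF-IDF features of a few invented decision statements and compares their cross-validated accuracy with scikit-learn.

```python
# Illustrative only: invented decision statements and codes, not the
# study's data. Compares kNN, decision tree and SVM on TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

statements = [
    "Text A is structured more clearly and is easier to follow.",
    "Text A signposts its paragraphs better than text B.",
    "Text B develops its argument with stronger evidence.",
    "The argumentation in text A is more convincing.",
    "Spelling and grammar are noticeably weaker in text B.",
    "Text B contains many language errors.",
]
codes = ["structure", "structure",
         "argumentation", "argumentation",
         "language", "language"]

models = {
    "k-nearest neighbours": KNeighborsClassifier(n_neighbors=1),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "support vector machine": SVC(kernel="linear"),
}
for name, model in models.items():
    pipeline = make_pipeline(TfidfVectorizer(), model)
    accuracy = cross_val_score(pipeline, statements, codes, cv=2)
    print(f"{name}: mean accuracy {accuracy.mean():.2f}")
```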
Pollitt, A. (2012). Comparative judgement for assessment. International Journal of Technology and Design Education, 22(2), 157-170. https://doi.org/10.1007/s10798-011-9189-x
Mining Negotiation: Using Writing Analytics to Understand Decision-Making and Consensus-Building in Student Peer Review
Susan Lang (Ohio State University, US) and Scott Lloyd Dewitt (Ohio State University, US)
Abstract:
In his landmark 1984 College English article, “Collaborative Learning and the ‘Conversation of Mankind’,” Kenneth Bruffee sought to ground collaborative learning in composition studies, drawing considerable attention to peer review, the task of asking students to read and respond to each other’s writing. Starting from the premise that “understanding . . . the complex ideas that underlie collaborative learning can improve its practice and demonstrate its educational value” (546), Bruffee outlined three areas of student learning that he asked composition curriculum specialists to take note of:
Collaboration in conversation (550).
Collaboration in authentic social contexts (551).
Collaboration in establishing knowledge (555).
Consensus is often cited as a key outcome of the complex processes of a collaborative learning situation. John Trimbur argues that students who truly attempt to reach consensus are those who realize that they need “to take their ideas seriously, to fight for them, and to modify or revise them in light of others’ ideas” (Wiener 55). In other words, collaborative learning is “intellectual negotiation,” not merely each student doing his or her part to add to the completed project (Wiener 55). Further, Trimbur complicates consensus by defining it in relation to Habermasian notions of dissensus: “The consensus that we ask students to reach in the collaborative classroom will be based not so much on collective agreements as on collective explanations of how people differ, where their differences come from, and whether they can live and work together with these differences. . . . Consensus does not appear as the end or the explanation of the conversation but instead as a means of transforming it” (610-12).
This presentation picks up questions we raised at the Writing Analytics Conference in St. Petersburg, FL, in January 2018 about the ways in which negotiation, decision-making, and consensus-building are represented in students’ peer review. We are mining and analyzing data from a large corpus of student writing in which students participate in an inter-section, cross-campus, anonymous peer review of student manuscripts submitted for publication on a webzine. While reviewing manuscripts, students compose substantial review memos that evaluate the strengths and weaknesses of the writing. The review process is divided into two steps. First, students write detailed, individual review memos that argue for one of three publication decisions: Accept with Minor Revisions, Revise and Resubmit, or Reject. Second, students work in groups of at least three and come to consensus on their publication decision; they write a group memo that reflects that conversation.
We ask the following questions about this corpus of student peer review:
· In their review memos, how do students represent consensus-building and intellectual negotiation? In their review memos, do students represent dissensus?
· In their review memos, what are the similarities and differences between individually authored memos that reflect a single reviewer’s decision and collaboratively authored memos that reflect a group decision?
Using Provalis Research’s WordStat and QDA Miner, we are engaged in a four-tiered mining and analysis of our corpus:
1. We will mine the corpus for review portfolios (3+ individual memos and a collaborative memo) where there is high discrepancy between individual reviewers’ decisions and the group decision (see the sketch following this list).
2. We will examine/code those portfolios for language that represents decision-making processes, negotiation, and consensus-building.
3. We will return to the entire corpus to look for examples of words/word strings/passages where students represent decision-making, negotiation, and consensus-building.
4. We will attempt to determine where students are effective and ineffective at representing decision-making, negotiation, and consensus-building, and we will propose implications for the teaching of writing.
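As a hypothetical illustration of the first tier, the sketch below flags portfolios in which individual reviewers’ publication decisions diverge from the group decision. The records, field names, and threshold are invented; the actual mining is carried out in WordStat and QDA Miner.

```python
# Hypothetical tier-1 sketch: flag review portfolios whose individual
# decisions disagree with the group decision. All data here is invented.

def discrepancy(individual_decisions, group_decision):
    """Share of individual reviewers who disagreed with the group decision."""
    disagreements = sum(1 for d in individual_decisions if d != group_decision)
    return disagreements / len(individual_decisions)

portfolios = [
    {"id": "ms-001",
     "individual": ["Reject", "Revise and Resubmit", "Reject"],
     "group": "Accept with Minor Revisions"},
    {"id": "ms-002",
     "individual": ["Revise and Resubmit"] * 3,
     "group": "Revise and Resubmit"},
]

# Keep portfolios where at least half of the reviewers disagreed.
high_discrepancy = [p["id"] for p in portfolios
                    if discrepancy(p["individual"], p["group"]) >= 0.5]
print(high_discrepancy)  # ['ms-001']
```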