Researchers for this project will develop and validate an automated assessment of students' analytic writing skills in response to reading text. In prior work, the researchers studied an assessment of students' analytic writing both to understand progress toward outcomes in the English Language Arts Common Core State Standards and to understand effective writing instruction by teachers. The researchers focused on response-to-text assessment because it is an essential skill for secondary and post-secondary success; current assessments typically examine writing outside of responding to text; and increased attention to analytic writing in schools will result in improved interventions. Recent advances in artificial intelligence offer a potential way forward: automated scoring of students' analytic writing at scale, paired with feedback to improve both student writing and teachers' instruction.
Funding: IES Award (with Rip Correnti and Lindsay Clare Matsumura)
Writing and revising are essential parts of learning, yet many college students graduate without demonstrating improvement or mastery
of academic writing. This project explores the feasibility of improving students' academic writing through a revision environment
that integrates natural language processing methods, best practices in data visualization and user interfaces, and current pedagogical theories.
First, to collect data on students' revision behaviors, a series of experiments is conducted to study interactions between students and variations of the revision writing environment.
Second, the collected data forms the gold standard for developing an end-to-end system that automatically extracts revisions
between student drafts and identifies the goal of each revision. The writing environment is iteratively refined, with interface prototyping informed by frequent user studies. Third, a complete end-to-end system integrating the most successful component models is deployed in college-level writing classes.
Funding: NSF Award (with
Amanda Godley and
In this project, the researchers will refine an existing mobile application, CourseMIRROR, for use in postsecondary STEM lecture courses. This application aims to improve deep learning by encouraging students to reflect on course content and receive immediate feedback on their reflections. Often, in large lecture courses, students' ability to reflect on course content and get feedback on these reflections is limited by class size and instructor availability. At the same time, instructors often don't have access to students' reflections, so they cannot correct misunderstandings or build on class knowledge. By leveraging natural language processing and mobile learning technologies, CourseMIRROR aims to overcome these barriers and help students and instructors gain insights into what was or was not learned.
Funding: IES Award
(with Muhsin Meneske and Ala Samarapungavan (Purdue))
Collaborative argumentation, or the building of evidence-based, reasoned knowledge and solutions through dialogue, is essential to individual learning as well as group problem-solving. Student-centered discussions and elaborated student talk during collaborative argumentation are indicators of robust learning opportunities in STEM and other disciplines. Furthermore, the ability to engage in collaborative problem-solving is a foundational skill in STEM fields and a defining characteristic of 21st-century workplaces, especially in technology and engineering, yet employers report that few recent hires possess it. However, teaching collaborative argumentation is an advanced skill that many high school teachers struggle to develop. We aim to develop an innovative technology called Discussion Tracker, a web-based system that leverages recent advances in human language technologies (HLT) to provide teachers with automatically generated data about the quality of students' collaborative argumentation in their classrooms and to support teachers' learning about collaborative argumentation.
In this project, we are interested in positioning the robot as a tutee that one or more human learners teach about a subject domain using spoken dialogue. The project builds on existing learning sciences work in the research areas of teachable agents, human-robot interaction, spoken dialogue interaction, and collaborative learning by examining a more complex scenario: multiple students collaborating with a robot through spoken interaction.
Funding: Internal LRDC grant (with Erin Walker and Tim Nokes-Malach)