Project Activities
The researchers will iteratively develop artificial intelligence (AI) supported classroom assessments and refine them through a usability study and two rounds of classroom studies (pilot and field studies). These assessments will delineate the various reasoning patterns in students' arguments. The researchers will also expand the AI models to capture intended meaning across a broad range of linguistic features, including those of English learners (ELs), when students are engaged in scientific argumentation activities. The team will conduct a series of validation studies to investigate the cognitive, inferential, and instructional validity of the AI-supported assessments of student argumentation.
Structured Abstract
Setting
The research will take place in middle schools in California, South Carolina, and New Jersey. The researchers will select schools with significant EL student populations to participate in the pilot and field studies.
Sample
Approximately 60 grade 6 to 8 science classrooms will participate in the pilot and field studies. Participating schools will have at least 25 percent minority students and 25 percent low-income students.
Assessment
The assessments developed in this project will measure students' scientific argumentation competence, applying natural language processing to identify key argumentation components. The researchers will use an evidence-centered design approach to support the validity of the assessments.
Research design and methods
The researchers will develop the assessment tool through two parallel strands of iterative development, feedback, and refinement. The first strand focuses on developing and refining the AI-based tool; the second focuses on developing assessment tasks that allow students to demonstrate their argumentation skills in science practices.

The researchers will first collect cognitive validity evidence through a cognitive lab study with 50 middle school students (including 20 ELs), gathering user-experience data and conducting student and teacher interviews to evaluate the design of the intervention, explore how well the AI-based tool classifies reasoning patterns, and understand how students respond to the AI feedback. To investigate inferential validity, the researchers will conduct two rounds of classroom studies (pilot and field studies) to explore how teachers and students use the tool and to determine how reasoning patterns change when students engage with tasks that include AI feedback tools and teacher dashboards. The pilot study will include 2 or 3 middle school teachers and approximately 200 students (including at least 50 ELs); the field study will include approximately 18 science teachers and 1,000 students.

To examine instructional validity, the researchers will observe selected classes from the pilot and field studies to investigate how teachers use the AI-supported assessments and how they draw on the student data these assessments generate to inform their instruction and facilitate students' scientific argumentation. To triangulate with the classroom observation data, the researchers will interview selected science and EL teachers to examine how teachers perceive and use the AI-supported assessments to support their students' learning of scientific argumentation, and how well differentiated instructional decisions and actions are supported by these assessment tasks.
Control condition
Due to the nature of the research design, there is no control condition.
Key measures
The AI-supported assessments will target the measurement of key components required by the practice of scientific argumentation, including claims, grounds or evidence, and rebuttals, and will identify students' varied reasoning patterns when they engage in argumentation in the context of making sense of ecosystem phenomena. The tool will assess students' constructed responses, using items from the California Science Test (CAST) database, to categorize potential student reasoning patterns as they make sense of specific ecosystem phenomena.
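The abstract does not describe the tool's internal design, but the general technique of labeling argumentation components in constructed responses can be illustrated with a minimal text-classification sketch. Everything below, including the training sentences and labels, is a hypothetical placeholder, not the project's actual model.

```python
# Minimal sketch (not the project's actual model): classifying sentences of a
# constructed response into argumentation components. The training examples
# and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated sentences, one per component type.
train_sentences = [
    "The fish population dropped because the algae died off.",       # claim
    "The data table shows algae fell from 800 to 90 per liter.",     # evidence
    "Temperature alone can't explain it, since it stayed constant.", # rebuttal
]
train_labels = ["claim", "evidence", "rebuttal"]

# A bag-of-words baseline; a production system would likely use a stronger
# model trained on a much larger annotated corpus.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_sentences, train_labels)

print(clf.predict(["The graph shows oxygen levels fell after the algae died."]))
```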
Data analytic strategy
The researchers will use standard natural language processing and psychometric methods to validate the AI models (predictive machine learning models validated against human-annotated data). For the usability study, they will transcribe and code students' think-aloud data for both construct-irrelevant and construct-relevant thinking (here, the components of scientific argumentation). They will use qualitative analysis of the rich interview data to investigate whether the reasoning patterns classified by the AI-supported assessments reflect how students actually reason through the practice of scientific argumentation. To investigate whether the argumentation assessment, contextualized in ecosystems, is unidimensional, they will perform exploratory factor analysis to evaluate the dimensionality of the assessment. In addition, they will use Rasch models to examine the measurement properties of the assessments. Finally, they will use a multidimensional item response theory framework, the multidimensional random coefficients multinomial logit (MRCML) model, to investigate the construct validity of the argumentation assessment items.
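As a rough illustration of two of the named analyses, the sketch below computes quadratic weighted kappa (a common agreement statistic for validating automated scores against human annotations) and a first-pass eigenvalue check of dimensionality. All data here are simulated placeholders, not project results.

```python
# Sketch of two analyses named in the abstract, run on placeholder data:
# (1) validating model scores against human annotations,
# (2) a first-pass unidimensionality check via eigenvalues of the
#     inter-item correlation matrix.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# (1) Agreement between human raters and the AI model on ordinal scores (0-3).
human = np.array([0, 1, 2, 3, 2, 1, 0, 3, 2, 1])
model = np.array([0, 1, 2, 2, 2, 1, 1, 3, 2, 1])
qwk = cohen_kappa_score(human, model, weights="quadratic")
print(f"quadratic weighted kappa: {qwk:.3f}")

# (2) A dominant first eigenvalue is consistent with (though not proof of)
# unidimensionality. Simulated scores stand in for real student responses.
rng = np.random.default_rng(0)
item_scores = rng.integers(0, 4, size=(200, 8))  # 200 students x 8 items
eigvals = np.linalg.eigvalsh(np.corrcoef(item_scores, rowvar=False))
print("eigenvalues:", np.round(np.sort(eigvals)[::-1], 2))
```

The Rasch and MRCML analyses themselves would typically be run in specialized IRT software (ACER ConQuest, for example, implements the MRCML model) rather than in a short script like this.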
Cost analysis strategy
The researchers will conduct a cost analysis using the ingredients method to identify the factors contributing to the total and net costs of implementing the AI-based tool, including server costs, device availability, and usage patterns.
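As a toy illustration of the ingredients method, the sketch below prices each resource ("ingredient") required to implement the tool and sums to a total. The ingredient list and all dollar figures are invented placeholders, not project estimates.

```python
# Toy ingredients-method tally: list each resource needed to implement the
# tool with its cost, then sum. All figures are hypothetical placeholders.
ingredients = {
    "cloud server hosting (per classroom-year)": 120.00,
    "teacher professional development (per teacher)": 300.00,
    "student devices (prorated share of cost)": 45.00,
    "technical support time": 60.00,
}

for item, cost in ingredients.items():
    print(f"{item:>48}: ${cost:8.2f}")
print(f"{'total cost per classroom-year':>48}: ${sum(ingredients.values()):8.2f}")
```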
Products and publications
This project will result in a fully developed and validated AI-supported classroom assessment tool to measure middle school students' reasoning patterns when engaging in the practice of scientific argumentation about ecosystem phenomena. The project will also result in peer-reviewed publications and presentations, as well as additional dissemination products that reach education stakeholders such as practitioners and policymakers.