Products and publications
Journal article, monograph, or newsletter
Jin, H., Delgado, C., Bauer, M. I., Wylie, E. C., Cisterna, D., & Llort, K. F. (2019). A hypothetical learning progression for quantifying phenomena in science. Science & Education, 28(9), 1181-1208.
Jin, H., van Rijn, P., Moore, J. C., Bauer, M. I., Pressler, Y., & Yestness, N. (2019). A validation framework for science learning progression research. International Journal of Science Education, 41(10), 1324-1346. doi:10.1080/09500693.2019.1606471
Publicly available data
The project data are stored at ETS' Research Data Repository. Researchers can request project data by sending an email to [email protected].
Project website:
Supplemental information
- Project Activities: The assessment tool has a mathematics component, a science component, and a score reporting component. To develop the mathematics component, the researchers selected high-quality items and associated rubrics from a prior IES-funded project (R305A100518). To develop the science component, the researchers used an iterative process that included (1) a historical analysis of how mathematics has been used in scientific development and revolutions; (2) an interview study with 44 high school students; and (3) a field test with 5,353 high school students. To develop the score reporting component, the researchers used the field test data to build the automated scoring models and collaborated with science teachers in designing the score reports. The fully developed tool was then piloted in a classroom study, in which 19 teachers used the tool to inform their teaching. Student and teacher data were collected to identify the affordances and limitations of the tool.
- The researchers developed an MTS learning progression (LP) that contains four achievement levels, each describing a reasoning pattern that students use to solve science problems and explain real-world phenomena. Together, the four levels trace a developmental trend in which students progress toward proficiency in quantification, or mathematization (Jin, Delgado, et al., 2019).
- The researchers developed a validation framework for science learning progressions. The framework was used to guide the validation activities in the project (Jin, van Rijn, et al., 2019).
In Phase 2, a pool of 110 MTS items was generated based on the MTS LP and the interview results. The items were reviewed and then revised, refined, or dropped based on usability interviews and feedback from the project advisors, science teachers, and assessment experts. This process resulted in 68 MTS items, which were used in a field test with 5,353 high school students. Students' responses in the field test were scored by human raters. Next, quantitative analyses were applied to the scores, and the results were used to validate the LP and its associated items. The score reporting component was designed to provide teachers with diagnostic information about student learning along with instructional suggestions. The human scores and responses were also used to develop the automated scoring models, which were embedded in the tool to enable real-time score reports.
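The abstract does not specify how the automated scoring models were built. As one hedged illustration of the general idea of seeding an automated scorer with human-scored responses, the sketch below trains a toy nearest-centroid text classifier; all data, names, and the modeling approach itself are hypothetical, not the project's actual method.

```python
# Toy sketch: learn per-score-level word profiles from human-scored
# responses, then score a new response by cosine similarity.
# Hypothetical illustration only; not the project's actual scoring model.
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words counts for a free-text response (toy tokenizer)."""
    return Counter(text.lower().split())

def centroid(vectors):
    """Average word-count vector for one score level."""
    total = Counter()
    for v in vectors:
        total.update(v)
    n = len(vectors)
    return {w: c / n for w, c in total.items()}

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(scored_responses):
    """scored_responses: list of (response_text, human_score) pairs."""
    by_score = {}
    for text, score in scored_responses:
        by_score.setdefault(score, []).append(vectorize(text))
    return {score: centroid(vs) for score, vs in by_score.items()}

def predict(model, text):
    """Assign the score whose centroid is most similar to the response."""
    v = vectorize(text)
    return max(model, key=lambda s: cosine(v, model[s]))

# Hypothetical human-scored responses (levels 1 and 3 of a 4-level LP)
training = [
    ("the ball moves faster", 1),
    ("it goes fast", 1),
    ("speed equals distance divided by time", 3),
    ("velocity is the rate of change of position over time", 3),
]
model = train(training)
print(predict(model, "speed is distance over time"))  # prints 3
```

Operational scoring systems typically use far richer features and validated statistical or machine-learning models; this sketch only shows the train-on-human-scores, predict-on-new-responses pattern that the passage describes.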
In Phase 3, 19 teachers piloted the tool with their students. Student assessment data (pre- and post-assessments), teaching data (observations of four teachers' lessons), and teacher feedback (interviews and surveys) were collected and analyzed to identify the affordances and limitations of the tool.
Questions about this project?
To answer additional questions about this project or provide feedback, please contact the program officer.