Project Activities
Six studies will systematically identify effective instructional methods for teaching students to evaluate scientific evidence. Each study compares one or more instructional factors to a control condition that is similar in all other respects. In Years 1 and 2, the research team will conduct three studies focusing on causality bias as a key threat to validity and will seek to identify effective training examples for critically assessing claims about correlational data. In Years 3 and 4, the research team will conduct three studies that combine the most effective factors from the first three studies. In addition, two of the Year 3 and 4 studies will incrementally incorporate the other two target threats to validity (selection bias and overgeneralization error).
Structured Abstract
Setting
Participating schools are located in urban and suburban areas of Michigan.
Sample
Approximately 480 seventh- through ninth-grade students will participate in the first four studies (120 in each study), 90 in the fifth study, and 60 in the sixth study.
Intervention
Due to the exploratory nature of this project, there is no intervention. Instead, the research team will identify instructional methods for teaching students to evaluate scientific evidence. This work will inform the development of interventions designed to improve students' ability to evaluate scientific evidence.
Research design and methods
Six studies will systematically identify effective instructional methods for teaching students to evaluate scientific evidence. Each study compares one or more instructional factors to a control condition that is similar in all other respects. In Years 1 and 2, the research team will conduct three studies focusing on causality bias as a key threat to validity and will seek to identify effective training examples for critically assessing claims about correlational data. In Years 3 and 4, the research team will conduct three studies that combine the most effective factors from the first three studies. In addition, two of the Year 3 and 4 studies will incrementally incorporate the other two target threats to validity (selection bias and overgeneralization error). In all studies, participants will complete a pretest, then learn abstract rules for reasoning about threats to validity and evaluate tutorial ‘media reports’ containing scientific findings, and finally complete a posttest. In the final two studies, students will also complete a delayed posttest three months after receiving instruction.
Control condition
Each study compares one or more instructional factors with a control condition that is similar in all other respects; the specific control condition varies with the research question being asked.
Key measures
The key measure is the change in students' performance from pretest to posttest on a researcher-developed task in which students must evaluate scientific evidence presented in fictional media articles.
Data analytic strategy
Researchers will use analysis of variance and general linear model methods to compare pretest-to-posttest changes in performance on the evidence evaluation task across the instructional and control conditions.
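The abstract does not include analysis code; the sketch below is a minimal illustration, in Python with pandas and statsmodels, of how such a pre-to-post comparison could be framed as a general linear model (posttest scores adjusted for pretest, with condition as the factor of interest). All data are simulated, and the variable names, score scale, and effect size are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Simulated scores standing in for the researcher-developed evaluation
    # task: 120 students split evenly between one hypothetical instructional
    # condition and a control (per-study sample size from the abstract).
    rng = np.random.default_rng(0)
    n = 120
    df = pd.DataFrame({
        "condition": np.repeat(["instruction", "control"], n // 2),
        "pretest": rng.normal(50, 10, n),
    })

    # Hypothetical effect: the instructional condition gains ~5 more points.
    gain = np.where(df["condition"] == "instruction", 5.0, 0.0)
    df["posttest"] = df["pretest"] + gain + rng.normal(0, 8, n)

    # General linear model: posttest adjusted for pretest (ANCOVA-style),
    # testing whether condition explains variance in posttest performance.
    model = smf.ols("posttest ~ pretest + C(condition)", data=df).fit()
    print(anova_lm(model, typ=2))

An equivalent analysis of variance on gain scores (posttest minus pretest) by condition would be a reasonable alternative under the same assumptions.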
Products and publications
Researchers will produce preliminary evidence of potentially promising instructional methods for teaching students to evaluate scientific evidence in a variety of contexts, as well as peer-reviewed publications.
Publications:
Cao, Y., Subramonyam, H., & Adar, E. (2022, March). VideoSticker: A tool for active viewing and visual note-taking from videos. In Proceedings of the 27th International Conference on Intelligent User Interfaces (pp. 672-690).
Fansher, M., Adkins, T. J., & Shah, P. (2022). Graphs do not lead people to infer causation from correlation. Journal of Experimental Psychology: Applied, 28(2), 314.
Franconeri, S. L., Padilla, L. M., Shah, P., Zacks, J. M., & Hullman, J. (2021). The science of visual data communication: What works. Psychological Science in the Public Interest, 22(3), 110-161.
Michal, A. L., & Shah, P. (2024). A practical significance bias in laypeople’s evaluation of scientific findings. Psychological Science, 35(4), 315-327.
Michal, A. L., Zhong, Y., & Shah, P. (2021). When and why do people act on flawed science? Effects of anecdotes and prior beliefs on evidence-based decision-making. Cognitive Research: Principles and Implications, 6(1), 28.
Nancekivell, S. E., Sun, X., Gelman, S. A., & Shah, P. (2021). A slippery myth: How learning style beliefs shape reasoning about multimodal instruction and related scientific evidence. Cognitive Science, 45(10), e13047.
Subramonyam, H., Seifert, C., Shah, P., & Adar, E. (2020, April). texSketch: Active diagramming through pen-and-ink annotations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-13).
Questions about this project?
For additional questions about this project, or to provide feedback, please contact the program officer.