Original Article

Modality Specificity of Comprehension Abilities in the Sciences

Published online: https://doi.org/10.1027/1015-5759/a000114

The measurement of science achievement is often unnecessarily restricted to reading comprehension items that are sometimes enriched with graphs, tables, and figures. In a newly developed viewing comprehension task, participants watched short videos covering different science topics and subsequently answered several multiple-choice comprehension questions. The research questions were whether viewing comprehension (1) can be measured adequately, (2) is perfectly collinear with reading comprehension, and (3) can be regarded as a linear function of reasoning and acquired knowledge. High-school students (N = 216) worked on a paper-based reading comprehension task, a viewing comprehension task delivered on handheld devices, a science knowledge test, and three fluid intelligence measures. The data show that, first, the new viewing comprehension test showed sound psychometric properties; second, performance in the two comprehension tasks was essentially perfectly collinear; and third, fluid intelligence and domain-specific knowledge fully accounted for the ability to comprehend texts and videos. We conclude that neither test medium (paper-and-pencil versus handheld device) nor test modality (reading versus viewing) is decisive for comprehension ability in the natural sciences. Fluid intelligence and, even more strongly, domain-specific knowledge turned out to be exhaustive predictors of comprehension performance.
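
Read as a latent regression, research question (3) can be sketched as follows. This is a minimal illustration only; the symbols η, γ₁, γ₂, and ζ are labels chosen here for exposition and are not taken from the article:

$$\eta_{\text{comprehension}} = \gamma_{1}\,\eta_{\text{Gf}} + \gamma_{2}\,\eta_{\text{knowledge}} + \zeta$$

Under this reading, the finding that fluid intelligence and domain-specific knowledge fully account for comprehension corresponds to a residual variance $\operatorname{Var}(\zeta)$ close to zero, and the collinearity result in question (2) corresponds to a latent correlation between reading and viewing comprehension that cannot be distinguished from 1.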
