Modality Specificity of Comprehension Abilities in the Sciences
Abstract
The measurement of science achievement is often unnecessarily restricted to reading comprehension items that are sometimes enriched with graphs, tables, and figures. In a newly developed viewing comprehension task, participants watched short videos covering different science topics and subsequently answered several multiple-choice comprehension questions. The research questions were whether viewing comprehension (1) can be measured adequately, (2) is perfectly collinear with reading comprehension, and (3) can be regarded as a linear function of reasoning and acquired knowledge. High-school students (N = 216) worked on a paper-based reading comprehension task, a viewing comprehension task delivered on handheld devices, a science knowledge test, and three fluid intelligence measures. The data show that, first, the new viewing comprehension test functioned well psychometrically; second, performance in the two comprehension tasks was essentially perfectly collinear; and third, fluid intelligence and domain-specific knowledge fully accounted for the ability to comprehend texts and videos. We conclude that neither test medium (paper-pencil versus handheld device) nor test modality (reading versus viewing) is decisive for comprehension ability in the natural sciences. Fluid intelligence and, even more strongly, domain-specific knowledge turned out to be exhaustive predictors of comprehension performance.