Original Article

Does Speededness in Collecting Reasoning Data Lead to a Speed Factor?

Published Online: https://doi.org/10.1027/1015-5759/a000498

Abstract. The consequences of speeded testing for the structure and validity of a numerical reasoning scale (NRS) were investigated. Confirmatory factor models that included an additional factor representing working speed, and models without such a factor, were applied to reasoning data collected under speeded paper-and-pencil testing and under only slightly speeded testing. To achieve a complete account of the data, the models also accounted for the item-position effect. The results showed that the factor representing working speed was essential for achieving a good fit to the data from speeded testing. The reasoning factors derived from the speeded and the slightly speeded data were highly correlated with each other. The factor representing working speed was independent of the other factors derived from the reasoning data but was related to an external score representing processing speed.
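
The combination of factors described in the abstract can be illustrated with a small fixed-links-style sketch. The Python code below is only a structural analogue under explicit assumptions: continuous indicators instead of the binary reasoning items actually analyzed, orthogonal factors, a single common residual variance, a linearly increasing loading trajectory for the item-position factor, and a speed factor whose loadings are confined to the later items. The function names build_loadings and fml, the cut-off speed_onset, and the simulated data are hypothetical illustrations and do not reproduce the authors' specification or software.

# Minimal sketch (see assumptions above) of a confirmatory factor model with
# a reasoning factor, an item-position factor, and a working-speed factor.
# Loading trajectories are fixed a priori; factor variances and a common
# residual variance are estimated by maximum likelihood from an observed
# covariance matrix S.
import numpy as np
from scipy.optimize import minimize

def build_loadings(n_items: int, speed_onset: int) -> np.ndarray:
    """Return the fixed n_items x 3 loading matrix Lambda (hypothetical trajectories)."""
    items = np.arange(1, n_items + 1)
    reasoning = np.ones(n_items)                      # constant loadings
    position = (items - 1) / (n_items - 1)            # linear increase with position
    speed = np.where(items > speed_onset,
                     (items - speed_onset) / (n_items - speed_onset),
                     0.0)                             # only late, time-pressured items
    return np.column_stack([reasoning, position, speed])

def fml(theta: np.ndarray, S: np.ndarray, L: np.ndarray) -> float:
    """ML discrepancy: log|Sigma| + tr(S Sigma^-1) - log|S| - p."""
    phi = np.diag(np.exp(theta[:3]))                  # factor variances, kept positive
    resid = np.exp(theta[3])                          # common residual variance
    sigma = L @ phi @ L.T + resid * np.eye(S.shape[0])
    _, logdet = np.linalg.slogdet(sigma)
    p = S.shape[0]
    return logdet + np.trace(S @ np.linalg.inv(sigma)) - np.linalg.slogdet(S)[1] - p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_items, n_persons = 12, 300
    L = build_loadings(n_items, speed_onset=8)
    true_phi = np.diag([0.5, 0.2, 0.3])               # simulated factor variances
    scores = rng.multivariate_normal(np.zeros(3), true_phi, size=n_persons)
    data = scores @ L.T + rng.normal(scale=0.7, size=(n_persons, n_items))
    S = np.cov(data, rowvar=False)
    res = minimize(fml, x0=np.zeros(4), args=(S, L), method="Nelder-Mead")
    print("estimated factor variances:", np.exp(res.x[:3]))
    print("estimated residual variance:", np.exp(res.x[3]))

In fixed-links models of this kind, the loading trajectories are fixed in advance and only the variances are estimated, which is what allows a working-speed or item-position factor to be separated from the reasoning factor.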
