Published Online: https://doi.org/10.1026/0012-1924/a000113



Assessment of ICT literacy: Multiple-choice vs. simulation-based tasks

The construct of ICT literacy calls for performance-based assessment, that is, test tasks that present interactive (simulated) computer environments and require responses via mouse and/or keyboard. Nevertheless, procedures such as self-ratings or paper-based performance tests are still commonly used. The present study compares the psychometric properties of simulation-based (SIM) tasks with those of content-parallel multiple-choice (MC) tasks that use screenshots of software applications as stimuli. The MC tasks, developed for the National Educational Panel Study (NEPS), measure the ability to select and retrieve digital information and to perform basic operations (access). In a random groups design, 405 grade 9 students completed the computer-based access items either as MC tasks or as SIM tasks, as well as the simulation-based Basic Computer Skills (BCS) test. Results show that most MC tasks and their SIM counterparts differ in difficulty and loading. Consistent convergent validity is indicated by comparably high correlations of both test forms with the BCS test.
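To make the comparison behind these results concrete, the following is a minimal sketch of how item difficulties and loadings could be compared between the two random groups, using the R package ltm. It is an illustration under stated assumptions, not the study's actual analysis: the file names and scored 0/1 response matrices are hypothetical, the separate-groups 2PL fits are a simplification, and the 2PL discrimination parameter stands in for the loading.

    library(ltm)  # IRT package for dichotomous item response models

    # Hypothetical scored (0/1) response matrices, one row per student,
    # one column per access item; columns matched across test forms.
    mc_resp  <- read.csv("access_mc.csv")   # MC group
    sim_resp <- read.csv("access_sim.csv")  # SIM group

    # Fit a two-parameter logistic (2PL) model separately in each group.
    fit_mc  <- ltm(mc_resp  ~ z1)
    fit_sim <- ltm(sim_resp ~ z1)

    # coef() returns one row per item: difficulty (Dffclt) and
    # discrimination (Dscrmn); discrimination plays the role of the loading.
    par_mc  <- coef(fit_mc)
    par_sim <- coef(fit_sim)

    # Descriptive item-wise differences between the SIM and MC forms.
    round(par_sim - par_mc, 2)

A formal comparison would instead constrain each item pair's parameters to equality within a multigroup model and test the resulting loss of fit; the item-wise differences printed here are purely descriptive.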
