ICAT: An Adaptive Testing Procedure for the Identification of Idiosyncratic Knowledge Patterns
Abstract
Traditional adaptive tests provide an efficient method for estimating student achievement levels by adjusting the characteristics of test questions to match each student's performance. These tests, however, are not designed to identify idiosyncratic knowledge patterns. As students move through their education, they learn content in different ways depending on their learning styles and cognitive development. As a result, a student's achievement level may vary from one content area to another within a content domain. This study investigates whether such idiosyncratic knowledge patterns exist and discusses how they differ from multidimensionality. Finally, it proposes an adaptive testing procedure that can identify a student's areas of strength and weakness more efficiently than current adaptive testing approaches. The findings indicate that a fairly large number of students have test results influenced by idiosyncratic knowledge patterns, that these patterns persist across time for many students, and that the differences in student performance between content areas within a subject domain are large enough to be useful in instruction. Given the existence of idiosyncratic knowledge patterns, the proposed testing procedure may enable us to provide more useful information to teachers. It should also allow us to differentiate between idiosyncratic knowledge patterns and important multidimensionality in the testing data.