Original Article

The Answer-Until-Correct Item Format Revisited

Published Online:https://doi.org/10.1027/1614-2241/a000028

The current availability of computers has led to the use of a new series of response formats that offer an alternative to the classical dichotomous format, and to the recovery of other formats, such as the answer-until-correct (AUC) format, whose efficient administration requires this kind of technology. The goal of the present study is to determine whether the AUC format improves test reliability and validity compared to the classical dichotomous format. Three samples of 174, 431, and 1,446 Spanish students from secondary education, professional training, and high school, aged between 13 and 20 years, were used. A 100-item test and a 25-item test assessing knowledge of Universal History were administered over the Internet with the AUC format. There were 56 experimental conditions, resulting from the manipulation of eight scoring models and seven test lengths. The data were analyzed from the perspective of Classical Test Theory and also with Item Response Theory (IRT) models. Reliability and construct validity, analyzed from the classical perspective, did not seem to improve significantly when the AUC format was used; however, when reliability was assessed with the Information Function obtained from IRT models, the advantages of the AUC format over the dichotomous format became clear. For low levels of the assessed trait, scores obtained with the AUC format provide more information than scores obtained with the dichotomous format. Lastly, these results are discussed, and the possibilities and limits of the AUC format in highly computerized psychological and educational contexts are analyzed.
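The two ideas the abstract relies on, partial-credit AUC scoring and the IRT Information Function, can be illustrated with a minimal sketch. The scoring rule and the 2PL item parameters below are illustrative assumptions chosen for the example; they are not the study's actual eight scoring models or estimated parameters:

```python
import math

def auc_score(attempts, n_options):
    """An illustrative AUC partial-credit rule: full credit for a
    first-attempt success, linearly less credit for each further
    attempt, and zero credit once every option has been tried."""
    return (n_options - attempts) / (n_options - 1)

def p_2pl(theta, a, b):
    """2PL probability of a correct response at trait level theta,
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a dichotomously scored 2PL item:
    I(theta) = a^2 * P(theta) * (1 - P(theta)).
    Information peaks where theta equals the item difficulty b,
    which is why score precision varies across trait levels."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# An easy item (b = -1) is most informative for low-ability examinees.
for theta in (-2.0, -1.0, 0.0, 2.0):
    print(theta, round(item_information(theta, a=1.2, b=-1.0), 3))
```

Under this sketch, a four-option item answered correctly on the first attempt scores 1.0 and on the last attempt scores 0.0; the information function shows why comparisons between formats must be made conditionally on trait level rather than with a single reliability coefficient.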
