The World Beyond Rating Scales
Why We Should Think More Carefully About the Response Format in Questionnaires
References
1996). Beck Depression Inventory-II. San Antonio, TX: The Psychological Corporation.
(1981). A clarification of some issues regarding the development and use of behaviorally anchored rating-scales (BARS). Journal of Applied Psychology, 66, 458–463. https://doi.org/10.1037/0021-9010.66.4.458
(2011). Item response modeling of forced-choice questionnaires. Educational and Psychological Measurement, 71, 460–502. https://doi.org/10.1177/0013164410375112
(2013). How IRT can solve problems of ipsative data in forced-choice questionnaires. Psychological Methods, 18, 36–52. https://doi.org/10.1037/a0030641
(2005). Reconsidering forced-choice item formats for applicant personality assessment. Human Performance, 18, 267–307. https://doi.org/10.1207/s15327043hup1803_4
(1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7, 309–319. https://doi.org/10.1037/1040-3590.7.3.309
(2017). A classification of response scale characteristics that affect data quality: A literature review. Quality & Quantity. Advance online publication. https://doi.org/10.1007/s11135-017-0533-4
(1996). The complete guide to performance appraisal. New York, NY: American Management Association.
(2010). BARS and those mysterious, missing middle anchors. Journal of Business and Psychology, 25, 663–672. https://doi.org/10.1007/s10869-010-9180-7
(1921). Experimental development of the graphic rating method. Psychological Bulletin, 18, 98–99.
(1989). Zur Vergleichbarkeit von Ratingskalen unterschiedlicher Kategorienzahl [On the comparability of rating scales with different numbers of categories]. Psychologische Beiträge, 31, 264–284.
(2004). Investigating the functioning of a middle category by means of a mixed-measurement model. Journal of Applied Psychology, 89, 687–699. https://doi.org/10.1037/0021-9010.89.4.687
(2010). What do conscientious people do? Development and validation of the Behavioral Indicators of Conscientiousness (BIC). Journal of Research in Personality, 44, 501–511. https://doi.org/10.1016/j.jrp.2010.06.005
(2005). The relation between culture and response styles – Evidence from 19 countries. Journal of Cross-Cultural Psychology, 36, 264–277. https://doi.org/10.1177/0022022104272905
(1999). Survey research. Annual Review of Psychology, 50, 537–567. https://doi.org/10.1146/annurev.psych.50.1.537
(1993). Comparisons of party identification and policy preferences – The impact of survey question format. American Journal of Political Science, 37, 941–964. https://doi.org/10.2307/2111580
(1988). Measurement models for ordered response categories. In R. Langeheine & J. Rost (Eds.), Latent trait and latent class models (pp. 11–29). New York, NY: Plenum Press.
(2012). Identifying careless responses in survey data. Psychological Methods, 17, 437–455. https://doi.org/10.1037/a0028085
(2012). The effect of response style bias on the measurement of transformational, transactional, and laissez-faire leadership. European Journal of Work and Organizational Psychology, 21, 271–298. https://doi.org/10.1080/1359432x.2010.550680
(2017). Mountain or molehill? A simulation study on the impact of response styles. Educational and Psychological Measurement, 77, 32–53. https://doi.org/10.1177/0013164416636655
(2000). Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104, 1–15. https://doi.org/10.1016/S0001-6918(99)00050-5
(2013). The impact of acquiescence on the evaluation of personality structure. Psychological Assessment, 25, 1137–1145. https://doi.org/10.1037/a0033323
(2014). Choosing the number of categories in agree-disagree scales. Sociological Methods & Research, 43, 73–97. https://doi.org/10.1177/0049124113509605
(2014). Design, evaluation, and analysis of questionnaires for survey research. Hoboken, NJ: Wiley.
(in press). Taking the test-taker’s perspective: Response process and test motivation in multidimensional forced-choice vs. rating scale instruments. Assessment.
(1975). Behaviorally anchored rating scales: A review of the literature. Personnel Psychology, 28, 549–562. https://doi.org/10.1111/j.1744-6570.1975.tb01392.x
(1991). Rating scales – Numeric values may change the meaning of scale labels. Public Opinion Quarterly, 55, 570–582. https://doi.org/10.1086/269282
(2008). Classical and modern methods of psychological scale construction. Social and Personality Psychology Compass, 2, 414–433. https://doi.org/10.1111/j.1751-9004.2007.00044.x
(1963). Retranslation of expectations: An approach to the construction of unambiguous anchors for rating scales. Journal of Applied Psychology, 47, 149–155. https://doi.org/10.1037/H0047060
(2016). Construct your own response: The cube construction task as a novel format for the assessment of spatial ability. European Journal of Psychological Assessment. Advance online publication. https://doi.org/10.1027/1015-5759/a000342
(2016). Response biases. In F. R. L. Leong, B. Bartram, F. Cheung, K. F. Geisinger, & D. Iliescu (Eds.), The ITC international handbook of testing and assessment (pp. 349–363). New York, NY: Oxford University Press.
(2016). A simulation study on methods of correcting for the effects of extreme response style. Educational and Psychological Measurement, 76, 304–324. https://doi.org/10.1177/0013164415591848
(2017). The Big Five Triplets – Development of a multidimensional forced-choice questionnaire. Manuscript in preparation.
(2012). Complex problem solving – More than reasoning? Intelligence, 40, 1–14. https://doi.org/10.1016/j.intell.2011.11.003
(2014). Stop and state your intentions! Let’s not forget the ABC of test construction. European Journal of Psychological Assessment, 30, 239–242. https://doi.org/10.1027/1015-5759/a000228
(2015). “F*** you, I won’t do what you told me!” – Response biases as threats to psychological assessment. European Journal of Psychological Assessment, 31, 153–158. https://doi.org/10.1027/1015-5759/a000292