Editorial

Current Challenges, New Developments, and Future Directions in Scale Construction

Published Online: https://doi.org/10.1027/1015-5759/a000375
