Published Online: https://doi.org/10.1027/1015-5759/a000309

References

  • Beauducel, A. (2013). Taking the error term of the factor model into account: The factor score predictor interval. Applied Psychological Measurement, 37, 289–303. doi: 10.1177/0146621613475358

  • Bejar, I. I. (1983). Achievement testing: Recent advances. Beverly Hills, CA: Sage.

  • Brunner, M. & Süß, H. M. (2005). Analyzing the reliability of multidimensional measures: An example from intelligence research. Educational and Psychological Measurement, 65, 227–240.

  • Campbell, D. T. & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105.

  • Cronbach, L. J. (1947). Test “reliability”: Its meaning and determination. Psychometrika, 12, 1–16.

  • Cronbach, L. J. & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.

  • Fabrigar, L. R., Wegener, D. T., MacCallum, R. C. & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299.

  • Fischer, G. (1997). Unidimensional linear logistic Rasch models. In W. van der Linden & R. Hambleton (Eds.), Handbook of Modern Item Response Theory (pp. 225–243). New York, NY: Springer.

  • Hartig, J. & Höhler, J. (2009). Multidimensional IRT models for the assessment of competencies. Studies in Educational Evaluation, 35, 57–63.

  • Heene, M., Hilbert, S., Draxler, C., Ziegler, M. & Bühner, M. (2011). Masking misfit in confirmatory factor analysis by increasing unique variances: A cautionary note on the usefulness of cutoff values of fit indices. Psychological Methods, 16, 319–336.

  • Henson, R. & Roberts, J. (2006). Use of exploratory factor analysis in published research: Common errors and some comment on improved practice. Educational and Psychological Measurement, 66, 393.

  • Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185.

  • Hu, L. & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychological Methods, 3, 424–453.

  • Lance, C. E., Noble, C. L. & Scullen, S. E. (2002). A critique of the correlated trait-correlated method and correlated uniqueness models for multitrait-multimethod data. Psychological Methods, 7, 228–244.

  • Lazarsfeld, P. F. (1959). Latent structure analysis. In S. Koch (Ed.), Psychology: A study of a science (Vol. 3, pp. 476–543). New York, NY: McGraw-Hill.

  • Loevinger, J. (1957). Objective tests as instruments of psychological theory: Monograph Supplement 9. Psychological Reports, 3, 635–694.

  • Marsh, H. W., Hau, K. T. & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Structural Equation Modeling, 11, 320–341.

  • Marsh, H. W., Muthén, B., Asparouhov, T., Lüdtke, O., Robitzsch, A., Morin, A. J. S. & Trautwein, U. (2009). Exploratory structural equation modeling, integrating CFA and EFA: Application to students’ evaluations of university teaching. Structural Equation Modeling: A Multidisciplinary Journal, 16, 439–476.

  • Marsh, H. W., Wen, Z. & Hau, K. T. (2004). Structural equation models of latent interactions: Evaluation of alternative estimation strategies and indicator construction. Psychological Methods, 9, 275–300.

  • McDonald, R. P. (1967). Factor interaction in nonlinear factor analysis. ETS Research Bulletin Series, 1967, i–18.

  • Radloff, L. S. (1977). The CES-D Scale: A self-report depression scale for research in the general population. Applied Psychological Measurement, 1, 385–401.

  • Reise, S. P., Cook, K. F. & Moore, T. M. (2014). Evaluating the impact of multidimensionality on unidimensional item response theory model parameters. In S. P. Reise & D. A. Revicki (Eds.), Handbook of Item Response Theory Modeling: Applications to Typical Performance Assessment (pp. 13–YY). New York, NY: Routledge.

  • Revelle, W. (2014). psych: Procedures for Personality and Psychological Research (Version 1.4.5). Evanston, IL: Northwestern University.

  • Rost, J. (1991). A logistic mixture distribution model for polychotomous item responses. The British Journal of Mathematical and Statistical Psychology, 44, 75–92.

  • Rost, J. (2001). The growing family of Rasch models. In A. Boomsma, M. J. van Duijn & T. B. Snijders (Eds.), Essays on Item Response Theory (Vol. 157, pp. 25–42). New York, NY: Springer.

  • Rost, J., Carstensen, C. H. & von Davier, M. (1997). Applying the mixed Rasch model to personality questionnaires. In J. Rost & R. Langeheine (Eds.), Applications of latent trait and latent class models in the social sciences (pp. XX–YY). New York, NY: Waxmann.

  • Sass, D. A. & Schmitt, T. A. (2010). A comparative investigation of rotation criteria within exploratory factor analysis. Multivariate Behavioral Research, 45, 73–103.

  • Schmitt, T. A. & Sass, D. A. (2011). Rotation criteria and hypothesis testing for exploratory factor analysis: Implications for factor pattern loadings and interfactor correlations. Educational and Psychological Measurement, 71, 95–113.

  • Schweizer, K. (2010). Some guidelines concerning the modeling of traits and abilities in test construction. European Journal of Psychological Assessment, 26, 1–2.

  • Schweizer, K. (2012). On issues of validity and especially on the misery of convergent validity. European Journal of Psychological Assessment, 28, 249–254.

  • Stout, W. (1987). A nonparametric approach for assessing latent trait unidimensionality. Psychometrika, 52, 589–617.

  • Stout, W. (2002). Psychometrics: From practice to theory and back. Psychometrika, 67, 485–518.

  • Velicer, W. F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika, 41, 321–327.

  • Wetzel, E., Böhnke, J. R., Carstensen, C. H., Ziegler, M. & Ostendorf, F. (2013). Do individual response styles matter? Journal of Individual Differences, 34, 69–81.

  • Ziegler, M. (2014a). Comments on item selection procedures. European Journal of Psychological Assessment, 30, 1–2.

  • Ziegler, M. (2014b). Stop and state your intentions!: Let’s not forget the ABC of test construction. European Journal of Psychological Assessment, 30, 239–242.

  • Ziegler, M. (2015). “F*** you, I won’t do what you told me!” – Response biases as threats to psychological assessment. European Journal of Psychological Assessment, 31, 153–158.

  • Ziegler, M., Booth, T. & Bensch, D. (2013). Getting entangled in the nomological net. European Journal of Psychological Assessment, 29, 157–161.

  • Ziegler, M. & Brunner, M. (in press). Test standards and psychometric modeling. In A. A. Lipnevich, F. Preckel & R. Roberts (Eds.), Psychosocial Skills and School Systems in the Twenty-First Century: Theory, Research, and Applications. Göttingen, Germany: Springer.

  • Ziegler, M., Kemper, C. J. & Kruyen, P. (2014). Short scales – Five misunderstandings and ways to overcome them. Journal of Individual Differences, 35, 185–189.

  • Ziegler, M., Maaß, U., Griffith, R. & Gammon, A. (2015). What is the nature of faking? Modeling distinct response patterns and quantitative differences in faking at the same time. Organizational Research Methods, 18, 679–703.

  • Ziegler, M., MacCann, C. & Roberts, R. D. (2011). Faking: Knowns, unknowns, and points of contention. In M. Ziegler, C. MacCann & R. D. Roberts (Eds.), New perspectives on faking in personality assessment (pp. 3–16). New York, NY: Oxford University Press.

  • Ziegler, M., Poropat, A. & Mell, J. (2014). Does the length of a questionnaire matter? Journal of Individual Differences, 35, 250–261.

  • Zinbarg, R. E., Revelle, W., Yovel, I. & Li, W. (2005). Cronbach’s α, Revelle’s β, and McDonald’s ωH: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70, 123–133.