Original Article

A CTC(M−1) Model for Different Types of Raters

Published Online: https://doi.org/10.1027/1614-2241.5.3.88

Many psychologists collect multitrait-multimethod (MTMM) data to assess the convergent and discriminant validity of psychological measures. Choosing the most appropriate model for such data requires considering the types of methods applied. It is shown how a combination of interchangeable and structurally different raters can be analyzed with an extension of the correlated trait-correlated method minus one [CTC(M−1)] model. This extension allows individual rater biases (unique method effects) to be disentangled from shared rater biases (common method effects). The basic ideas of the model are presented and illustrated with an empirical example.
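To make the distinction between shared and individual rater bias concrete, the following is a minimal sketch of such a measurement decomposition; the notation is illustrative and not taken from the article. For trait j, the self-report is taken as the reference method defining the trait factor, and a non-reference method k is realized by a set of interchangeable raters r (e.g., peers):

% Minimal sketch (illustrative notation), not the article's own specification
\begin{align}
  % Reference method (self-report): defines the trait factor T_j
  Y_{j,\mathrm{self}} &= \alpha_{j} + \lambda_{j}\, T_j + E_{j,\mathrm{self}} \\
  % Non-reference method k, interchangeable raters r:
  % CM_{jk}  = common method factor (shared rater bias)
  % UM_{jkr} = unique method factor (individual bias of rater r)
  Y_{jkr} &= \alpha_{jk} + \lambda_{Tjk}\, T_j + \lambda_{Cjk}\, \mathit{CM}_{jk}
             + \lambda_{Ujk}\, \mathit{UM}_{jkr} + E_{jkr}
\end{align}

Under these illustrative assumptions, CM captures the bias that all raters of a target share, UM captures the deviation of an individual rater from that common view, and the variance of a non-reference indicator decomposes into consistency, common method specificity, unique method specificity, and measurement error.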
