Original Article

Three Conceptual Impediments to Developing Scale Theory for Formative Scales

Published Online: https://doi.org/10.1027/1614-2241/a000154

Abstract. Bollen and colleagues have advocated the use of formative scales even though formative scales lack an adequate underlying theory, such as that which underlies reflective scales, to guide development and validation. Three conceptual impediments block the development of such theory: the redefinition of measurement restricted to the context of model fitting, the inscrutable notion of conceptual unity, and a systematic conflation of item scores with attributes. Setting aside these impediments opens the door to progress in developing the needed theory to support formative scale use. A broader perspective facilitates consideration of standard scale development concerns as applied to formative scales, including scale development, item analysis, reliability, and item bias. Although formative scales require a different pattern of emphasis, all five of the traditional sources of validity evidence apply to them. Responsible use of formative scales requires greater attention to developing the requisite underlying theory.

References

  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

  • Bainter, S. A., & Bollen, K. A. (2014). Interpretational confounding or confounded interpretations of causal indicators? Measurement: Interdisciplinary Research and Perspectives, 12, 125–140. https://doi.org/10.1080/15366367.2014.968503

  • Bandalos, D. (2018). Measurement theory and applications for the social sciences. New York, NY: Guilford Press.

  • Blalock, H. M., Jr. (1963). Making causal inferences for unmeasured variables from correlations among indicators. American Journal of Sociology, 69, 53–62. https://doi.org/10.1086/223510

  • Bollen, K. A. (1984). Multiple indicators: Internal consistency or no necessary relationship? Quality and Quantity, 18, 377–385.

  • Bollen, K. A. (1989). Structural equations with latent variables. New York, NY: Wiley.

  • Bollen, K. A., & Bauldry, S. (2011). Three Cs in measurement models: Causal indicators, composite indicators, and covariates. Psychological Methods, 16, 265–284. https://doi.org/10.1037/a0024448

  • Bollen, K. A., & Diamantopoulos, A. (2017a). In defense of causal-formative indicators: A minority report. Psychological Methods, 22, 581–596. https://doi.org/10.1037/met0000056

  • Bollen, K. A., & Diamantopoulos, A. (2017b). Notes on measurement theory for causal-formative indicators: A reply to Hardin. Psychological Methods, 22, 605–608. https://doi.org/10.1037/met0000149

  • Bollen, K. A., & Lennox, R. (1991). Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin, 110, 305–314. https://doi.org/10.1037/0033-2909.110.2.305

  • Chapin, S. F. (1928). A quantitative scale for rating the home and social environment of middle class families in an urban community: A first approximation to the measurement of socio-economic status. Journal of Educational Psychology, 19, 99–111. https://doi.org/10.1037/h0074500

  • Edwards, J. R. (2011). The fallacy of formative measurement. Organizational Research Methods, 14, 370–388. https://doi.org/10.1177/1094428110378369

  • Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5, 155–174. https://doi.org/10.1037/1082-989X.5.2.155

  • Fayers, P. M., & Hand, D. J. (1997). Factor analysis, causal indicators, and quality of life. Quality of Life Research, 6, 139–150. https://doi.org/10.1023/A:1026490117121

  • Hardin, A. (2017). A call for theory to support the use of causal-formative indicators: A commentary on Bollen and Diamantopoulos (2017). Psychological Methods, 22, 597–604. https://doi.org/10.1037/met0000115

  • Hayduk, L. A. (1987). Structural equation modeling with LISREL: Essentials and advances. Baltimore, MD: Johns Hopkins University Press.

  • Hayduk, L. A., Pazderka-Robinson, H., Cummings, G. G., Boadu, K., Verbeek, E. L., & Perks, T. A. (2007). The weird world, and equally weird measurement models: Reactive indicators and the validity revolution. Structural Equation Modeling: A Multidisciplinary Journal, 14, 280–310. https://doi.org/10.1080/10705510709336747

  • Haynes, S. N., Richard, D. C. S., & Kubany, E. S. (1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7, 238–247. https://doi.org/10.1037/1040-3590.7.3.238

  • Kane, M. T. (2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64). Westport, CT: Praeger.

  • Lord, F. M., & Novick, M. R. (2008). Statistical theories of mental test scores (Originally published 1967). Charlotte, NC: Information Age.

  • Markus, K. A. (2014). Unfinished business in clarifying causal measurement: Commentary on Bainter and Bollen. Measurement: Interdisciplinary Research and Perspectives, 12, 146–150. https://doi.org/10.1080/15366367.2014.980106

  • Markus, K. A. (2016). Causal measurement models: Can criticism stimulate clarification? Measurement: Interdisciplinary Research and Perspectives, 14, 110–113. https://doi.org/10.1080/15366367.2016.1224965

  • Markus, K. A., & Borsboom, D. (2013). Frontiers of test validity theory: Measurement, causation, and meaning. New York, NY: Routledge.

  • Maul, A. (2017). Rethinking traditional methods of survey validation. Measurement: Interdisciplinary Research and Perspectives, 15, 51–69. https://doi.org/10.1080/15366367.2017.1348108

  • McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: Erlbaum.

  • Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). Washington, DC: American Council on Education and National Council on Measurement in Education.

  • Morgan, S. L., & Winship, C. (2015). Counterfactuals and causal inference: Methods and principles for social research (2nd ed.). New York, NY: Cambridge University Press.

  • Newton, P. E., & Shaw, S. D. (2014). Validity in educational & psychological assessment. Los Angeles, CA: Sage Publications.

  • Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.

  • Schmidt, B., Kirpalani, H., Rosenbaum, P., & Cadman, D. (1988). Strengths and limitations of the Apgar score: A critical appraisal. Journal of Clinical Epidemiology, 41, 843–850. https://doi.org/10.1016/0895-4356(88)90100-X

  • Sireci, S. G. (1998). The construct of content validity. Social Indicators Research, 45, 83–117.

  • Slaney, K. (2017). Validating psychological constructs: Historical, philosophical, and practical dimensions. London, UK: Palgrave.

  • Wainer, H., & Braun, H. I. (1988). Test validity. Hillsdale, NJ: Erlbaum.

  • Zumbo, B., & Hubley, A. (2016). Bringing consequences and side effects of testing and assessment to the foreground. Assessment in Education: Principles, Policy and Practice, 23, 299–303. https://doi.org/10.1080/0969594X.2016.1141169