Abstract
Bollen and colleagues have advocated the use of formative scales even though formative scales lack an adequate underlying theory to guide development and validation, such as the theory that underlies reflective scales. Three conceptual impediments hinder the development of such a theory: the redefinition of measurement restricted to the context of model fitting, the inscrutable notion of conceptual unity, and a systematic conflation of item scores with attributes. Setting aside these impediments opens the door to progress in developing the theory needed to support formative scale use. A broader perspective facilitates consideration of standard scale development concerns as applied to formative scales, including scale construction, item analysis, reliability, and item bias. While formative scales require a different pattern of emphasis, all five traditional sources of validity evidence apply to them. Responsible use of formative scales requires greater attention to developing the requisite underlying theory.
References
(2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
(2014). Interpretational confounding or confounded interpretations of causal indicators? Measurement: Interdisciplinary Research and Perspectives, 12, 125–140. https://doi.org/10.1080/15366367.2014.968503
(2018). Measurement theory and applications for the social sciences. New York, NY: Guilford Press.
(1963). Making causal inferences for unmeasured variables from correlations among indicators. American Journal of Sociology, 69, 53–62. https://doi.org/10.1086/223510
(1984). Multiple indicators: Internal consistency or no necessary relationship? Quality and Quantity, 18, 377–385.
(1989). Structural equations with latent variables. New York, NY: Wiley.
(2011). Three Cs in measurement models: Causal indicators, composite indicators, and covariates. Psychological Methods, 16, 265–284. https://doi.org/10.1037/a0024448
(2017a). In defense of causal-formative indicators: A minority report. Psychological Methods, 22, 581–596. https://doi.org/10.1037/met0000056
(2017b). Notes on measurement theory for causal-formative indicators: A reply to Hardin. Psychological Methods, 22, 605–608. https://doi.org/10.1037/met0000149
(1991). Conventional wisdom on measurement: A structural equation perspective. Psychological Bulletin, 110, 305–314. https://doi.org/10.1037/0033-2909.110.2.305
(1928). A quantitative scale for rating the home and social environment of middle class families in an urban community: A first approximation to the measurement of socio-economic status. Journal of Educational Psychology, 19, 99–111. https://doi.org/10.1037/h0074500
(2011). The fallacy of formative measurement. Organizational Research Methods, 14, 370–388. https://doi.org/10.1177/1094428110378369
(2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5, 155–174. https://doi.org/10.1037/1082-989X.5.2.155
(1997). Factor analysis, causal indicators, and quality of life. Quality of Life Research, 6, 139–150. https://doi.org/10.1023/A:1026490117121
(2017). A call for theory to support the use of causal-formative indicators: A commentary on Bollen and Diamantopoulos (2017). Psychological Methods, 22, 597–604. https://doi.org/10.1037/met0000115
(1987). Structural equation modeling with LISREL: Essentials and advances. Baltimore, MD: Johns Hopkins University Press.
(2007). The weird world, and equally weird measurement models: Reactive indicators and the validity revolution. Structural Equation Modeling: A Multidisciplinary Journal, 14, 280–310. https://doi.org/10.1080/10705510709336747
(1995). Content validity in psychological assessment: A functional approach to concepts and methods. Psychological Assessment, 7, 238–247. https://doi.org/10.1037/1040-3590.7.3.238
(2006). Validation. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 17–64). Westport, CT: Praeger.
(2008). Statistical theories of mental test scores (Originally published 1967). Charlotte, NC: Information Age.
(2014). Unfinished business in clarifying causal measurement: Commentary on Bainter and Bollen. Measurement: Interdisciplinary Research and Perspectives, 12, 146–150. https://doi.org/10.1080/15366367.2014.980106
(2016). Causal measurement models: Can criticism stimulate clarification? Measurement: Interdisciplinary Research and Perspectives, 14, 110–113. https://doi.org/10.1080/15366367.2016.1224965
(2013). Frontiers of test validity theory: Measurement, causation, and meaning. New York, NY: Routledge.
(2017). Rethinking traditional methods of survey validation. Measurement: Interdisciplinary Research and Perspectives, 15, 51–69. https://doi.org/10.1080/15366367.2017.1348108
(1999). Test theory: A unified treatment. Mahwah, NJ: Erlbaum.
(1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). Washington, DC: American Council on Education and National Council on Measurement in Education.
(2015). Counterfactuals and causal inference: Methods and principles for social research (2nd ed.). New York, NY: Cambridge University Press.
(2014). Validity in educational and psychological assessment. Los Angeles, CA: Sage Publications.
(1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
(1988). Strengths and limitations of the Apgar score: A critical appraisal. Journal of Clinical Epidemiology, 41, 843–850. https://doi.org/10.1016/0895-4356(88)90100-X
(1998). The construct of content validity. Social Indicators Research, 45, 83–117.
(2017). Validating psychological constructs: Historical, philosophical, and practical dimensions. London, UK: Palgrave.
(1988). Test validity. Hillsdale, NJ: Erlbaum.
(2016). Bringing consequences and side effects of testing and assessment to the foreground. Assessment in Education: Principles, Policy & Practice, 23, 299–303. https://doi.org/10.1080/0969594X.2016.1141169