Original Article

The Impact of Item Parceling Ratios and Strategies on the Internal Structure of Assessment Center Ratings

A Study Using Confirmatory Factor Analysis

Published Online: https://doi.org/10.1027/1866-5888/a000266

Abstract. The aim of the present study was to investigate whether using item parcels instead of single indicators would increase support for the factorial validity of assessment center (AC) ratings in factor analytic applications. Factor analyses of AC ratings are often plagued by poor model fit as well as by admissibility and termination problems. In the present study, three purposive item parceling strategies, in conjunction with three parceling approaches (specifying different ratios of indicators to dimensions), were investigated in relation to five confirmatory factor analysis specifications of AC ratings across two AC samples (Sample 1: N = 244; Sample 2: N = 320). The findings were equivocal across the two samples. Nonetheless, a three-parcel approach using a factorial allocation strategy performed better than a one-parcel approach (akin to the postexercise dimension rating).
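The manipulation described in the abstract concerns how item-level ratings are combined into parcels before a CFA is fitted. As a minimal sketch only (not the authors' code or data), the following Python/NumPy fragment illustrates how nine item-level indicators of a single AC dimension could be averaged into three parcels under one common variant of a factorial (loading-based) allocation strategy, versus a single composite akin to a postexercise dimension rating. The sample size, loadings, and allocation rule shown here are invented for illustration.

```python
import numpy as np

# Hypothetical data: 244 assessees rated on 9 item-level indicators that all
# target one AC dimension (values are simulated purely for illustration).
rng = np.random.default_rng(0)
items = rng.normal(loc=3.0, scale=0.8, size=(244, 9))

# Assumed item-factor loadings from a preliminary single-factor analysis
# (illustrative numbers only, not taken from the study).
loadings = np.array([.72, .70, .66, .61, .58, .55, .49, .44, .40])

# Factorial-allocation strategy (one common variant): rank items by loading
# and deal them out across parcels so each parcel mixes higher- and
# lower-loading items.
order = np.argsort(loadings)[::-1]          # items from highest to lowest loading
n_parcels = 3
assignment = {p: order[p::n_parcels] for p in range(n_parcels)}

# Three-parcel solution: each parcel is the mean of its assigned items.
parcels_3 = np.column_stack(
    [items[:, assignment[p]].mean(axis=1) for p in range(n_parcels)]
)

# One-parcel solution (akin to a postexercise dimension rating): all items
# collapsed into a single composite indicator.
parcel_1 = items.mean(axis=1, keepdims=True)

print(parcels_3.shape)  # (244, 3) -> three indicators for this dimension
print(parcel_1.shape)   # (244, 1) -> one indicator for this dimension
```

The resulting parcel matrices would then serve as the observed indicators in whichever CFA specifications of the AC ratings are being compared; the indicator-to-dimension ratio is governed by how many parcels are formed per dimension.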
