The Impact of Item Parceling Ratios and Strategies on the Internal Structure of Assessment Center Ratings
A Study Using Confirmatory Factor Analysis
Abstract
The aim of the present study was to investigate whether using item parcels instead of single indicators would increase support for the factorial validity of assessment center (AC) ratings in factor analytic applications. Factor analyses of AC ratings are often plagued by poor model fit as well as admissibility and termination problems. In the present study, three purposive item parceling strategies, in conjunction with three parceling approaches (specifying different ratios of indicators to dimensions), were investigated in relation to five confirmatory factor analysis specifications of AC ratings across two AC samples (Sample 1: N = 244; Sample 2: N = 320). The findings were equivocal across the two samples. Nonetheless, a three-parcel approach using a factorial allocation strategy performed better than a one-parcel approach (akin to the postexercise dimension rating).
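To illustrate the parceling idea, the following is a minimal sketch (not the authors' actual procedure) of how item-level ratings for one AC dimension might be combined into three parcels under a factorial-allocation-style strategy, in which items are ranked by their factor loadings and dealt out so that each parcel mixes high- and low-loading items. The function name, the serpentine dealing order, and the use of simple item means are all illustrative assumptions.

```python
import numpy as np

def make_parcels(items: np.ndarray, loadings: np.ndarray, n_parcels: int = 3) -> np.ndarray:
    """Form item parcels for one dimension by averaging allocated items.

    Illustrative factorial-allocation scheme: items are ranked by their
    loadings from a single-factor solution and assigned to parcels in a
    serpentine order (0, 1, 2, 2, 1, 0, ...) so that each parcel contains
    a balanced mix of strongly and weakly loading items.

    items    : (n_persons, n_items) matrix of item-level ratings
    loadings : (n_items,) factor loadings for the items
    returns  : (n_persons, n_parcels) matrix of parcel scores
    """
    order = np.argsort(loadings)[::-1]            # highest loading first
    pattern = list(range(n_parcels)) + list(range(n_parcels - 1, -1, -1))
    assignment = np.empty(len(order), dtype=int)
    for rank, item in enumerate(order):
        assignment[item] = pattern[rank % len(pattern)]
    # Each parcel score is the mean of the items assigned to that parcel
    return np.column_stack([
        items[:, assignment == p].mean(axis=1) for p in range(n_parcels)
    ])
```

The resulting parcel scores would then serve as indicators of the dimension factor in a confirmatory factor analysis, in place of the single item-level ratings; a one-parcel approach corresponds to collapsing all items for a dimension into a single mean score.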