Brief Report

Does an Overall Job Crafting Dimension Exist?

A Multidimensional Item Response Theory Analysis

Published online: https://doi.org/10.1027/1015-5759/a000638

Abstract

Job crafting is a multidimensional construct that can be conceptualized both at the general level and at the daily level. Several researchers have used scores aggregated across the dimensions of job crafting to represent an overall job crafting construct. The purpose of the research presented herein is to investigate the factor structure of the general and daily versions of the job crafting scale developed by Petrou et al. (2012; PJCS), using parametric multidimensional Item Response Theory (IRT) models. A sample of 675 employees working in different occupational sectors completed the Greek version of the scales. Results are in line with theoretical underpinnings and suggest that, although a bifactor IRT model offers an adequate fit, a correlated-factors IRT model is more appropriate for both versions of the PJCS. The results caution against using scores aggregated across the dimensions of the PJCS for either the general or the daily version.

Job crafting is considered an important proactive approach to job redesign and is conceptualized as a multidimensional proactive behavior (Tims & Bakker, 2010). Systematic evidence from meta-analytic studies (Lichtenthaler & Fischbach, 2019; Rudolph et al., 2017) indicates that the dimensions of job crafting are interrelated yet distinct, are not mutually exclusive, and have different antecedents and outcomes.

Several researchers, however, have used scores aggregated across the dimensions of job crafting to represent an overall job crafting construct (e.g., Akkermans & Tims, 2017), implying that job crafting is a general unidimensional factor. Yet the use of such aggregated scores to represent overall job crafting needs empirical justification.

In the present paper, we aim to investigate the factor structure of the general and daily level versions of the job crafting scale (PJCS) developed by Petrou et al. (2012) and modified from previous research (Tims et al., 2012). The PJCS consists of 13 items at the general level and 10 items at the daily level and differentiates between three types of job crafting behaviors, namely seeking resources, seeking challenges, and reducing demands.

We seek to contribute to a better understanding of the factorial structure of job crafting and to clarify whether an overall factor of job crafting exists in the general and daily versions of the PJCS. Our research uses multidimensional Item Response Theory (MIRT) as a viable approach to assess the factor structure of the PJCS. Compared to previous analyses of job crafting constructs (Bakker et al., 2018; Tims et al., 2012), which aimed at finding the smallest number of factors that reproduced the observed correlation matrix of the scales’ items using factor analysis techniques, MIRT focuses on individual scale items and overcomes the item-person confounding found in classical test theory (CTT) techniques (such as CFA). Moreover, in MIRT the ordinal-level raw data from rating scales (i.e., Likert scales) are transformed through logarithmic transformations onto an interval scale, rather than being treated as continuous, as is the case for CTT techniques (Reckase, 2009).
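To make the IRT approach concrete, the following sketch computes response-category probabilities under the Graded Response Model, the polytomous model evaluated later in this report. The cumulative logistic form and the item parameters shown (discrimination `a`, thresholds `b`) are illustrative values, not estimates from the study.

```python
import math

def grm_category_probs(theta, a, b):
    """Graded Response Model: probability of each response category.

    theta : latent trait value
    a     : item discrimination
    b     : ordered category thresholds (len = K - 1 for K categories)
    Cumulative probability P*(X >= k) follows a logistic curve in theta.
    """
    # Boundary cumulative probabilities: P*(X >= lowest) = 1, P*(X > highest) = 0
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - bk))) for bk in b] + [0.0]
    # Each category probability is the difference of adjacent cumulative curves
    return [cum[k] - cum[k + 1] for k in range(len(b) + 1)]

# Example: a 5-point Likert item with hypothetical parameters
probs = grm_category_probs(theta=0.5, a=1.8, b=[-1.5, -0.5, 0.4, 1.3])
```

Because the thresholds are ordered, the category probabilities are nonnegative and sum to one for any value of theta.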

Method

Participants and Procedure

Data were based on the responses of 675 Greek employees (59.4% female) working in different occupational sectors (40% in the public sector). Participants were recruited through network sampling from September 2019 to December 2019. The mean age was 39.80 years (SD = 10.87 years). The majority of the sample had a university degree (48.7%) and worked an average of 38.34 hr per week (SD = 11.09 hr).

Measurement of Theoretical Constructs

Native speakers translated all items of the main constructs used in the study into Greek. A back-translation into English by other bilingual individuals confirmed the adequacy of the translation. All constructs included in the analysis were assessed with self-report measures. Responses to items were made on 5-point Likert scales. Two different forms of the questionnaire were presented to respondents in order to counterbalance the order of the job crafting constructs.

General Job Crafting

We adopted the three scales of general job crafting used by Petrou et al. (2012). Respondents were asked to indicate how often they engage in several behaviors in general. Table 1 presents Cronbach’s reliability coefficients for the three scales.

Table 1 Cronbach’s coefficient α, first eigenvalue, percentage of variance accounted for by the first eigenvalue, KMO, and Bartlett’s statistic with degrees of freedom (df) from EFA analyses

Daily Job Crafting

We adopted the three scales used by Petrou et al. (2012) for the daily version of job crafting. Respondents were asked to indicate how often they engaged in several behaviors during the past day. Table 1 presents Cronbach’s reliability coefficients for the three scales.

Statistical Procedure

We examined whether data from both scales are “unidimensional enough” (i.e., in exploratory factor analysis [EFA], the first factor accounts for at least 20% of the total variance; Anderson et al., 2017). We used EFA (FACTOR v.10.8) with minimum rank factor analysis (MRFA) and direct Oblimin rotation for factor extraction. The EFA results (Table 1) support the existence of sufficiently unidimensional factors.
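The 20% criterion above amounts to checking the share of total variance captured by the first eigenvalue of the item correlation matrix. A minimal sketch, using power iteration and an illustrative 4-item correlation matrix (not the study's data):

```python
def first_eigenvalue(R, iters=200):
    """Largest eigenvalue of a symmetric matrix via power iteration."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue for the converged vector
    Rv = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * Rv[i] for i in range(n)) / sum(v[i] * v[i] for i in range(n))

def unidimensional_enough(R, threshold=0.20):
    """First factor accounts for at least `threshold` of total variance.
    The trace of a correlation matrix equals the number of items."""
    return first_eigenvalue(R) / len(R) >= threshold

# Illustrative inter-item correlations (hypothetical values)
R = [[1.00, 0.45, 0.40, 0.35],
     [0.45, 1.00, 0.50, 0.30],
     [0.40, 0.50, 1.00, 0.38],
     [0.35, 0.30, 0.38, 1.00]]
```

For this matrix the first eigenvalue is roughly 2.2, i.e., about 55% of the total variance, comfortably above the 20% criterion.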

Then, we examined the fit of four unidimensional polytomous IRT models: the Partial Credit Model (PCM), the Generalized Partial Credit Model (GPCM), the Rating Scale Model (RSM), and the Graded Response Model (GRM; for the models’ equations, see De Ayala, 2013). To determine the model with the best fit, we used two indices across models: the Bayesian information criterion (BIC) and Akaike’s information criterion (AIC).
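The model comparison reduces to ranking candidates by their information criteria, which penalize log-likelihood by model complexity. A sketch with hypothetical log-likelihoods and parameter counts (the numbers below are invented for illustration, not the study's estimates):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2*logL (lower is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k*ln(n) - 2*logL (lower is better)."""
    return k * math.log(n) - 2 * loglik

# Hypothetical fits for the four candidate models (illustrative numbers only)
models = {
    "RSM":  {"loglik": -10550.0, "k": 17},
    "PCM":  {"loglik": -10450.0, "k": 52},
    "GPCM": {"loglik": -10390.0, "k": 65},
    "GRM":  {"loglik": -10360.0, "k": 65},
}
n = 675  # sample size, as in the study

best_by_aic = min(models, key=lambda m: aic(models[m]["loglik"], models[m]["k"]))
best_by_bic = min(models, key=lambda m: bic(models[m]["loglik"], models[m]["k"], n))
```

Because BIC's ln(n) penalty exceeds AIC's constant penalty of 2 for any n > 7, BIC favors more parsimonious models when log-likelihoods are close.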

Next, we fitted multidimensional alternatives of the best-fitting unidimensional IRT model. We examined model-data fit and item fit using several statistical procedures: (1) the M2 goodness-of-fit statistic and the associated root mean square error of approximation (RMSEA); for good model fit, the M2 statistic is nonsignificant (p > .05) and RMSEA values close to zero indicate acceptable fit; (2) the standardized local dependence (LD) χ2, with values greater than |10| indicating likely LD; and (3) the S-χ2 item-fit diagnostic statistic, with significant values indicating lack of fit. For the analyses, we used the computer program IRTPRO (v.4.20; Cai et al., 2017).
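Criterion (2) is a simple screen over the pairwise LD matrix. A minimal sketch of that screen, applied to a hypothetical 4-item standardized LD χ2 matrix (the values are made up for illustration):

```python
def flag_local_dependence(ld, cutoff=10.0):
    """Return item-index pairs (i, j) whose standardized LD chi-square
    exceeds |cutoff|, signaling residual covariation that a
    unidimensional model fails to capture."""
    n = len(ld)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(ld[i][j]) > cutoff]

# Hypothetical standardized LD chi-square matrix (symmetric, zero diagonal)
ld = [[ 0.0,   3.2, 12.4,   1.1],
      [ 3.2,   0.0,  2.7, -11.6],
      [12.4,   2.7,  0.0,   4.9],
      [ 1.1, -11.6,  4.9,   0.0]]

flagged_pairs = flag_local_dependence(ld)
```

Flagged pairs motivate moving to a multidimensional model, as in the Results below, where widespread LD prompted the mGRM and bGRM fits.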

Results

Classical Item Analysis of the PJCS

Results of the classical item analysis of the PJCS are presented in Table E1 of the Electronic Supplementary Material, ESM 1. The indices indicate that most of the items have good discrimination. Items of the reducing demands subscale have relatively low discrimination (below 0.40). The mean item scores indicate that most of the items are “easy,” in that the mean is above the midpoint of the scale. We found positive, and in most cases statistically significant, correlations between subscale scores for the general (rgSR, gSC = 0.45, p < .001; rgSR, gRD = 0.04, p = .19; rgSC, gRD = 0.13, p < .001) and daily (rdSR, dSC = 0.50, p < .001; rdSR, dRD = 0.12, p < .001; rdSC, dRD = 0.01, p = .77) versions of the PJCS.

Unidimensional IRT Model Comparisons

Results of the model selection procedure (see Table E2 of ESM 1) for the unidimensional IRT models suggest that the GRM could be selected as the best model for both overall job crafting scales.

Evaluation of Local Dependence

Standardized LD-χ2 values larger than |10| suggested that item-response covariation in both scales may be better modeled by a multidimensional model, such as the multidimensional GRM (mGRM) or the bifactor GRM (bGRM). We therefore fitted an mGRM and a bGRM to both versions of the overall job crafting scale (see Tables E3a and E3b of ESM 1).

Global Model-Data Fit and Comparison for the GRM, mGRM, and bGRM

The bottom half of Table E4 of ESM 1 displays a summary of the model comparison and fit results for the GRM, mGRM, and bGRM. All indices (−2LL, BIC, AIC) agree that the best-fitting model is the bifactor model for both versions of the overall job crafting scale.

Parameter Estimates for the bGRM

Tables E5 and E6 of ESM 1 summarize the bGRM parameter estimates for the general and daily overall versions. These parameters were used to calculate the explained common variance (ECV) index and the item explained common variance (IECV) index (Rodriguez et al., 2016). Large ECV values (i.e., > 0.85) suggest that the set of items can reasonably be considered unidimensional.

The ECV for the overall job crafting dimension is 0.37 for the general version and 0.39 for the daily version; that is, the overall job crafting dimension accounts for 37% and 39%, respectively, of the variance explained by the bifactor model as a whole. The seeking resources, seeking challenges, and reducing demands dimensions accounted for 12%, 23%, and 28% of the variance, respectively, in the general version and for 20%, 10%, and 30% in the daily version. The IECV was also computed for each item on the general dimension of the bifactor model (see Tables E4 and E5 of ESM 1), with almost all values below 0.85.
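The ECV and IECV indices above follow directly from the bifactor loadings: the general factor's share of common variance is the sum of squared general loadings over the sum of all squared loadings (Rodriguez et al., 2016). A sketch with hypothetical standardized loadings (not the study's estimates):

```python
def ecv(general_loadings, specific_loadings):
    """Explained common variance of the general factor in a bifactor model:
    sum of squared general loadings over the sum of all squared loadings."""
    g2 = sum(l ** 2 for l in general_loadings)
    s2 = sum(l ** 2 for l in specific_loadings)
    return g2 / (g2 + s2)

def iecv(general_loading, specific_loading):
    """Item-level ECV: share of one item's common variance that is
    attributable to the general factor."""
    g2 = general_loading ** 2
    return g2 / (g2 + specific_loading ** 2)

# Hypothetical standardized loadings for five items: weak general factor,
# stronger specific factors, mirroring the pattern reported in the study
gen  = [0.45, 0.50, 0.40, 0.35, 0.30]
spec = [0.55, 0.60, 0.65, 0.70, 0.62]

overall_ecv = ecv(gen, spec)
item_ecvs = [iecv(g, s) for g, s in zip(gen, spec)]
```

Here the general factor explains roughly 30% of the common variance, far below the 0.85 benchmark for treating the item set as unidimensional.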

Discussion

In this brief report, we used MIRT to evaluate the factor structure of the Petrou et al. (2012) job crafting scale. The study contributes to research by clarifying that an overall factor of job crafting does not exist in either scale version. Based on our MIRT analyses, the correlated-factors model provided an adequate global fit to the data. As such, we suggest that the Greek version of the PJCS is best conceptualized as being defined by three distinct yet related dimensions.

From a theoretical perspective, our empirical results do not support the existence of a measurable “overall job crafting” construct at the general or daily level. This finding is in line with recent studies (e.g., Bakker et al., 2018) that used Classical Test Theory (CTT) techniques on different job crafting constructs, and with recent meta-analyses showing that different dimensions of job crafting have different antecedents and outcomes (Lichtenthaler & Fischbach, 2019).

Regarding limitations, the study used self-report measures; as such, self-report bias and common method variance may have affected the results. Future research on the structure of job crafting could benefit from the use of reports made by colleagues or supervisors. Furthermore, our sample was composed of highly educated Greek employees; generalizability to employees with less formal education remains an open question. It is plausible that less-educated employees engage in job crafting behaviors at different levels and in different forms than more educated employees do (e.g., crafting behaviors that protect their status and function at work).

Moreover, our analyses implicitly assumed a compensatory MIRT model (Reckase, 2009), in which a low score on one dimension can be compensated by a high score on another. Future research could use non-compensatory MIRT models and examine how overall scores relate to external variables in job crafting’s nomological network.
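The compensatory versus non-compensatory distinction can be made concrete with the two-dimensional response functions below. In the compensatory form the dimensions combine additively inside one logistic, so a deficit on one trait can be offset; in the (partially) non-compensatory form per-dimension probabilities multiply, so the weakest trait caps the response probability. All parameter values are illustrative.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_compensatory(thetas, a, d):
    """Compensatory MIRT item response (Reckase, 2009): the weighted
    traits are summed before the logistic, so dimensions trade off."""
    return logistic(sum(ai * ti for ai, ti in zip(a, thetas)) + d)

def p_noncompensatory(thetas, a, b):
    """Partially compensatory model: per-dimension logistic probabilities
    multiply, so a low trait on any dimension limits the overall probability."""
    p = 1.0
    for ai, bi, ti in zip(a, b, thetas):
        p *= logistic(ai * (ti - bi))
    return p

# Illustrative case: low theta1 exactly offset by high theta2
a, d, b = [1.2, 1.2], 0.0, [0.0, 0.0]
comp = p_compensatory([-1.0, 1.0], a, d)        # full compensation
noncomp = p_noncompensatory([-1.0, 1.0], a, b)  # capped by the low theta1
```

With equal discriminations, the compensatory probability here is exactly 0.5, while the non-compensatory probability stays well below it, which is the behavioral difference an aggregate score implicitly assumes away.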

Is there any merit to developing direct measures of overall job crafting? Job crafting is an important proactive approach to job redesign that facilitates the work-related well-being of employees. Theoretically, we could expect the different dimensions of job crafting to interact in determining employee behaviors (in line with our compensatory MIRT conceptualization), and recent empirical findings confirm this (see Petrou & Xanthopoulou, 2020, for interactive effects of job crafting dimensions). More research is clearly needed on whether and which dimensions of job crafting can be represented by an aggregate score.

Conclusion

The results of the present study caution against aggregating scores across the dimensions of job crafting for both the general and daily versions of the PJCS. The PJCS conceptualizes job crafting as a multidimensional proactive behavior; as such, alternative operationalizations of job crafting (such as an overall aggregated scale score) may lead empirical research to very different conclusions regarding antecedents and outcomes.

Electronic Supplementary Materials

The electronic supplementary material is available with the online version of the article at https://doi.org/10.1027/1015-5759/a000638

References

  • Akkermans, J., & Tims, M. (2017). Crafting your career: How career competencies relate to career success via job crafting. Applied Psychology: An International Review, 66(1), 168–195. https://doi.org/10.1111/apps.12082

  • Anderson, D., Kahn, J. D., & Tindal, G. (2017). Exploring the robustness of a unidimensional item response theory model with empirically multidimensional data. Applied Measurement in Education, 30(3), 163–177. https://doi.org/10.1080/08957347.2017.1316277

  • Bakker, A. B., Ficapal-Cusí, P., Torrent-Sellens, J., Boada-Grau, J., & Hontangas-Beltrán, P. M. (2018). The Spanish version of the Job Crafting Scale. Psicothema, 30(1), 136–142. https://doi.org/10.7334/psicothema2016.293

  • Cai, L., Thissen, D., & du Toit, S. (2017). IRTPRO for Windows (Version 4.20) [Computer software]. Scientific Software International.

  • De Ayala, R. J. (2013). The theory and practice of item response theory. Guilford Press.

  • Lichtenthaler, P. W., & Fischbach, A. (2019). A meta-analysis on promotion- and prevention-focused job crafting. European Journal of Work and Organizational Psychology, 28(1), 30–50. https://doi.org/10.1080/1359432X.2018.1527767

  • Petrou, P., Demerouti, E., Peeters, M. C., Schaufeli, W. B., & Hetland, J. (2012). Crafting a job on a daily basis: Contextual correlates and the link to work engagement. Journal of Organizational Behavior, 33(8), 1120–1141. https://doi.org/10.1002/job.1783

  • Petrou, P., & Xanthopoulou, D. (2020). Interactive effects of approach and avoidance job crafting in explaining weekly variations in work performance and employability. Applied Psychology: An International Review. https://doi.org/10.1111/apps.12277

  • Reckase, M. (2009). Multidimensional item response theory. Springer.

  • Rodriguez, A., Reise, S. P., & Haviland, M. G. (2016). Evaluating bifactor models: Calculating and interpreting statistical indices. Psychological Methods, 21(2), 137–150. https://doi.org/10.1037/met0000045

  • Rudolph, C. W., Katz, I. M., Lavigne, K. N., & Zacher, H. (2017). Job crafting: A meta-analysis of relationships with individual differences, job characteristics, and work outcomes. Journal of Vocational Behavior, 102, 112–138. https://doi.org/10.1016/j.jvb.2017.05.008

  • Tims, M., & Bakker, A. B. (2010). Job crafting: Towards a new model of individual job redesign. South African Journal of Industrial Psychology, 36, 1–9.

  • Tims, M., Bakker, A. B., & Derks, D. (2012). Development and validation of the Job Crafting Scale. Journal of Vocational Behavior, 80(1), 173–186. https://doi.org/10.1016/j.jvb.2011.05.009