Published Online: https://doi.org/10.1027/2698-1866/a000002

Abstract. This article explains how papers submitted to Psychological Test Adaptation and Development should be structured, to guide authors in preparing their submissions. Each submission should adhere as closely as possible to the following structure. If certain aspects cannot be provided for any reason, this should be explained and addressed in the limitations and recommendations. The outline in Table 1 is followed by a detailed explanation of each section.

Table 1. Content required in papers, by section
