The Use of Student Data in Job Analysis Research
Abstract
The use of student data in industrial and organizational psychology research has been questioned repeatedly (e.g., Cornelius, DeNisi, & Blencoe, 1984; Gordon, Slade, & Schmitt, 1986; DeNisi, Cornelius, & Blencoe, 1987). In this paper we discuss the problems surrounding the use and comparability of student data as opposed to expert data, and illustrate these with a study conducted in the Netherlands. The study examined the differences between first-year psychology students, students of industrial and organizational (I/O) psychology, and professionals (psychologists) in the field of personnel psychology. Subjects rated a detailed job description on the importance of 20 attributes for successful job performance. As expected, the results showed that neither student group can be considered equivalent to professionals, but that I/O students are more similar to professionals than first-year students are. For specific research questions, specific student samples, such as I/O students, can yield results that generalize to professionals. The use of first-year students, however, remains questionable in this context.
References
Cardy, R. L., Bernardin, H. J., Abbott, J. G., Senderak, M. P., & Taylor, K. (1987). The effects of individual performance schemata and dimension familiarization on rating accuracy. Journal of Occupational Psychology, 60, 197–205.
Cornelius, E. T., DeNisi, A. S., & Blencoe, A. G. (1984). Expert and naive raters using the PAQ: Does it matter? Personnel Psychology, 37, 453–464.
Dam, K. van (1996). Dansende beren. Beoordelingsprocessen bij personeelsselectie [Dancing bears: Judgment processes in personnel selection]. Amsterdam: Universiteit van Amsterdam, Vakgroep Arbeids- en Organisatiepsychologie.
DeNisi, A. S., Cornelius, E. T., & Blencoe, A. G. (1987). Further investigation of common knowledge effects on job analysis ratings. Journal of Applied Psychology, 72, 262–268.
Dunnette, M. D. (1976). Aptitudes, abilities and skills. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology. Chicago: Rand McNally.
Fleishman, E. A., & Quaintance, M. K. (1984). Taxonomies of human performance. Orlando: Academic Press.
Friedman, L., & Harvey, R. J. (1986). Can raters with reduced job descriptive information provide accurate Position Analysis Questionnaire (PAQ) ratings? Personnel Psychology, 39, 779–789.
Gordon, M. E., Slade, L. A., & Schmitt, N. (1986). The "science of the sophomore" revisited: From conjecture to empiricism. Academy of Management Review, 11, 191–207.
Hahn, D. C., & Dipboye, R. L. (1988). Effects of training and information on the accuracy and reliability of job evaluations. Journal of Applied Psychology, 73, 146–153.
Harvey, R. J., & Lozada-Larsen, S. R. (1988). Influence of amount of job descriptive information on job analysis rating accuracy. Journal of Applied Psychology, 73, 457–461.
Jones, A. P., Main, D. S., Butler, M. C., & Johnson, L. A. (1982). Narrative job descriptions as potential sources of job analysis ratings. Personnel Psychology, 35, 813–828.
Schmitt, N., & Fine, S. A. (1983). Inter-rater reliability of judgements of functional levels and skill requirements of jobs based on written task statements. Journal of Occupational Psychology, 56, 121–127.
Smith, J. E., & Hakel, M. D. (1979). Convergence among data sources, response bias, and reliability and validity of a structured job analysis questionnaire. Personnel Psychology, 32, 677–692.