
Same as it ever was: A clarification on the sources of predictable variance in job performance ratings

Published online by Cambridge University Press:  27 August 2024

Paul R. Sackett* — University of Minnesota Twin Cities, Minneapolis, MN, USA
Dan J. Putka — Human Resources Research Organization, Alexandria, VA, USA
Brian J. Hoffman — University of Georgia, Athens, GA, USA

*Corresponding author: Paul R. Sackett; Email: psackett@umn.edu

Type: Commentaries

© The Author(s), 2024. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology


References

Foster, J., Steel, P., Harms, P., O’Neill, T., & Wood, D. (2024). Selection tests work better than we think they do, and have for years. Industrial and Organizational Psychology, 17.
Hoffman, B. J., & Woehr, D. J. (2009). Disentangling the meaning of multisource performance rating source and dimension factors. Personnel Psychology, 62(4), 735–765.
Jackson, D. J., Michaelides, G., Dewberry, C., Schwencke, B., & Toms, S. (2020). The implications of unconfounding multisource performance ratings. Journal of Applied Psychology, 105(3), 312.
Lance, C. E., Hoffman, B. J., Gentry, W. A., & Baranik, L. E. (2008). Rater source factors represent important subcomponents of the criterion construct space, not rater bias. Human Resource Management Review, 18(4), 223–232.
Speer, A. B., Delacruz, A. Y., Wegmeyer, L. J., & Perrotta, J. (2023). Meta-analytical estimates of interrater reliability for direct supervisor performance ratings: Optimism under optimal measurement designs. Journal of Applied Psychology, 109(3), 456–467. https://doi.org/10.1037/apl0001146
Zhou, Y., Shen, W., Beatty, A. S., & Sackett, P. R. (2024). An updated meta-analysis of the interrater reliability of supervisory performance ratings. Journal of Applied Psychology, 109, 949–970. https://doi.org/10.1037/apl0001174