King's College London

Research portal

Uncertainty about Rater Variance and Small Dimension Effects Impact Reliability in Supervisor Ratings

Research output: Contribution to journal › Article › peer-review

Duncan Jackson, George Michaelides, Chris Dewberry, Amanda Jones, Simon Toms, Benjamin Schwenke, Wei-Ning Yang

Original language: English
Pages (from-to): 278-301
Number of pages: 24
Journal: Human Performance
Volume: 35
Issue number: 3-4
DOIs
Published: 2022

Bibliographical note

Publisher Copyright: © 2022 The Author(s). Published with license by Taylor & Francis Group, LLC.

Documents

  • MSS Supervisor Performance Ratings HP 2022

    MSS_Supervisor_Performance_Ratings_UNMASKED_R3.docx, 253 KB, application/vnd.openxmlformats-officedocument.wordprocessingml.document

    Uploaded date: 04 Aug 2022

    Version: Accepted author manuscript

Abstract

We modeled the effects commonly described as defining the measurement structure of supervisor performance ratings. In doing so, we contribute to different theoretical perspectives, including components of the multifactor and mediated models of performance ratings. Across two reanalyzed samples (Sample 1, N ratees = 392, N raters = 244; Sample 2, N ratees = 342, N raters = 397), we found a structure primarily reflective of general (>27% of variance explained) and rater-related (>49%) effects, with relatively small performance dimension effects (between 1% and 11%). We drew on findings from the assessment center literature to approximate the proportion of rater variance that might theoretically contribute to reliability in performance ratings. We found that even moderate contributions of rater-related variance to reliability resulted in a sizable impact on reliability estimates, drawing them closer to accepted criteria.
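The abstract's final point can be made concrete with a little arithmetic. The sketch below is purely illustrative and is not the authors' code or data: the variance shares are hypothetical values chosen only to echo the pattern reported in the abstract (general effects >27%, rater-related effects >49%, small dimension effects), and the `reliability` function simply credits some fraction of the rater variance to true-score variance to show how quickly the reliability estimate climbs.

```python
# Hypothetical illustration (not the authors' model): variance shares chosen
# to loosely match the abstract's pattern, not the paper's actual estimates.
var_general = 0.28   # general performance effect
var_rater = 0.50     # rater-related effect
var_dim = 0.06       # performance-dimension effects (small)
var_resid = 0.16     # residual

total = var_general + var_rater + var_dim + var_resid

def reliability(rater_fraction):
    """Proportion of total variance treated as reliable when `rater_fraction`
    of the rater-related variance is credited to true-score variance.
    Here general and dimension variance are assumed reliable throughout."""
    reliable = var_general + var_dim + rater_fraction * var_rater
    return reliable / total

for frac in (0.0, 0.25, 0.5):
    print(f"rater fraction counted as reliable: {frac:.2f} "
          f"-> reliability estimate: {reliability(frac):.2f}")
```

With these illustrative numbers, treating none of the rater variance as reliable yields an estimate of about 0.34, while crediting even half of it pushes the estimate to roughly 0.59, mirroring the abstract's observation that moderate rater-variance contributions draw reliability estimates closer to accepted criteria.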

