Uncertainty about Rater Variance and Small Dimension Effects Impact Reliability in Supervisor Ratings

Duncan Jackson, George Michaelides, Chris Dewberry, Amanda Jones, Simon Toms, Benjamin Schwenke, Wei-Ning Yang

Research output: Contribution to journal › Article › peer-review


Abstract

We modeled the effects commonly described as defining the measurement structure of supervisor performance ratings. In doing so, we contribute to different theoretical perspectives, including components of the multifactor and mediated models of performance ratings. Across two reanalyzed samples (Sample 1, N ratees = 392, N raters = 244; Sample 2, N ratees = 342, N raters = 397), we found a structure primarily reflective of general (>27% of variance explained) and rater-related (>49%) effects, with relatively small performance dimension effects (between 1% and 11%). We drew on findings from the assessment center literature to approximate the proportion of rater variance that might theoretically contribute to reliability in performance ratings. We found that even moderate contributions of rater-related variance to reliability resulted in a sizable impact on reliability estimates, drawing them closer to accepted criteria.
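The abstract's key claim is that counting even a moderate share of rater-related variance as reliable substantially raises reliability estimates. The sketch below illustrates this arithmetic with a generalizability-style coefficient; the variance proportions are hypothetical values loosely patterned on the ranges the abstract reports, not figures from the paper.

```python
# Hypothetical variance proportions, loosely patterned on the ranges in
# the abstract (general > 27%, rater-related > 49%, dimension 1-11%);
# these exact numbers are illustrative only.
var_general = 0.30    # general performance factor
var_rater = 0.50      # rater-related effects
var_dimension = 0.05  # performance-dimension effects
var_residual = 0.15   # residual/error

total = var_general + var_rater + var_dimension + var_residual

def reliability(rater_reliable_share):
    """Proportion of total variance treated as reliable when a given
    share of rater variance is counted toward true-score variance."""
    reliable = var_general + var_dimension + rater_reliable_share * var_rater
    return reliable / total

# Treating no rater variance as reliable vs. a moderate (50%) share.
print(round(reliability(0.0), 2))
print(round(reliability(0.5), 2))
```

With these illustrative numbers, moving from 0% to 50% of rater variance counted as reliable lifts the coefficient from roughly .35 to roughly .60, which is the kind of shift toward conventional reliability criteria the abstract describes.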

Original language: English
Pages (from-to): 278-301
Number of pages: 24
Journal: HUMAN PERFORMANCE
Volume: 35
Issue number: 3-4
DOIs
Publication status: Published - 19 Aug 2022

