King's College London

Research portal

The implications of unconfounding multisource performance ratings

Research output: Contribution to journal › Article › peer-review

Duncan John Ross Jackson, George Michaelides, Chris Dewberry, Ben Schwencke, Simon Toms

Original language: English
Pages (from-to): 312-329
Number of pages: 18
Journal: Journal of Applied Psychology
Volume: 105
Issue number: 3
Early online date: 22 Jul 2019
DOIs
Accepted/In press: 5 Jun 2019
E-pub ahead of print: 22 Jul 2019
Published: Mar 2020

Abstract

The multifaceted structure of multisource job performance ratings has been a subject of research and debate for over 30 years. However, progress in the field has been hampered by the confounding of effects relevant to the measurement design of multisource ratings and, as a consequence, the impact of ratee-, rater-, source-, and dimension-related effects on the reliability of multisource ratings remains unclear. In separate samples obtained from 2 different applications and measurement designs (N₁ [ratees] = 392, N₁ [raters] = 1,495; N₂ [ratees] = 342, N₂ [raters] = 2,636), we, for the first time, unconfounded all systematic effects commonly cited as being relevant to multisource ratings using a Bayesian generalizability theory approach. Our results suggest that the main contributors to the reliability of multisource ratings are source-related and general performance effects that are independent of dimension-related effects. In light of our findings, we discuss the interpretation and application of multisource ratings in organizational contexts.
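The reliability question the abstract raises — how much general-performance and source-related variance contribute to averaged multisource ratings once effects are unconfounded — can be sketched with a standard generalizability coefficient. The variance components below are illustrative placeholders, not the paper's estimates, and the function is a textbook G-theory formula, not the authors' Bayesian model:

```python
# Illustrative sketch: a generalizability (reliability) coefficient for a
# design where each ratee is rated by n_raters raters within each of
# n_sources sources, with ratings averaged over raters and sources.
# All numeric variance components here are hypothetical.

def g_coefficient(var_ratee, var_source, var_residual, n_sources, n_raters):
    """Generalizability coefficient: true-score (ratee) variance divided by
    itself plus design-averaged error variance."""
    error = var_source / n_sources + var_residual / (n_sources * n_raters)
    return var_ratee / (var_ratee + error)

# Hypothetical components: general performance 0.30, ratee-by-source 0.25,
# residual (rater idiosyncrasy plus error) 0.45.
g_few = g_coefficient(0.30, 0.25, 0.45, n_sources=3, n_raters=1)
g_many = g_coefficient(0.30, 0.25, 0.45, n_sources=3, n_raters=4)
print(g_few, g_many)  # adding raters per source shrinks error, raising G
```

Under this sketch, source-related variance enters the error term divided only by the number of sources, which is why (as the abstract argues) source effects weigh heavily on the reliability of multisource ratings even when many raters are used.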

