The implications of unconfounding multisource performance ratings

Duncan John Ross Jackson, George Michaelides, Chris Dewberry, Ben Schwencke, Simon Toms

Research output: Contribution to journal › Article › peer-review


Abstract

The multifaceted structure of multisource job performance ratings has been a subject of research and debate for over 30 years. However, progress in the field has been hampered by the confounding of effects relevant to the measurement design of multisource ratings and, as a consequence, the impact of ratee-, rater-, source-, and dimension-related effects on the reliability of multisource ratings remains unclear. In separate samples obtained from 2 different applications and measurement designs (N₁ [ratees] = 392, N₁ [raters] = 1,495; N₂ [ratees] = 342, N₂ [raters] = 2,636), we, for the first time, unconfounded all systematic effects commonly cited as being relevant to multisource ratings using a Bayesian generalizability theory approach. Our results suggest that the main contributors to the reliability of multisource ratings are source-related and general performance effects that are independent of dimension-related effects. In light of our findings, we discuss the interpretation and application of multisource ratings in organizational contexts.
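To make the abstract's approach concrete, the following is a minimal sketch of a Bayesian variance-components decomposition for multisource ratings, in the spirit of generalizability theory as described above. It is not the authors' implementation: the PyMC library, the file name "multisource_ratings.csv", the column names (ratee, rater, source, dimension, rating), the priors, and the omission of interaction effects (e.g., ratee × source, ratee × dimension) are all illustrative assumptions.

# Sketch only: decomposes rating variance into ratee-, rater-, source-,
# and dimension-related components with weakly informative priors.
import pandas as pd
import pymc as pm

# Hypothetical long-format data: one row per rating.
df = pd.read_csv("multisource_ratings.csv")  # placeholder path

ratee_idx, ratees = pd.factorize(df["ratee"])
rater_idx, raters = pd.factorize(df["rater"])
source_idx, sources = pd.factorize(df["source"])
dim_idx, dims = pd.factorize(df["dimension"])

with pm.Model() as gtheory_model:
    # Standard deviations of each systematic effect.
    sd_ratee = pm.HalfNormal("sd_ratee", 1.0)    # general performance effect
    sd_source = pm.HalfNormal("sd_source", 1.0)  # source effect
    sd_rater = pm.HalfNormal("sd_rater", 1.0)    # rater effect (raters nested in sources)
    sd_dim = pm.HalfNormal("sd_dim", 1.0)        # dimension effect
    sd_resid = pm.HalfNormal("sd_resid", 1.0)    # residual

    # Random effects for each facet of the measurement design.
    u_ratee = pm.Normal("u_ratee", 0.0, sd_ratee, shape=len(ratees))
    u_source = pm.Normal("u_source", 0.0, sd_source, shape=len(sources))
    u_rater = pm.Normal("u_rater", 0.0, sd_rater, shape=len(raters))
    u_dim = pm.Normal("u_dim", 0.0, sd_dim, shape=len(dims))

    mu = pm.Normal("mu", 0.0, 5.0)
    expected = (mu + u_ratee[ratee_idx] + u_source[source_idx]
                + u_rater[rater_idx] + u_dim[dim_idx])

    pm.Normal("rating", expected, sd_resid, observed=df["rating"].values)

    trace = pm.sample(1000, tune=1000, target_accept=0.9)

# Posterior share of total variance for each effect, a rough analogue of
# generalizability-theory variance components.
post = trace.posterior
variances = {name: post[f"sd_{name}"] ** 2
             for name in ["ratee", "source", "rater", "dim", "resid"]}
total = sum(variances.values())
for name, var in variances.items():
    print(name, float((var / total).mean()))

Under this kind of decomposition, large posterior shares for the ratee and source components relative to the dimension component would correspond to the pattern the abstract reports, where source-related and general performance effects, rather than dimension-related effects, drive reliability.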
Original language: English
Pages (from-to): 312-329
Number of pages: 18
Journal: Journal of Applied Psychology
Volume: 105
Issue number: 3
Early online date: 22 Jul 2019
DOIs
Publication status: Published - Mar 2020

Keywords

  • 360-degree ratings
  • Bayesian generalizability theory
  • Multisource performance ratings
