King's College London

Research portal

Quality of interdisciplinary postsimulation debriefing: 360° evaluation

Research output: Contribution to journal › Article

Louise Hull, Stephanie Russ, Maria Ahmed, Nick Sevdalis, David J. Birnbach

Original language: English
Pages (from-to): 9-16
Number of pages: 8
Journal: BMJ Simulation and Technology Enhanced Learning
Volume: 3
Issue number: 1
Early online date: 11 Dec 2016
DOIs
Publication status: E-pub ahead of print - 11 Dec 2016

Abstract

Introduction: Debriefing is widely perceived to be the most important component of simulation-based training. This study aimed to explore the value of 360° evaluation of debriefing by examining expert debriefing evaluators', debriefers' and learners' perceptions of the quality of interdisciplinary debriefings.

Method: This was a cross-sectional observational study. 41 teams, consisting of 278 learners, underwent simulation-based team training. Immediately following the postsimulation debriefing session, debriefers and learners rated the quality of the debriefing using the validated Objective Structured Assessment of Debriefing (OSAD) framework. All debriefing sessions were video-recorded and subsequently rated by evaluators trained to proficiency in assessing debriefing quality.

Results: Expert debriefing evaluators' and debriefers' perceptions of debriefing quality differed significantly; debriefers perceived the quality of the debriefing they provided more favourably than expert debriefing evaluators did (40.98% of OSAD ratings provided by debriefers were ≥+1 point greater than expert debriefing evaluators' ratings). Further, learners' perceptions of debriefing quality differed from both expert evaluators' and debriefers' perceptions: weak agreement between learners' and expert evaluators' perceptions was found on 2 of 8 OSAD elements (learner engagement and reflection); similarly, weak agreement between learners' and debriefers' perceptions was found on just 1 OSAD element (application).

Conclusions: Debriefers' and learners' perceptions of debriefing quality differ significantly. Both groups tend to perceive the quality of debriefing far more favourably than external evaluators. An overconfident debriefer may fail to identify elements of debriefing that require improvement, and feedback provided by learners to debriefers may be of limited value in facilitating improvement. We recommend periodic external evaluation of debriefing quality.
