King's College London

Research portal

Let’s agree to disagree: Learning highly debatable multirater labelling

Research output: Chapter in Book/Report/Conference proceeding › Conference paper

Carole H. Sudre, Beatriz Gomez Anson, Silvia Ingala, Chris D. Lane, Daniel Jimenez, Lukas Haider, Thomas Varsavsky, Ryutaro Tanno, Lorna Smith, Sébastien Ourselin, Rolf H. Jäger, M. Jorge Cardoso

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 - 22nd International Conference, Proceedings
Editors: Dinggang Shen, Pew-Thian Yap, Tianming Liu, Terry M. Peters, Ali Khan, Lawrence H. Staib, Caroline Essert, Sean Zhou
Publisher: Springer
Pages: 665-673
Number of pages: 9
ISBN (Print): 9783030322502
DOIs
Publication status: Published - 1 Jan 2019
Event: 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019 - Shenzhen, China
Duration: 13 Oct 2019 – 17 Oct 2019

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11767 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2019
Country: China
City: Shenzhen
Period: 13/10/2019 – 17/10/2019

Abstract

Classification and differentiation of small pathological objects may vary greatly among human raters due to differences in training, expertise and consistency over time. In a radiological setting, objects commonly have high within-class appearance variability whilst sharing certain characteristics across different classes, making their distinction even more difficult. As an example, markers of cerebral small vessel disease, such as enlarged perivascular spaces (EPVS) and lacunes, can vary widely in appearance while exhibiting high inter-class similarity, making this task highly challenging for human raters. In this work, we investigate joint models of individual rater behaviour and multi-rater consensus in a deep learning setting, and apply them to a brain lesion object-detection task. Results show that jointly modelling both individual and consensus estimates leads to significant improvements in performance compared to directly predicting consensus labels, while also allowing the characterisation of human-rater consistency.
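
To illustrate the general idea only (a minimal sketch, not the authors' architecture; all names such as MultiRaterClassifier, rater_heads and joint_loss are hypothetical), a shared feature extractor can feed one classification head per rater alongside a separate consensus head, with the per-rater and consensus losses combined during training:

    # Minimal PyTorch sketch of joint per-rater and consensus prediction.
    # Illustrative only: layer sizes, names and the loss weighting are assumptions.
    import torch
    import torch.nn as nn

    class MultiRaterClassifier(nn.Module):
        def __init__(self, in_features: int, num_classes: int, num_raters: int):
            super().__init__()
            # Shared feature extractor for a candidate object (e.g. a lesion patch).
            self.backbone = nn.Sequential(
                nn.Linear(in_features, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
            )
            # One classification head per rater models that rater's behaviour.
            self.rater_heads = nn.ModuleList(
                [nn.Linear(64, num_classes) for _ in range(num_raters)]
            )
            # A separate head predicts the consensus label.
            self.consensus_head = nn.Linear(64, num_classes)

        def forward(self, x):
            feats = self.backbone(x)
            rater_logits = torch.stack([head(feats) for head in self.rater_heads], dim=1)
            consensus_logits = self.consensus_head(feats)
            return rater_logits, consensus_logits

    def joint_loss(rater_logits, consensus_logits, rater_labels, consensus_labels, alpha=0.5):
        # Cross-entropy against each rater's labels plus cross-entropy against the consensus.
        ce = nn.CrossEntropyLoss()
        num_raters = rater_logits.shape[1]
        per_rater = sum(ce(rater_logits[:, r], rater_labels[:, r]) for r in range(num_raters)) / num_raters
        consensus = ce(consensus_logits, consensus_labels)
        return alpha * per_rater + (1 - alpha) * consensus

    if __name__ == "__main__":
        model = MultiRaterClassifier(in_features=32, num_classes=3, num_raters=4)
        x = torch.randn(8, 32)                        # 8 candidate objects, 32 features each
        rater_labels = torch.randint(0, 3, (8, 4))    # one label per rater
        consensus_labels = torch.randint(0, 3, (8,))  # consensus label per object
        rater_logits, consensus_logits = model(x)
        loss = joint_loss(rater_logits, consensus_logits, rater_labels, consensus_labels)
        loss.backward()
        print(loss.item())

In a sketch like this, supervising the per-rater heads lets the network absorb systematic disagreement between raters, while the consensus head is trained on the agreed label; comparing each rater head's predictions with that rater's own labels offers one way to characterise rater consistency.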
