TY - CHAP
T1 - Addressing Deep Learning Model Calibration Using Evidential Neural Networks and Uncertainty-Aware Training
AU - Dawood, Tareen
AU - Chan, Emily
AU - Razavi, Reza
AU - King, Andrew P.
AU - Puyol-Anton, Esther
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - In terms of accuracy, deep learning (DL) models have had considerable success in classification problems for medical imaging applications. However, it is well-known that the outputs of such models, which typically utilise the SoftMax function in the final classification layer, can be over-confident, i.e. they are poorly calibrated. Two competing solutions to this problem have been proposed: uncertainty-aware training and evidential neural networks (ENNs). In this paper we investigate the improvements to model calibration that can be achieved by each of these approaches individually, and by their combination. We perform experiments on two classification tasks: a simpler MNIST digit classification task and a more complex and realistic medical imaging artefact detection task using Phase Contrast Cardiac Magnetic Resonance images. The experimental results demonstrate that model calibration can suffer when the task becomes challenging enough to require a higher capacity model. However, in our complex artefact detection task we saw an improvement in calibration for both a low and a higher capacity model when implementing the ENN and uncertainty-aware training together, indicating that this combined approach can offer a promising way to improve calibration in such settings. The findings highlight the potential use of these approaches to improve model calibration in a complex application, which would in turn improve clinician trust in DL models.
AB - In terms of accuracy, deep learning (DL) models have had considerable success in classification problems for medical imaging applications. However, it is well-known that the outputs of such models, which typically utilise the SoftMax function in the final classification layer, can be over-confident, i.e. they are poorly calibrated. Two competing solutions to this problem have been proposed: uncertainty-aware training and evidential neural networks (ENNs). In this paper we investigate the improvements to model calibration that can be achieved by each of these approaches individually, and by their combination. We perform experiments on two classification tasks: a simpler MNIST digit classification task and a more complex and realistic medical imaging artefact detection task using Phase Contrast Cardiac Magnetic Resonance images. The experimental results demonstrate that model calibration can suffer when the task becomes challenging enough to require a higher capacity model. However, in our complex artefact detection task we saw an improvement in calibration for both a low and a higher capacity model when implementing the ENN and uncertainty-aware training together, indicating that this combined approach can offer a promising way to improve calibration in such settings. The findings highlight the potential use of these approaches to improve model calibration in a complex application, which would in turn improve clinician trust in DL models.
KW - Calibration
KW - Evidential Neural Networks
KW - Medical Imaging
UR - http://www.scopus.com/inward/record.url?scp=85172091438&partnerID=8YFLogxK
U2 - 10.1109/ISBI53787.2023.10230515
DO - 10.1109/ISBI53787.2023.10230515
M3 - Other chapter contribution
AN - SCOPUS:85172091438
T3 - Proceedings - International Symposium on Biomedical Imaging
BT - 2023 IEEE International Symposium on Biomedical Imaging, ISBI 2023
PB - IEEE Computer Society
T2 - 20th IEEE International Symposium on Biomedical Imaging, ISBI 2023
Y2 - 18 April 2023 through 21 April 2023
ER -