Interpretable deep models for cardiac resynchronisation therapy response prediction

Esther Puyol-Antón*, Chen Chen, James R. Clough, Bram Ruijsink, Baldeep S. Sidhu, Justin Gould, Bradley Porter, Marc Elliott, Vishal Mehta, Daniel Rueckert, Christopher A. Rinaldi, Andrew P. King

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

19 Citations (Scopus)


Advances in deep learning (DL) have resulted in impressive accuracy in some medical image classification tasks, but often deep models lack interpretability. The ability of these models to explain their decisions is important for fostering clinical trust and facilitating clinical translation. Furthermore, for many problems in medicine there is a wealth of existing clinical knowledge to draw upon, which may be useful in generating explanations, but it is not obvious how this knowledge can be encoded into DL models: most models are learnt either from scratch or using transfer learning from a different domain. In this paper we address both of these issues. We propose a novel DL framework for image-based classification based on a variational autoencoder (VAE). The framework allows prediction of the output of interest from the latent space of the autoencoder, as well as visualisation (in the image domain) of the effects of crossing the decision boundary, thus enhancing the interpretability of the classifier. Our key contribution is that the VAE disentangles the latent space based on ‘explanations’ drawn from existing clinical knowledge. The framework can predict outputs as well as explanations for these outputs, and also raises the possibility of discovering new biomarkers that are separate (or disentangled) from the existing knowledge. We demonstrate our framework on the problem of predicting response of patients with cardiomyopathy to cardiac resynchronization therapy (CRT) from cine cardiac magnetic resonance images. The sensitivity and specificity of the proposed model on the task of CRT response prediction are 88.43% and 84.39% respectively, and we showcase the potential of our model in enhancing understanding of the factors contributing to CRT response.
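The core idea in the abstract — a VAE whose latent space is partially reserved for clinically meaningful 'explanation' dimensions, with the classifier reading only those dimensions, and decision-boundary crossings visualised by decoding a modified latent code — can be illustrated with a minimal NumPy sketch. All names, dimensions, and the linear encoder/decoder/classifier below are illustrative assumptions, not the paper's actual architecture or trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): image feature length,
# latent size, and number of latent dims reserved for 'explanations'.
D_IMG, D_LATENT, D_EXPLAIN = 64, 16, 4

# Randomly initialised linear maps stand in for trained networks.
W_enc_mu = rng.normal(scale=0.1, size=(D_IMG, D_LATENT))
W_enc_logvar = rng.normal(scale=0.1, size=(D_IMG, D_LATENT))
W_dec = rng.normal(scale=0.1, size=(D_LATENT, D_IMG))
w_clf = rng.normal(scale=0.1, size=D_EXPLAIN)  # classifier weights

def encode(x):
    # VAE encoder: mean and log-variance of the latent posterior.
    return x @ W_enc_mu, x @ W_enc_logvar

def reparameterise(mu, logvar):
    # Standard VAE reparameterisation trick.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    return z @ W_dec

def predict_response(z):
    # Prediction uses ONLY the first D_EXPLAIN latent dims, mirroring
    # the disentanglement of clinically meaningful factors.
    logits = z[..., :D_EXPLAIN] @ w_clf
    return 1.0 / (1.0 + np.exp(-logits))

x = rng.normal(size=(8, D_IMG))          # a batch of image feature vectors
mu, logvar = encode(x)
z = reparameterise(mu, logvar)
x_recon = decode(z)                      # reconstruction in image space
p_response = predict_response(z)         # predicted probability of CRT response

# Visualising a decision-boundary crossing: reflect the explained dims of
# one latent code across the classifier hyperplane, then decode the result
# to see the change in the image domain.
z_crossed = z[0].copy()
score = z[0, :D_EXPLAIN] @ w_clf
z_crossed[:D_EXPLAIN] -= 2.0 * score * w_clf / (np.linalg.norm(w_clf) ** 2)
x_counterfactual = decode(z_crossed)
```

Decoding `z_crossed` next to `x_recon[0]` is what lets a clinician inspect, in image space, what the model thinks would need to change for the opposite prediction.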

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 – 23rd International Conference, Proceedings
Editors: Anne L. Martel, Purang Abolmaesumi, Danail Stoyanov, Diana Mateus, Maria A. Zuluaga, S. Kevin Zhou, Daniel Racoceanu, Leo Joskowicz
Publisher: Springer Science and Business Media Deutschland GmbH
Number of pages: 10
ISBN (Print): 9783030597092
Publication status: Published – 2020
Event: 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020 – Lima, Peru
Duration: 4 Oct 2020 – 8 Oct 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12261 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020


Keywords

  • Cardiac MRI
  • Cardiac resynchronization therapy
  • Interpretable ML
  • Variational autoencoder


