Decision fusion of 3D convolutional neural networks to triage patients with suspected prostate cancer using volumetric biparametric MRI

Research output: Chapter in Book/Report/Conference proceeding › Conference paper

Pritesh Mehta, Michela Antonelli, Hashim Ahmed, Mark Emberton, Shonit Punwani, Sebastien Ourselin

Original language: English
Title of host publication: Medical Imaging 2020
Subtitle of host publication: Computer-Aided Diagnosis
Editors: Horst K. Hahn, Maciej A. Mazurowski
Publisher: SPIE
ISBN (Electronic): 9781510633957
DOIs
Publication status: Published - 1 Jan 2020
Event: Medical Imaging 2020: Computer-Aided Diagnosis - Houston, United States
Duration: 16 Feb 2020 - 19 Feb 2020

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 11314
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2020: Computer-Aided Diagnosis
Country: United States
City: Houston
Period: 16/02/2020 - 19/02/2020

Abstract

In this work, we present a computer-aided diagnosis system that uses deep learning and decision fusion to classify patients into one of three classes: "Likely Prostate Cancer", "Equivocal", and "Likely not Prostate Cancer". We impose the "Equivocal" group to reduce misclassifications by allowing for uncertainty, akin to prostate imaging reporting systems used by radiologists. We trained 3D convolutional neural networks to perform two binary patient-level classification tasks: classification of patients with/without prostate cancer and classification of patients with/without clinically significant prostate cancer. Networks were trained separately using volumetric T2-weighted images and apparent diffusion coefficient maps for both tasks. The probabilistic outputs of the resulting four trained networks were combined using majority voting followed by the max operator to classify patients into one of the three classes mentioned above. All networks were trained using patient-level labels only, which is a key advantage of our system, since voxel-level tumour annotation is often unavailable due to the time and effort required of a radiologist. Our system was evaluated by retrospective analysis of a previously collected trial dataset. At a higher sensitivity setting, our system achieved 0.97 sensitivity and 0.31 specificity, compared to an experienced radiologist who achieved 0.99 sensitivity and 0.12 specificity. At a lower sensitivity setting, our system achieved 0.78 sensitivity and 0.77 specificity, compared to 0.76 sensitivity and 0.77 specificity for the experienced radiologist. We envision our system acting as a second reader in pre-biopsy screening applications.
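
The abstract names the ingredients of the fusion step (majority voting over the four networks' probabilistic outputs, followed by the max operator) but not the exact decision rule. The Python sketch below shows one plausible reading; the function name, the 0.5 vote threshold, the tie-break role given to the max operator, and the 0.9 confidence cutoff are illustrative assumptions rather than the authors' published scheme.

import numpy as np

def fuse_patient_scores(probs, vote_threshold=0.5, strong_threshold=0.9):
    """Sketch of the three-class decision fusion described in the abstract.

    probs: length-4 sequence of patient-level probabilities from the four
    trained networks, e.g. [T2w PCa, ADC PCa, T2w csPCa, ADC csPCa].
    vote_threshold and strong_threshold are illustrative assumptions; the
    abstract specifies only "majority voting followed by the max operator".
    """
    probs = np.asarray(probs, dtype=float)
    votes = probs >= vote_threshold          # binarise each network's output
    n_positive = int(votes.sum())

    if n_positive >= 3:                      # clear majority for cancer
        return "Likely Prostate Cancer"
    if n_positive <= 1:                      # clear majority against cancer
        return "Likely not Prostate Cancer"
    # 2-2 split: fall back on the max operator over the raw probabilities,
    # keeping the patient "Equivocal" unless one network is very confident.
    if probs.max() >= strong_threshold:
        return "Likely Prostate Cancer"
    return "Equivocal"

# All four networks vote positive, so the majority rule fires:
print(fuse_patient_scores([0.82, 0.71, 0.64, 0.58]))  # Likely Prostate Cancer
# A 2-2 split with no highly confident network stays equivocal:
print(fuse_patient_scores([0.55, 0.61, 0.42, 0.38]))  # Equivocal

The design intent the sketch tries to capture is that a clear 3-1 or 4-0 majority decides the patient directly, while a 2-2 split lands in "Equivocal" unless the max operator finds one highly confident network, mirroring how the reporting categories allow for uncertainty.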
