King's College London

Research portal

3D fetal skull reconstruction from 2DUS via deep conditional generative networks

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Juan J. Cerrolaza, Yuanwei Li, Carlo Biffi, Alberto Gomez, Matthew Sinclair, Jacqueline Matthew, Caroline Knight, Bernhard Kainz, Daniel Rueckert

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018 - 21st International Conference, 2018, Proceedings
Editors: Julia A. Schnabel, Christos Davatzikos, Carlos Alberola-López, Gabor Fichtinger, Alejandro F. Frangi
Publisher: Springer Verlag
Pages: 383-391
Number of pages: 9
ISBN (Print): 9783030009274
DOIs
Published: 1 Jan 2018
Event: 21st International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2018 - Granada, Spain
Duration: 16 Sep 2018 – 20 Sep 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11070 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 21st International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2018
Country/Territory: Spain
City: Granada
Period: 16/09/2018 – 20/09/2018

King's Authors

Abstract

2D ultrasound (US) is the primary imaging modality in antenatal healthcare. Despite the limitations of traditional 2D biometrics in characterizing the true 3D anatomy of the fetus, the adoption of 3DUS remains very limited. This is particularly significant in developing countries and remote areas, due to the lack of experienced sonographers and the limited access to 3D technology. In this paper, we present a new deep conditional generative network for the 3D reconstruction of the fetal skull from 2DUS standard planes of the head routinely acquired during the fetal screening process. Based on the generative properties of conditional variational autoencoders (CVAE), our reconstruction architecture (REC-CVAE) directly integrates the three US standard planes as conditional variables to generate a unified latent space of the skull. Additionally, we propose HiREC-CVAE, a hierarchical generative network based on the different clinical relevance of each predictive view. The hierarchical structure of HiREC-CVAE allows the network to learn a sequence of nested latent spaces, providing superior predictive capabilities even in the absence of some of the 2DUS scans. The performance of the proposed architectures was evaluated on a dataset of 72 cases, showing accurate reconstruction capabilities from standard non-registered 2DUS images.
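The conditioning mechanism described in the abstract — feeding the 2DUS standard-plane features into both the encoder and the decoder of a CVAE, and sampling the latent space conditioned on those planes at prediction time — can be sketched in miniature as follows. This is a toy NumPy sketch of a generic CVAE forward pass, not the paper's implementation: all dimensions, the single-layer networks, and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration only).
X_DIM = 64      # flattened 3D skull representation
C_DIM = 3 * 16  # features from the three 2DUS standard planes (condition)
Z_DIM = 8       # latent space

def dense(in_dim, out_dim):
    """Random single-layer parameters, standing in for trained networks."""
    return rng.normal(scale=0.1, size=(in_dim, out_dim)), np.zeros(out_dim)

W_enc, b_enc = dense(X_DIM + C_DIM, 2 * Z_DIM)  # encoder -> (mu, logvar)
W_dec, b_dec = dense(Z_DIM + C_DIM, X_DIM)      # decoder -> reconstruction

def encode(x, c):
    # The condition c is concatenated with the input, as in a standard CVAE.
    h = np.concatenate([x, c]) @ W_enc + b_enc
    return h[:Z_DIM], h[Z_DIM:]  # mu, logvar

def reparameterize(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, c):
    # The decoder is also conditioned on the plane features.
    return np.tanh(np.concatenate([z, c]) @ W_dec + b_dec)

def reconstruct(x, c):
    """Training-time pass: encode the skull with its planes, then decode."""
    mu, logvar = encode(x, c)
    return decode(reparameterize(mu, logvar), c)

def predict(c):
    """Test-time pass: sample z from the prior, condition only on the planes."""
    return decode(rng.normal(size=Z_DIM), c)

x = rng.normal(size=X_DIM)   # stand-in skull volume
c = rng.normal(size=C_DIM)   # stand-in plane features
print(reconstruct(x, c).shape)  # (64,)
print(predict(c).shape)         # (64,)
```

Because the decoder only ever sees `(z, c)`, a 3D reconstruction can be generated from the 2DUS planes alone by sampling `z` from the prior, which is what makes the conditional formulation suitable for this prediction task; the paper's hierarchical HiREC-CVAE variant additionally orders the conditioning views by clinical relevance across nested latent spaces.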

