Placenta segmentation in ultrasound imaging: Addressing sources of uncertainty and limited field-of-view

Abstract

Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placental appearance, (ii) the restricted image quality of US, which leads to highly variable reference annotations, and (iii) the limited field-of-view of US, which prohibits whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task, the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, in particular under limited training data conditions. With this approach, we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion, and image segmentation. This results in high-quality segmentation, with reduced image artifacts, of larger structures such as the placenta that extend beyond the field-of-view of a single probe.
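The Dice scores quoted above measure the overlap between a predicted segmentation mask and a reference annotation. A minimal NumPy sketch of that metric (the function name and example masks are illustrative, not the authors' code):

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred AND ref| / (|pred| + |ref|); 1.0 means perfect overlap,
    0.0 means no overlap. `eps` guards against division by zero for empty masks.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))

# Example: a 4-pixel reference region and a 2-pixel prediction sharing 2 pixels.
ref = np.zeros((4, 4), dtype=bool)
ref[1:3, 1:3] = True          # 4 foreground pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:2] = True         # 2 foreground pixels, both inside `ref`
print(dice_score(pred, ref))  # 2*2 / (4+2) ≈ 0.667
```

The same overlap measure underlies the paper's comparison of automatic segmentations against intra- and inter-observer variability: two human raters' masks can be scored against each other exactly as a model's mask is scored against a reference.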

Original language: English
Article number: 102639
Journal: Medical Image Analysis
Volume: 83
Early online date: 28 Sept 2022
DOIs
Publication status: Published - Jan 2023

Keywords

  • Multi-task learning
  • Multi-view imaging
  • Ultrasound placenta segmentation
  • Uncertainty/variability
