TY - JOUR
T1 - Placenta segmentation in ultrasound imaging
T2 - Addressing sources of uncertainty and limited field-of-view
AU - Zimmer, Veronika A.
AU - Gomez, Alberto
AU - Skelton, Emily
AU - Wright, Robert
AU - Wheeler, Gavin
AU - Deng, Shujie
AU - Ghavami, Nooshin
AU - Lloyd, Karen
AU - Matthew, Jacqueline
AU - Kainz, Bernhard
AU - Rueckert, Daniel
AU - Hajnal, Joseph V.
AU - Schnabel, Julia A.
N1 - Funding Information:
This research was funded in part by the Wellcome Trust IEH Award, United Kingdom [WT 102431/Z/13/Z]. This work was also supported by the Wellcome/EPSRC Centre for Medical Engineering, United Kingdom [WT203148/Z/16/Z] and by the National Institute for Health Research (NIHR) Biomedical Research Centre, United Kingdom at Guy’s and St Thomas’ NHS Foundation Trust and King’s College London, United Kingdom. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
Publisher Copyright:
© 2022 The Authors
PY - 2023/1
Y1 - 2023/1
N2 - Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placental appearance, (ii) the limited image quality of US, which results in highly variable reference annotations, and (iii) the limited field-of-view of US, which prohibits whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task, the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, in particular under limited training data conditions. With this approach, we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion, and image segmentation. This yields high-quality segmentation, with reduced image artifacts, of larger structures such as the placenta that extend beyond the field-of-view of single probes.
AB - Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placental appearance, (ii) the limited image quality of US, which results in highly variable reference annotations, and (iii) the limited field-of-view of US, which prohibits whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task, the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, in particular under limited training data conditions. With this approach, we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion, and image segmentation. This yields high-quality segmentation, with reduced image artifacts, of larger structures such as the placenta that extend beyond the field-of-view of single probes.
KW - Multi-task learning
KW - Multi-view imaging
KW - Ultrasound placenta segmentation
KW - Uncertainty/variability
UR - http://www.scopus.com/inward/record.url?scp=85140142477&partnerID=8YFLogxK
U2 - 10.1016/j.media.2022.102639
DO - 10.1016/j.media.2022.102639
M3 - Article
C2 - 36257132
AN - SCOPUS:85140142477
SN - 1361-8415
VL - 83
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 102639
ER -