DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D Medical Images

Andres Diaz-Pinto*, Pritesh Mehta, Sachidanand Alle, Muhammad Asad, Richard Brown, Vishwesh Nath, Alvin Ihsani, Michela Antonelli, Daniel Palkovics, Csaba Pinter, Ron Alkalay, Steve Pieper, Holger R. Roth, Daguang Xu, Prerna Dogra, Tom Vercauteren, Andrew Feng, Abood Quraini, Sebastien Ourselin, M. Jorge Cardoso

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

8 Citations (Scopus)

Abstract

Automatic segmentation of medical images is a key step in diagnostic and interventional tasks. However, achieving this requires large amounts of annotated volumes, and annotation can be a tedious and time-consuming task for expert annotators. In this paper, we introduce DeepEdit, a deep learning-based method for volumetric medical image annotation that allows automatic and semi-automatic segmentation as well as click-based refinement. DeepEdit combines the power of two methods, a non-interactive method (i.e. automatic segmentation using nnU-Net, UNET or UNETR) and an interactive segmentation method (i.e. DeepGrow), into a single deep learning model. It allows easy integration of uncertainty-based ranking strategies (i.e. aleatoric and epistemic uncertainty computation) and active learning. We propose and implement a method for training DeepEdit that combines standard training with user interaction simulation. Once trained, DeepEdit allows clinicians to quickly segment their datasets by running the algorithm in automatic segmentation mode or by providing clicks via a user interface (e.g. 3D Slicer, OHIF). We show the value of DeepEdit through evaluation on the PROSTATEx dataset for prostate/prostatic lesion segmentation and the Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) dataset for abdominal CT segmentation, using state-of-the-art network architectures as baselines for comparison. DeepEdit could reduce the time and effort of annotating 3D medical images compared to DeepGrow alone. Source code is available at https://github.com/Project-MONAI/MONAILabel.
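
As a rough illustration of the training scheme the abstract describes, the sketch below trains a single 3D network on an image plus two click-guidance channels, randomly dropping the clicks so the same model learns both automatic and click-driven segmentation. Everything here (the UNETR backbone, patch size, click encoding, and the `simulate_clicks` and `training_step` helpers) is a hypothetical simplification for illustration, not the authors' code; the actual implementation is in the linked MONAILabel repository.

```python
# Minimal sketch of the DeepEdit training idea: one 3D network whose input is
# the image concatenated with foreground/background click maps. With some
# probability the clicks are zeroed out (automatic mode); otherwise user
# clicks are simulated from the ground-truth label (interactive mode).
import torch
from monai.losses import DiceCELoss
from monai.networks.nets import UNETR

ROI = (96, 96, 96)  # hypothetical patch size
# 3 input channels: image + foreground-click map + background-click map
net = UNETR(in_channels=3, out_channels=2, img_size=ROI)
loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
opt = torch.optim.AdamW(net.parameters(), lr=1e-4)

def simulate_clicks(label, n_clicks=3):
    """Crude stand-in for user interaction simulation: sample voxels from the
    ground-truth foreground and background and mark them in two guidance maps
    (the real implementation uses a smoothed guidance signal)."""
    fg, bg = torch.zeros_like(label), torch.zeros_like(label)
    for mask, guide in ((label > 0, fg), (label == 0, bg)):
        idx = mask.nonzero()
        if len(idx):
            for p in idx[torch.randint(len(idx), (n_clicks,))]:
                guide[tuple(p)] = 1.0
    return fg, bg

def training_step(image, label, p_auto=0.5):
    """image, label: (1, 1, D, H, W). With probability p_auto train in
    automatic mode (empty guidance); otherwise simulate clicks."""
    if torch.rand(()).item() < p_auto:
        fg, bg = torch.zeros_like(image), torch.zeros_like(image)
    else:
        fg, bg = simulate_clicks(label.float())
    logits = net(torch.cat([image, fg, bg], dim=1))
    loss = loss_fn(logits, label)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At inference time the same network can be called once with empty guidance channels for a fully automatic result, and then called again with maps built from real user clicks to refine it, which is the single-model behaviour the abstract highlights.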

Original language: English
Title of host publication: Data Augmentation, Labelling, and Imperfections - 2nd MICCAI Workshop, DALI 2022, Held in Conjunction with MICCAI 2022, Proceedings
Editors: Hien V. Nguyen, Sharon X. Huang, Yuan Xue
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 11-21
Number of pages: 11
ISBN (Print): 9783031170263
DOIs
Publication status: Published - 2022
Event: 2nd MICCAI Workshop on Data Augmentation, Labelling, and Imperfections, DALI 2022, held in conjunction with the 25th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2022 - Singapore, Singapore
Duration: 22 Sept 2022 - 22 Sept 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13567 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 2nd MICCAI Workshop on Data Augmentation, Labelling, and Imperfections, DALI 2022, held in conjunction with the 25th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2022
Country/Territory: Singapore
City: Singapore
Period: 22/09/2022 - 22/09/2022

Keywords

  • CNNs
  • Deep learning
  • Interactive segmentation
