King's College London

Research portal

A self-supervised learning strategy for postoperative brain cavity segmentation simulating resections

Research output: Contribution to journal › Article › peer-review

Fernando Pérez-García, Reuben Dorent, Michele Rizzi, Francesco Cardinale, Valerio Frazzini, Vincent Navarro, Caroline Essert, Irène Ollivier, Tom Vercauteren, Rachel Sparks, John S. Duncan, Sébastien Ourselin

Original language: English
Pages (from-to): 1653-1661
Number of pages: 9
Journal: International Journal of Computer Assisted Radiology and Surgery
Issue number: 10
Accepted/In press: 2021
Published: Oct 2021

Bibliographical note

Funding Information: This publication represents, in part, independent research commissioned by the Wellcome Innovator Award (218380/Z/19/Z/). Computing infrastructure at the Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS) (UCL) (203145Z/16/Z) was used for this study. R.D. is supported by the Wellcome Trust (203148/Z/16/Z) and the Engineering and Physical Sciences Research Council (EPSRC) (NS/A000049/1). T.V. is supported by a Medtronic / Royal Academy of Engineering Research Chair (RCSRF1819/7/34). The views expressed in this publication are those of the authors and not necessarily those of the Wellcome Trust. Publisher Copyright: © 2021, The Author(s). Copyright © 2021 Elsevier B.V. All rights reserved.


Purpose: Accurate segmentation of brain resection cavities (RCs) aids in postoperative analysis and in determining follow-up treatment. Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique, but they require large annotated datasets for training. Annotation of 3D medical images is time-consuming, requires highly trained raters and may suffer from high inter-rater variability. Self-supervised learning strategies can leverage unlabeled data for training.

Methods: We developed an algorithm to simulate resections from preoperative magnetic resonance images (MRIs). We performed self-supervised training of a 3D CNN for RC segmentation using our simulation method. We curated EPISURG, a dataset comprising 430 postoperative and 268 preoperative MRIs from 430 refractory epilepsy patients who underwent resective neurosurgery. We fine-tuned our model on three small annotated datasets from different institutions and on the annotated images in EPISURG, comprising 20, 33, 19 and 133 subjects.

Results: The model trained on data with simulated resections obtained median (interquartile range) Dice similarity coefficients (DSCs) of 81.7 (16.4), 82.4 (36.4), 74.9 (24.2) and 80.5 (18.7) for each of the four datasets. After fine-tuning, DSCs were 89.2 (13.3), 84.1 (19.8), 80.2 (20.1) and 85.2 (10.8). For comparison, inter-rater agreement between human annotators from our previous study was 84.0 (9.9).

Conclusion: We present a self-supervised learning strategy for 3D CNNs using simulated RCs to accurately segment real RCs on postoperative MRI. Our method generalizes well to data from different institutions, pathologies and modalities. Source code, segmentation models and the EPISURG dataset are available at
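The two core ideas in the abstract — generating synthetic cavities as self-supervision targets and scoring overlap with the Dice metric — can be illustrated with a minimal sketch. This is not the authors' actual simulation algorithm (which operates on real MRI geometry and intensities); `simulate_cavity`, the spherical cavity shape and the 0–100 DSC scaling are assumptions made for this example only.

```python
import numpy as np

def simulate_cavity(shape, center, radius):
    """Toy stand-in for resection simulation: a binary spherical
    'cavity' mask carved into a 3D volume of the given shape."""
    grids = np.ogrid[tuple(slice(0, s) for s in shape)]
    dist2 = sum((g - c) ** 2 for g, c in zip(grids, center))
    return (dist2 <= radius ** 2).astype(np.uint8)

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks,
    on the 0-100 scale used in the reported results."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 100.0 * 2.0 * intersection / (a.sum() + b.sum() + eps)

# Ground-truth cavity vs. a slightly shifted "prediction"
gt = simulate_cavity((64, 64, 64), (32, 32, 32), 10)
pred = simulate_cavity((64, 64, 64), (34, 32, 32), 10)
print(f"DSC: {dice(gt, pred):.1f}")
```

A perfect prediction yields a DSC of 100, disjoint masks yield 0; the reported median DSCs of roughly 75–89 thus indicate substantial but imperfect overlap with the rater annotations.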

