King's College London

Research portal

Enhancing the estimation of fiber orientation distributions using convolutional neural networks

Research output: Contribution to journal › Article › peer-review

Oeslle Lucena, Sjoerd B. Vos, Vejay Vakharia, John Duncan, Keyoumars Ashkan, Rachel Sparks, Sebastien Ourselin

Original language: English
Article number: 104643
Journal: Computers in Biology and Medicine
Early online date: 14 Jul 2021
E-pub ahead of print: 14 Jul 2021
Published: Aug 2021

Bibliographical note

Funding Information: This research was funded by the National Institute for Health Research (NIHR) Biomedical Research Centre based at Guy's and St Thomas' NHS Foundation Trust and King's College London, and the NIHR Clinical Research Facility. Oeslle Lucena is funded by the EPSRC Research Council (EPSRC DTP EP/R513064/1). Sjoerd B. Vos is funded by the National Institute for Health Research University College London Hospitals Biomedical Research Centre (NIHR BRC UCLH/UCL High Impact Initiative). We also thank NVIDIA for providing the Titan V GPU used in this work. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. Publisher Copyright: © 2021 The Author(s). Copyright 2021 Elsevier B.V., All rights reserved.



Local fiber orientation distributions (FODs) can be computed from diffusion magnetic resonance imaging (dMRI). The accuracy of FODs, and their ability to resolve complex fiber configurations, benefit from acquisition protocols that sample a high number of gradient directions, a high maximum b-value, and multiple b-values. However, in clinical settings acquisition time is limited and scanners meeting these standards are scarce, so dMRI is often acquired with a single shell (a single b-value). In this work, we learn improved FODs from clinically acquired dMRI. We evaluate patch-based 3D convolutional neural networks (CNNs) on their ability to regress multi-shell FODs from single-shell FODs, both computed using constrained spherical deconvolution (CSD). We evaluate the U-Net and High-Resolution Network (HighResNet) 3D CNN architectures on data from the Human Connectome Project and an in-house dataset. We evaluate how well each CNN can resolve FODs 1) when training and testing on datasets with the same dMRI acquisition protocol; 2) when testing on a dataset with a different dMRI acquisition protocol than the one used to train the CNN; and 3) when testing on a dataset with fewer gradient directions than used to train the CNN. This work is a step towards more accurate FOD estimation in time- and resource-limited clinical environments.
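To make the regression setup concrete: with CSD, each voxel's FOD is represented by its even-order spherical harmonic coefficients (45 coefficients for lmax = 8), so the CNN maps a 3D patch with 45 input channels to a patch with 45 output channels. The sketch below, in PyTorch, shows this patch-based coefficient-to-coefficient regression; the layer sizes and depth are illustrative placeholders, not the U-Net or HighResNet architectures evaluated in the paper.

```python
import torch
import torch.nn as nn

class FODRegressor(nn.Module):
    """Hypothetical minimal 3D CNN regressing multi-shell FOD SH
    coefficients from single-shell FOD SH coefficients, per patch."""

    def __init__(self, n_coeffs: int = 45):  # 45 = even-order SH, lmax = 8
        super().__init__()
        self.net = nn.Sequential(
            # 3x3x3 convolutions with padding=1 preserve patch dimensions
            nn.Conv3d(n_coeffs, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            # 1x1x1 convolution projects back to SH coefficient channels
            nn.Conv3d(64, n_coeffs, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = FODRegressor()
# One 16x16x16 patch, 45 SH-coefficient channels: (batch, C, D, H, W)
patch = torch.randn(1, 45, 16, 16, 16)
out = model(patch)  # same spatial shape, 45 output coefficients per voxel
```

Training such a model would minimize a voxel-wise loss (e.g. mean squared error) between the predicted coefficients and those from a multi-shell CSD fit of the fully sampled acquisition.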

