Abstract
Clinical adoption of deep learning models has been hindered, in part, because the “black-box” nature of neural networks leads to concerns regarding their trustworthiness and reliability. These concerns are particularly relevant in the field of neuroimaging due to the complex brain phenotypes and inter-subject heterogeneity often encountered. The challenge can be addressed by
interpretable deep learning (iDL) methods that enable the visualisation and interpretation of the inner workings of deep learning models. In this study, we systematically reviewed the literature on neuroimaging applications of iDL methods and critically analysed how iDL explanation properties are evaluated. Sixty-two studies were included, and ten categories of iDL methods were identified. We also reviewed six properties of iDL explanations that were analysed in the included studies: biological validity, robustness, continuity, selectivity, user satisfaction, and downstream task performance. We found that the most popular iDL approaches used in the literature may be sub-optimal for neuroimaging data, and we discuss possible future directions for the field.
| Original language | English |
|---|---|
| Journal | Imaging Neuroscience |
| Volume | 2 |
| DOIs | |
| Publication status | Accepted/In press - 20 May 2024 |