Abstract
Using AI models in healthcare is gaining popularity. To improve clinician confidence in the results of automated triage and to provide further information about the suggested diagnosis, the classification produced by an AI model is often accompanied by an explanation generated by a separate post-hoc explainability tool. If no abnormalities are detected, however, it is not clear what an explanation should be. A human clinician might be able to describe certain salient features of tumors that are not present in the scan, but existing Explainable AI (XAI) tools cannot do that, as they cannot point to features that are absent from the input. In this paper, we present a definition of, and an algorithm for providing, explanations of absence; that is, explanations of negative classifications in the context of healthcare AI.

Our approach is rooted in the concept of explanations in actual causality. It treats the model as a black box and is hence portable and works with proprietary models. Moreover, the computation is done in a preprocessing stage, based on the model and the dataset; at execution time, the algorithm only projects the precomputed explanation template onto the current image.

We implemented this approach in a tool, NITO, and trialed it on a number of medical datasets to demonstrate its utility for the classification of solid tumors. We discuss the differences between the theoretical approach and its implementation in the domain of classifying solid tumors and address the additional complications posed by this domain. Finally, we discuss the assumptions made by our algorithm and possible extensions to explanations of absence for general image classifiers.
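To make the two-phase workflow described above concrete, the following is a minimal sketch of a precompute/project pattern: an explanation template is built offline by querying the model as a black box over the dataset, and at execution time it is only projected onto the current image. The function names (`build_absence_template`, `project_template`), the grid-based occlusion heuristic, and the thresholding are illustrative assumptions, not the actual NITO algorithm.

```python
import numpy as np

# Hypothetical sketch of a precompute/project workflow for explanations of
# absence. The classifier is used strictly as a black box: we only call
# `predict`. All names and the occlusion heuristic are illustrative
# assumptions, not the NITO implementation.

def build_absence_template(predict, dataset, grid=8):
    """Preprocessing stage: over positively classified images, record how
    often occluding each coarse region flips the prediction (a crude proxy
    for 'regions where tumor features tend to matter')."""
    h, w = dataset[0].shape[:2]
    importance = np.zeros((grid, grid))
    positives = 0
    for img in dataset:
        if predict(img) != 1:              # keep only images classified as positive
            continue
        positives += 1
        for i in range(grid):
            for j in range(grid):
                occluded = img.copy()
                ys = slice(i * h // grid, (i + 1) * h // grid)
                xs = slice(j * w // grid, (j + 1) * w // grid)
                occluded[ys, xs] = 0       # mask the region
                if predict(occluded) != 1: # region was needed for the positive label
                    importance[i, j] += 1
    return importance / max(positives, 1)

def project_template(template, image, threshold=0.5):
    """Execution stage: upsample the precomputed template to the image size
    and mark regions where tumor-characteristic features would be expected
    but are absent from this negatively classified image."""
    h, w = image.shape[:2]
    grid = template.shape[0]
    mask = np.kron(template > threshold,
                   np.ones((h // grid, w // grid), dtype=bool))
    return mask  # highlighted 'absence' regions for this image
```

Under these assumptions, only `project_template` runs per incoming scan, so the per-image cost at deployment is independent of the dataset size and of the black-box queries made during preprocessing, which is the property the abstract emphasizes.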
| Original language | English |
|---|---|
| Title of host publication | The Conference on Uncertainty in Artificial Intelligence (UAI) |
| Subtitle of host publication | Proceedings of Machine Learning Research (PMLR) |
| Publication status | Accepted/In press - 7 May 2025 |