Explaining Image Classifiers Using Statistical Fault Localization

Youcheng Sun, Hana Chockler, Xiaowei Huang, Daniel Kroening

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

24 Citations (Scopus)
115 Downloads (Pure)


The black-box nature of deep neural networks (DNNs) makes it impossible to understand why a particular output is produced, creating demand for “Explainable AI”. In this paper, we show that statistical fault localization (SFL) techniques from software engineering deliver high-quality explanations of the outputs of DNNs, where we define an explanation as a minimal subset of features sufficient for making the same decision as for the original input. We present an algorithm and a tool called DeepCover, which synthesizes a ranking of the features of the inputs using SFL and constructs explanations for the decisions of the DNN based on this ranking. We compare explanations produced by DeepCover with those of the state-of-the-art tools gradcam, lime, shap, rise and extremal, and show that the explanations generated by DeepCover are consistently better across a broad set of experiments. On a benchmark set with known ground truth, DeepCover achieves higher accuracy than the second-best tool, extremal.
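The abstract's recipe — rank input features with an SFL measure over mutants of the input, then grow a minimal feature subset that preserves the classifier's decision — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the choice of the Ochiai measure, the zero-masking baseline, the random mutant generation, and the toy classifier are all assumptions made for the example.

```python
import numpy as np

def sfl_scores(classify, x, n_mutants=2000, seed=0):
    """Ochiai suspiciousness score per input feature.

    Random mutants of x are built by masking features to a zero
    baseline; a mutant "fails" when the classifier's decision
    changes. (Mapping "masked" to "executed" and "decision changed"
    to "failing test" is one plausible convention, not necessarily
    the paper's exact setup.)
    """
    rng = np.random.default_rng(seed)
    y0 = classify(x)
    n = x.size
    a_ef = np.zeros(n)  # feature masked in a failing mutant
    a_ep = np.zeros(n)  # feature masked in a passing mutant
    a_nf = np.zeros(n)  # feature kept in a failing mutant
    for _ in range(n_mutants):
        keep = rng.random(n) < 0.5
        failed = classify(np.where(keep, x, 0.0)) != y0
        if failed:
            a_ef += ~keep
            a_nf += keep
        else:
            a_ep += ~keep
    # Ochiai: a_ef / sqrt((a_ef + a_nf) * (a_ef + a_ep))
    denom = np.sqrt((a_ef + a_nf) * (a_ef + a_ep))
    return np.divide(a_ef, denom, out=np.zeros(n), where=denom > 0)

def explanation(classify, x, scores):
    """Add features in descending score order until the masked
    input yields the same decision as the original input."""
    y0 = classify(x)
    mask = np.zeros(x.size, dtype=bool)
    for i in np.argsort(-scores):
        mask[i] = True
        if classify(np.where(mask, x, 0.0)) == y0:
            break
    return np.flatnonzero(mask)

# Toy "classifier": its decision depends only on features 2 and 7,
# so a correct explanation should contain exactly those two.
def classify(v):
    return int(v[2] + v[7] > 1.0)

x = np.ones(10)
scores = sfl_scores(classify, x)
expl = explanation(classify, x, scores)
```

On this toy input the two decision-relevant features receive the highest Ochiai scores, and the greedy pass returns them as the minimal explanation.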

Original language: English
Title of host publication: Computer Vision – ECCV 2020 - 16th European Conference, 2020, Proceedings
Editors: Andrea Vedaldi, Horst Bischof, Thomas Brox, Jan-Michael Frahm
Number of pages: 16
ISBN (Print): 9783030586034
Publication status: Published - 23 Aug 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12373 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Keywords

  • Deep learning
  • Explainability
  • Software testing
  • Statistical fault localization

