King's College London

Research portal

Explaining Image Classifiers Using Statistical Fault Localization

Research output: Chapter in Book/Report/Conference proceeding › Conference paper

Youcheng Sun, Hana Chockler, Xiaowei Huang, Daniel Kroening

Original language: English
Title of host publication: 16th European Conference on Computer Vision
Publisher: Springer
Number of pages: 16
Accepted/In press: 2 Jul 2020
Published: 23 Aug 2020


Abstract

The black-box nature of deep neural networks (DNNs) makes it impossible to understand why a particular output is produced, creating demand for "Explainable AI". In this paper, we show that statistical fault localization (SFL) techniques from software engineering deliver high-quality explanations of the outputs of DNNs, where we define an explanation as a minimal subset of features sufficient for making the same decision as for the original input. We present an algorithm and a tool called DeepCover, which synthesizes a ranking of the input features using SFL and constructs explanations for the decisions of the DNN based on this ranking. We compare the explanations produced by DeepCover with those of the state-of-the-art tools gradcam, lime, shap, rise, and extremal, and show that the explanations generated by DeepCover are consistently better across a broad set of experiments. On a benchmark set with known ground truth, DeepCover achieves 76.7% accuracy, which is 6% better than the second-best tool, extremal.
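The approach described in the abstract — rank input features with an SFL suspiciousness measure, then take a minimal top-ranked subset that preserves the classifier's decision — can be sketched as follows. This is an illustrative sketch, not the authors' DeepCover implementation: the Ochiai measure is one of the standard SFL measures from the fault-localization literature, and the random-masking test generation, the `classify` interface, and the toy classifier at the end are assumptions made for the example.

```python
import math
import random

def sfl_explain(classify, n_feats, n_tests=300, seed=0):
    """Sketch of SFL-based explanation (illustrative, not DeepCover itself).

    Ranks features by an Ochiai-style suspiciousness score computed over
    random masking tests, then greedily takes the smallest ranked prefix
    that recovers the original decision. `classify(kept)` returns the
    label when only the features in `kept` are visible.
    """
    rng = random.Random(seed)
    label = classify(set(range(n_feats)))   # decision on the full input
    ef = [0] * n_feats  # feature masked in a failing test (label changed)
    ep = [0] * n_feats  # feature masked in a passing test (label preserved)
    fails = 0
    for _ in range(n_tests):
        kept = {i for i in range(n_feats) if rng.random() < 0.5}
        failed = classify(kept) != label
        fails += failed
        for i in range(n_feats):
            if i not in kept:
                if failed:
                    ef[i] += 1
                else:
                    ep[i] += 1

    def ochiai(i):
        # High when masking feature i correlates with a changed decision.
        d = math.sqrt(fails * (ef[i] + ep[i]))
        return ef[i] / d if d else 0.0

    ranking = sorted(range(n_feats), key=ochiai, reverse=True)
    explanation = set()
    for i in ranking:   # grow the explanation until the decision returns
        explanation.add(i)
        if classify(explanation) == label:
            break
    return ranking, explanation

# Toy classifier (an assumption for the demo): label 1 iff features
# 2 and 5 are both visible.
clf = lambda kept: int(2 in kept and 5 in kept)
ranking, expl = sfl_explain(clf, n_feats=8)
```

On this toy classifier the two decisive features rank highest and the greedy loop stops as soon as both are included, yielding a minimal explanation in the abstract's sense: a smallest feature subset sufficient for the original decision.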

