King's College London

Research portal

Explanations for AI: Computable or Not?

Research output: Contribution to conference › Paper › peer-review

Niko Tsakalakis, Laura Carmichael, Sophie Stalla-Bourdillon, Luc Moreau, Dong Huynh, Ayah Helal

Original language: English
Published: 6 Jul 2020




Automated decision-making continues to be used for a variety of purposes across a multitude of sectors. Ultimately, what makes a ‘good’ explanation is a focus not only for the designers and developers of AI systems, but for many disciplines, including law, philosophy, psychology, history, sociology and human-computer interaction. Given that the generation of compliant, valid and effective explanations for AI requires a high level of critical, interdisciplinary thinking and collaboration, this area is of particular interest for Web Science. The workshop ‘Explanations for AI: Computable or Not?’ (exAI’20) aims to bring together researchers, practitioners and representatives of those subjected to socially-sensitive decision-making to exchange ideas, methods and challenges as part of an interdisciplinary discussion on explanations for AI. It is hoped that this workshop will build a cross-sectoral, multi-disciplinary and international network of people focusing on explanations for AI, and an agenda to drive this work forward.

exAI’20 will hold two position paper sessions, in which panel members and workshop attendees will debate key issues in an interactive dialogue. The sessions are intended to stimulate a lively debate on whether explanations for AI are computable or not by providing time for an interactive discussion after each paper. The discussion will uncover key arguments for and against the computability of explanations for AI related to socially-sensitive decision-making. An introductory keynote from the team behind the PLEAD project (Provenance-Driven & Legally Grounded Explanations for Automated Decisions) will present use cases, scenarios and practical experience of explanations for AI. The keynote will serve as a starting point for the discussions during the paper sessions about the rationale, technologies and/or organisational measures used, and accounts from different perspectives, e.g. software designers, implementers and those subject to automated decision-making.

By the end of this workshop, attendees will have gained a good insight into the critiques and advantages of explanations for AI, including the extent to which explanations can or should be made computable. They will have the opportunity to participate in and inform discussions on complex topics in AI explainability, such as the legal requirements for explanations, the extent to which data ethics may drive explanations for AI, reflections on the similarities and differences between explanations for AI decisions and manual decisions, and what makes a ‘good’ explanation and the etymology of explanations for socially-sensitive decisions.

exAI’20 is supported by the Engineering and Physical Sciences Research Council [grant number EP/S027238/1]. We would like to thank the organizers of the Web Science 2020 conference for agreeing to host our workshop and for their support.

