
The Dual Function of Explanations: Why Computing Explanations is of Value

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Niko Tsakalakis, Sophie Stalla-Bourdillon, Laura Carmichael, Dong Huynh, Luc Moreau, Ayah Helal

Original language: English
Title of host publication: Data Protection and Privacy
Subtitle of host publication: Enforcing Rights in a Changing World
Editors: Dara Hallinan, Ronald Leenes, Paul De Hert
Publisher: Hart Publishing
Chapter: 5
Pages: 127-156
Volume: 14
ISBN (Electronic): 978-1-50995-453-7, 978-1-50995-452-0
ISBN (Print): 978-1-50995-451-3
Published: 21 Jan 2022
Event: International Conference on Computers, Privacy and Data Protection - Brussels, Belgium
Duration: 27 Jan 2021 – 29 Jan 2021
Conference number: 14

Conference

Conference: International Conference on Computers, Privacy and Data Protection
Abbreviated title: CPDP
Country/Territory: Belgium
City: Brussels
Period: 27/01/2021 – 29/01/2021

Abstract

The increasing dependence of decision-making on some level of automation has naturally led to discussions about the trustworthiness of such automation, to calls for transparent automated decision-making, and to the emergence of ‘explainable Artificial Intelligence’ (XAI). Although XAI research has produced a number of taxonomies for the explanation of Artificial Intelligence (AI) and Machine Learning (ML) models, the legal debate has so far focused mainly on whether a ‘right to explanation’ exists in the GDPR. Lately, a growing body of interdisciplinary literature has concentrated on the goals and substance of explanations produced for automated decision-making, with a view to clarifying their role and improving their value against unfairness, discrimination and opacity for the purposes of ensuring compliance with Article 22 of the GDPR. At the same time, several researchers have warned that transparency of the algorithmic processes is not in itself enough, and that tools are needed for better and easier assessment and review of the whole socio-technical system that includes automated decision-making. In this chapter, we suggest that generating computed explanations would be useful for most of the obligations set forth by the GDPR and can support a holistic compliance strategy when used as detective controls. Computing explanations to support the detection of data protection breaches facilitates the monitoring and auditing of automated decision-making pipelines. Carefully constructed explanations can empower both the data controller and external recipients such as data subjects and regulators, and should be seen as key controls for meeting accountability and data protection-by-design obligations. To illustrate this claim, this chapter presents the work undertaken by the Provenance-driven and Legally-grounded Explanations for Automated Decisions (PLEAD) project towards ‘explainable-by-design’ socio-technical systems. PLEAD acknowledges the dual function of explanations as internal detective controls (to benefit data controllers) and external detective controls (to benefit data subjects) and leverages provenance-based technology to compute explanations and support the deployment of systematic compliance strategies.