Abstract
Whilst the legal debate concerning automated decision-making has focused mainly on whether a ‘right to explanation’ exists in the GDPR, the emergence of ‘explainable Artificial Intelligence’ (XAI) has produced taxonomies for the explanation of Artificial Intelligence (AI) systems. However, various researchers have warned that transparency of the algorithmic processes is not in itself enough: better and easier tools are needed for assessing and reviewing the socio-technical systems that incorporate automated decision-making. The PLEAD project suggests that, aside from fulfilling the obligations set forth by Article 22 of the GDPR, explanations can also support a holistic compliance strategy if used as detective controls. PLEAD aims to show that computable explanations can facilitate monitoring and auditing, and make compliance more systematic. Automated computable explanations can be key controls in fulfilling accountability and data-protection-by-design obligations, empowering both controllers and data subjects. This opinion piece presents the work undertaken by the PLEAD project towards facilitating the generation of computable explanations. PLEAD leverages provenance-based technology to compute explanations that serve as external detective controls to the benefit of data subjects and as internal detective controls to the benefit of data controllers.
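The article itself contains no code, but as a minimal sketch of what a provenance-based computable explanation might look like, the following uses the W3C PROV data model via the Python `prov` package. All identifiers (ex:loanDecision, ex:creditScoringRun, and so on) are hypothetical illustrations, not artefacts of the PLEAD project.

```python
# Minimal, hypothetical sketch: recording the provenance of an automated
# decision with the W3C PROV data model (Python `prov` package).
# All ex:* names are illustrative, not taken from PLEAD itself.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/')

# The automated decision (an entity) ...
decision = doc.entity('ex:loanDecision', {'ex:outcome': 'refused'})
# ... produced by a pipeline run (an activity) ...
run = doc.activity('ex:creditScoringRun')
# ... on behalf of the data controller (an agent).
controller = doc.agent('ex:dataController')

doc.used(run, doc.entity('ex:applicantData'))  # input data consumed
doc.wasGeneratedBy(decision, run)              # decision came from the run
doc.wasAssociatedWith(run, controller)         # who is responsible

# An explanation can then be derived by traversing or serialising this
# record, e.g. for internal audit or for disclosure to the data subject.
print(doc.get_provn())
```

Because the same provenance record can be queried by the controller (internal detective control) and rendered for the data subject (external detective control), it illustrates the dual function the abstract describes.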
Field | Value
---|---
Original language | English
Article number | 105527
Journal | Computer Law & Security Review
Volume | 41
Early online date | 18 Mar 2021
DOIs | 
Publication status | Published - Jul 2021
Projects
- PLEAD: Provenance-driven and Legally-grounded Explanations for Automated Decisions
  EPSRC Engineering and Physical Sciences Research Council
  1/09/2019 → 31/03/2022
  Project: Research