Addressing Regulatory Requirements on Explanations for Automated Decisions with Provenance: A Case Study

Trung Dong Huynh, Niko Tsakalakis, Ayah Helal, Sophie Stalla-Bourdillon, Luc Moreau

Research output: Contribution to journal › Article › peer-review



AI-based automated decisions are increasingly used as part of new services being deployed to the general public. This approach to building services offers significant potential benefits, such as increased speed of execution, improved accuracy, lower cost, and the ability to adapt to a wide variety of situations. However, equally significant concerns have been raised and are now well documented, including concerns about privacy, fairness, bias, and ethics. On the consumer side, more often than not, the users of those services are given no explanations, or inadequate ones, for decisions that may impact their lives.
In this paper, we report our experience of developing a socio-technical approach to automatically constructing explanations for such decisions from their audit trails, or provenance. The work was carried out in collaboration with the UK Information Commissioner's Office (ICO). In particular, we implemented an automated Loan Decision scenario, instrumented its decision pipeline to record provenance, categorized relevant explanations according to their audience and their regulatory purposes, built an explanation-generation prototype, and deployed the whole system in an online demonstrator.
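The provenance-to-explanation idea described above can be sketched in miniature as follows. Note that the data model, step names, and wording here are invented for illustration only; the actual system described in the paper records standard W3C PROV documents and generates explanations tailored to audience and regulatory purpose.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, simplified model of a decision audit trail: each record
# captures one pipeline step, what it used, what it produced, and who
# (or what software) was responsible for it.
@dataclass
class ProvRecord:
    activity: str       # the step that ran, e.g. "credit-scoring"
    used: List[str]     # inputs the step consumed
    generated: str      # output the step produced
    agent: str          # software or person responsible for the step

def explain(trail: List[ProvRecord], outcome: str) -> str:
    """Walk the audit trail backwards from the outcome and narrate how
    it was produced, one plain-language sentence per step."""
    sentences = []
    target = outcome
    for rec in reversed(trail):
        if rec.generated == target:
            sentences.append(
                f"'{rec.generated}' was produced by the {rec.activity} step "
                f"(run by {rec.agent}) using {', '.join(rec.used)}."
            )
            target = rec.used[0]  # follow the primary input upstream
    return " ".join(sentences)

# Invented audit trail for a loan-decision scenario:
trail = [
    ProvRecord("credit-scoring", ["loan application"],
               "credit score", "scoring model v2"),
    ProvRecord("loan-decision", ["credit score", "lending policy"],
               "loan denial", "decision service"),
]

print(explain(trail, "loan denial"))
```

The key design point the paper's approach shares with this sketch is that explanations are derived mechanically from records made at decision time, rather than written by hand after the fact.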
Original language: English
Article number: 16e
Number of pages: 14
Journal: Digital Government: Research and Practice
Issue number: 2
Publication status: Published - 20 Jan 2021


  • data provenance
  • explainable computing
  • automated decisions
  • GDPR
