Exclusivity and Paternalism in the public governance of explainable AI

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

In this comment, we address the apparent exclusivity and paternalism of goal and standard setting for explainable AI and its implications for the public governance of AI. We argue that the widening use of AI decision-making, including the development of autonomous systems, not only poses widely discussed risks to human autonomy in itself, but is also the subject of a standard-setting process that is remarkably closed to effective public contestation. The implications of this turn in governance for democratic decision-making in Britain have also yet to be fully appreciated. As the governance of AI gathers pace, one of the major tasks will be to ensure not only that AI systems are technically ‘explainable’ but that, in a fuller sense, the relevant standards and rules are contestable and that governing institutions and processes are open to democratic contestability.
Original language: English
Title of host publication: Computer Law & Security Review
Subtitle of host publication: The International Journal of Technology Law and Practice
Publication status: Accepted/In press - 2020

Keywords

  • artificial intelligence; explainability; trust; governance
