In this comment, we address the apparent exclusivity and paternalism of goal and standard setting for explainable AI and its implications for the public governance of AI. We argue that the widening use of AI in decision-making, including the development of autonomous systems, not only poses widely discussed risks to human autonomy in itself, but is also the subject of a standard-setting process that is remarkably closed to effective public contestation. The implications of this turn in governance for democratic decision-making in Britain have yet to be fully appreciated. As the governance of AI gathers pace, one of the major tasks will be to ensure not only that AI systems are technically ‘explainable’ but also that, in a fuller sense, the relevant standards and rules are contestable and that governing institutions and processes are open to democratic contestability.
Published in: Computer Law & Security Review: The International Journal of Technology Law and Practice
Publication status: Accepted/In press, 2020
Keywords: artificial intelligence; explainability; trust; governance