Transparency for Whom? Assessing Discriminatory Artificial Intelligence

Research output: Contribution to journal › Article

33 Citations (Scopus)
783 Downloads (Pure)

Abstract

Artificial intelligence decision making can cause discriminatory harm to many vulnerable groups. Redress is often suggested through increased transparency of these systems. But for which group are we implementing it? This article seeks to identify what transparency means for technical, legislative, and public realities and stakeholders.

Original language: English
Pages (from-to): 36-44
Number of pages: 9
Journal: COMPUTER
Volume: 53
Issue number: 11
DOIs
Publication status: Published - Nov 2020
