King's College London

Research portal

Attesting Digital Discrimination Using Norms

Research output: Contribution to journal › Article › peer-review

Original language: English
Pages (from-to): 16-23
Number of pages: 8
Journal: International Journal of Interactive Multimedia and Artificial Intelligence
Volume: 6
Issue number: 5
Accepted/In press: 1 Mar 2021

Bibliographical note

Funding Information: The research reported in this article was funded by the Engineering and Physical Sciences Research Council (EPSRC) under grant EP/R033188/1. This research is part of the cross-disciplinary project Discovering and Attesting Digital Discrimination (DADD); visit the project website for further details: https://dadd-project.org.

Publisher Copyright: © 2021, Universidad Internacional de la Rioja. All rights reserved.

Documents

  • IJIMAI__copy_for_pure_2_

    IJIMAI_copy_for_pure_2_.pdf, 410 KB, application/pdf

    Uploaded date: 04 Jan 2021

    Version: Accepted author manuscript


Abstract

More and more decisions are being delegated to Machine Learning (ML) and automated decision systems. Despite the initial misconception that these systems are unbiased and fair, public distrust in machine learning has been fueled by recent cases such as racist algorithms used to inform parole decisions in the US, low-income neighborhoods targeted with high-interest loans and low credit scores, and women undervalued by online marketing. This distrust poses a significant challenge to the adoption of ML by companies and public sector organisations, even though ML has the potential to reduce costs significantly and support more efficient decisions, and it is motivating research on algorithmic fairness and fair ML. Much of that research provides detailed statistics, metrics and algorithms that are difficult for someone without technical skills to interpret and use. This paper aims to bridge the gap between lay users and fairness metrics by using simpler notions and concepts to represent and reason about digital discrimination. In particular, we use norms as an abstraction to communicate situations that may lead to algorithms committing discrimination: we formalise non-discrimination norms in the context of ML systems and propose an algorithm to attest whether ML systems violate these norms.
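
The paper's formalisation of non-discrimination norms and its attestation algorithm are not reproduced on this portal page. As a rough, hypothetical sketch of what attesting one such norm could look like, the Python fragment below encodes a non-discrimination norm as a demographic-parity check over a model's outcomes; the function name, the epsilon tolerance and the parity criterion itself are illustrative assumptions, not the authors' method.

    # Hypothetical sketch: attesting a non-discrimination norm against an ML model.
    # Here the norm is modelled as demographic parity within a tolerance `epsilon`;
    # the formalisation in the paper may differ.
    from typing import Callable, List

    def attest_parity_norm(
        model: Callable[[dict], int],   # binary classifier: record -> {0, 1}
        data: List[dict],               # evaluation records
        protected_attr: str,            # e.g. "gender"
        epsilon: float = 0.05,          # tolerated gap in positive-outcome rates
    ) -> dict:
        """Report per-group positive-outcome rates and flag a norm violation
        when the largest gap between groups exceeds epsilon."""
        counts: dict = {}
        positives: dict = {}
        for record in data:
            group = record[protected_attr]
            counts[group] = counts.get(group, 0) + 1
            positives[group] = positives.get(group, 0) + model(record)
        rates = {g: positives[g] / counts[g] for g in counts}
        gap = max(rates.values()) - min(rates.values())
        return {"rates": rates, "gap": gap, "violates_norm": gap > epsilon}

    if __name__ == "__main__":
        # Toy model that (unfairly) favours group "A".
        toy_model = lambda r: 1 if r["gender"] == "A" or r["score"] > 0.8 else 0
        toy_data = [
            {"gender": "A", "score": 0.2}, {"gender": "A", "score": 0.9},
            {"gender": "B", "score": 0.2}, {"gender": "B", "score": 0.9},
        ]
        print(attest_parity_norm(toy_model, toy_data, "gender"))

Under this reading, a violation is simply a positive-outcome rate gap between protected groups that exceeds the tolerance; the paper's own norm representation may define norms and their violation differently.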
