On Testing for Discrimination Using Causal Models

Hana Chockler, Joseph Y. Halpern

Research output: Conference paper in a conference proceeding (peer-reviewed)

Abstract

Consider a bank that uses an AI system to decide which loan applications to approve. We want to ensure that the system is fair, that is, it does not discriminate against applicants based on a predefined list of sensitive attributes, such as gender and ethnicity. We expect there to be a regulator whose job it is to certify the bank’s system as fair or unfair. We consider issues that the regulator will have to confront when making such a decision, including the precise definition of fairness, dealing with proxy variables, and dealing with what we call allowed variables, that is, variables such as salary on which the decision is allowed to depend, despite being correlated with sensitive variables. We show (among other things) that the problem of deciding fairness as we have defined it is co-NP-complete, but then argue that, despite that, in practice the problem should be manageable.
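
To illustrate the kind of dependence the abstract describes, the following is a minimal Python sketch, not the paper's algorithm: all names and the decision rule are hypothetical. It brute-forces whether a toy decision function's output ever varies with the sensitive attribute once the allowed variable (salary) is held fixed.

def decide(gender, salary):
    # Hypothetical decision rule: approve iff salary clears a threshold.
    # A fair rule ignores `gender`; an unfair one would not.
    return salary >= 50_000

def is_fair(decide, sensitive_values, allowed_values):
    # Brute-force check: for every setting of the allowed variable,
    # the decision must be identical across all sensitive values.
    # The exhaustive search hints at why the general decision problem
    # is hard: the number of settings grows with the number of variables.
    for salary in allowed_values:
        outcomes = {decide(g, salary) for g in sensitive_values}
        if len(outcomes) > 1:
            return False  # decision depends on the sensitive attribute
    return True

print(is_fair(decide, ["F", "M"], [30_000, 50_000, 80_000]))  # True

Since decide here depends only on salary, the check returns True; a rule that also consulted gender would return False on some salary setting.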
Original language: English
Title of host publication: Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22)
Publisher: AAAI Press
Publication status: Published - 28 Jun 2022
