Not So Fair: The Impact of Presumably Fair Machine Learning Models

MacKenzie Jorgensen, Hannah Richert, Elizabeth Black, Natalia Criado, Jose Such

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

7 Citations (Scopus)
367 Downloads (Pure)

Abstract

When bias mitigation methods are applied to make machine learning models fairer in fairness-related classification settings, the assumption is that the disadvantaged group will be better off than if no mitigation method had been applied. This is a potentially dangerous assumption, because a "fair" model outcome does not automatically imply a positive impact for a disadvantaged individual: they could still be negatively impacted. Modeling and accounting for those impacts is key to ensuring that mitigated models do not unintentionally harm individuals. We investigate whether mitigated models can still negatively impact disadvantaged individuals, and what conditions affect those impacts, in a loan repayment example. Our results show that most mitigated models negatively impact disadvantaged group members in comparison to the unmitigated models. The domain-dependent impacts of model outcomes should help drive the development of future bias mitigation methods.
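To illustrate the distinction the abstract draws between a "fair" model outcome and its impact, the following is a minimal sketch (not the authors' method or data) of a loan-repayment setting. All numbers, including the repayment probability and the score changes, are hypothetical assumptions chosen only to show how a higher approval rate for the disadvantaged group can still yield a worse average impact when many approved loans end in default.

```python
# Minimal illustrative sketch: a "fairer" (higher) approval rate can still
# harm a disadvantaged group on average in a loan-repayment setting.
# All parameters below are hypothetical and for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n = 10_000            # simulated disadvantaged-group applicants
repay_prob = 0.45     # assumed probability an approved applicant repays

# Hypothetical impact model: repaying raises an individual's credit score,
# defaulting lowers it more, and a rejected applicant is unaffected.
GAIN_IF_REPAY = 10.0
LOSS_IF_DEFAULT = -30.0

def mean_impact(approval_rate: float) -> float:
    """Average score change for the group at a given approval rate."""
    approved = rng.random(n) < approval_rate
    repays = rng.random(n) < repay_prob
    impact = np.where(approved,
                      np.where(repays, GAIN_IF_REPAY, LOSS_IF_DEFAULT),
                      0.0)
    return impact.mean()

# An unmitigated model approves few disadvantaged applicants; a mitigated
# model, tuned to equalise approval rates, approves many more of them.
print("unmitigated mean impact:", mean_impact(approval_rate=0.2))
print("mitigated mean impact:  ", mean_impact(approval_rate=0.5))
```

Under these assumed parameters the expected impact per approved applicant is negative (0.45 × 10 + 0.55 × (−30) ≈ −12), so the mitigated model's higher approval rate makes the group-average impact worse, which is the kind of effect the paper examines.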

Original language: English
Title of host publication: AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
Place of publication: Montreal, Canada
Publisher: ACM
Pages: 297-311
Number of pages: 15
ISBN (Electronic): 9798400702310
DOIs
Publication status: Published - 29 Aug 2023

Publication series

Name: AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society

Keywords

  • Fairness
  • Impact
  • Machine Learning
  • Synthetic data

