Not So Fair: The Impact of Presumably Fair Machine Learning Models

MacKenzie Jorgensen, Hannah Richert, Elizabeth Black, Natalia Criado, Jose Such

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

When mitigation methods are applied to make machine learning models fairer in fairness-related classification settings, there is an assumption that the disadvantaged group should be better off than if no fairness mitigation method had been applied. However, this is a potentially dangerous assumption, because a "fair" model outcome does not automatically imply a positive impact for a disadvantaged individual; they could still be negatively impacted. Modeling and accounting for those impacts is key to ensuring that mitigated models do not unintentionally harm individuals. We investigate whether mitigated models can still negatively impact disadvantaged individuals, and which conditions affect those impacts, in a loan repayment example. Our results show that most mitigated models negatively impact disadvantaged group members compared with the unmitigated models. The domain-dependent impacts of model outcomes should help drive future mitigation method development.
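The abstract does not spell out the mitigation methods or the impact model used in the paper, so the following is only a minimal illustrative sketch of the kind of comparison it describes: a hypothetical logistic-regression loan classifier on synthetic data, a simple per-group threshold adjustment as a stand-in fairness mitigation, and changes in loan approvals for the disadvantaged group as a crude proxy for individual impact. All names, features, and thresholds below are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical synthetic loan data: group 1 is the disadvantaged group,
# drawn with lower credit scores and incomes on average.
n = 5000
group = rng.integers(0, 2, n)                    # 0 = advantaged, 1 = disadvantaged
credit = rng.normal(600 - 40 * group, 50, n)     # assumed credit-score feature
income = rng.normal(50 - 10 * group, 15, n)      # assumed income feature
repay_prob = 1 / (1 + np.exp(-(credit - 580) / 40))
repaid = (rng.random(n) < repay_prob).astype(int)
X = np.column_stack([credit, income])

# Unmitigated model: plain logistic regression with a single 0.5 threshold.
clf = LogisticRegression().fit(X, repaid)
scores = clf.predict_proba(X)[:, 1]
unmitigated = scores >= 0.5

# Stand-in "mitigation": per-group thresholds chosen so both groups are
# approved at the same overall rate (a rough demographic-parity
# post-processing step, not the paper's method).
target_rate = unmitigated.mean()
mitigated = np.zeros(n, dtype=bool)
for g in (0, 1):
    mask = group == g
    thresh = np.quantile(scores[mask], 1 - target_rate)
    mitigated[mask] = scores[mask] >= thresh

# Crude per-individual impact for the disadvantaged group: who gains or
# loses a loan approval relative to the unmitigated model.
dis = group == 1
gained = (mitigated & ~unmitigated & dis).sum()
lost = (~mitigated & unmitigated & dis).sum()
print(f"Disadvantaged group: {gained} newly approved, {lost} newly denied under mitigation")
```

Counting approval flips is only one possible impact measure; the paper's point is that such impacts are domain dependent and need to be modeled explicitly rather than assumed to be positive.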
Original language: English
Title of host publication: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
Place of Publication: Montreal, Canada
Publisher: ACM
Publication status: Published - Aug 2023

Keywords

  • Fairness
  • Impact
  • Machine Learning
  • Synthetic data
