Intersectional Experiences of Unfair Treatment Caused by Automated Computational Systems

Tom Van Nuenen*, Jose Such, Mark Cote

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

This paper reports on empirical work conducted to study perceptions of unfair treatment caused by automated computational systems. While the pervasiveness of algorithmic bias has been widely acknowledged, and perceptions of fairness are commonly studied in Human-Computer Interaction, there is a lack of research on how unfair treatment by automated computational systems is experienced by users from disadvantaged and marginalised backgrounds. There is a need for more diversification in terms of the investigated users, domains, and tasks, and regarding the strategies that users employ to reduce harm. To unpack these issues, we ran a prescreened survey of 663 participants, oversampling those with at-risk characteristics. We collected occurrences and types of conflicts regarding unfair and discriminatory treatment and systems, as well as the actions taken towards resolving these situations. Drawing on intersectional research, we combine qualitative and quantitative approaches in order to highlight the nuances around power and privilege in the perceptions of automated computational systems. Among our participants, we discuss experiences of computational essentialism, attribute-based exclusion, and expected harm. We derive suggestions to address these perceptions of unfairness as they occur.

Original language: English
Article number: 445
Journal: Proceedings of the ACM on Human-Computer Interaction
Volume: 6
Issue number: CSCW2
Publication status: Published - 11 Nov 2022

Keywords

  • Algorithmic fairness
  • Automated computational systems
  • Conflicts
  • Intersectionality

