Attesting Biases and Discrimination using Language Semantics

Research output: Contribution to journal › Conference paper › peer-review



AI agents are increasingly deployed and used to make automated decisions that affect our lives on a daily basis. It is imperative to ensure that these systems embed ethical principles and respect human values. We focus on how we can attest whether AI agents treat users fairly without discriminating against particular individuals or groups through biases in language. In particular, we discuss human unconscious biases, how they are embedded in language, and how AI systems inherit those biases by learning from and processing human language. Then, we outline a roadmap for future research to better understand and attest problematic AI biases derived from language.
Original language: English
Pages (from-to): 1
Number of pages: 8
Journal: Autonomous Agents and Multi-Agent Systems
Publication status: Accepted/In press - 17 Mar 2019


  • digital discrimination
  • bias
  • ethics
  • agents
  • NLP

