Discovering and Interpreting Biased Concepts in Online Communities

Research output: Contribution to journal › Article › peer-review


Abstract

Language carries implicit human biases, functioning both as a reflection and a perpetuation of the stereotypes that people carry with them. Recently, ML-based NLP methods such as word embeddings have been shown to learn such language biases with striking accuracy. This capability of word embeddings has been successfully exploited as a tool to quantify and study human biases. However, previous studies only consider a predefined set of biased concepts to test (e.g., whether gender is more or less associated with particular jobs), or merely discover biased words without helping to understand their meaning at the conceptual level. As such, these approaches either fail to find biased concepts that have not been defined in advance, or produce biases that are difficult to interpret and study. This can make existing approaches unsuitable for discovering and interpreting biases in online communities, as such communities may carry different biases from those in mainstream culture. This paper improves upon, extends, and evaluates our previous data-driven method to automatically discover and help interpret biased concepts encoded in word embeddings. We apply this approach to study the biased concepts present in the language used in online communities and experimentally show the validity and stability of our method.
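The bias-quantification idea the abstract refers to (e.g., whether gender is more or less associated with particular jobs) is commonly measured with a WEAT-style association score: a word's mean cosine similarity to one attribute set minus its mean similarity to another. The sketch below illustrates that score on toy hand-made vectors; it is not the paper's own discovery method, and the example words and 3-dimensional "embeddings" are purely illustrative.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # WEAT-style score: mean similarity to attribute set A
    # minus mean similarity to attribute set B.
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

# Toy vectors standing in for trained word embeddings (illustrative only).
emb = {
    "nurse":    np.array([0.9, 0.1, 0.0]),
    "engineer": np.array([0.1, 0.9, 0.0]),
    "she":      np.array([1.0, 0.0, 0.0]),
    "he":       np.array([0.0, 1.0, 0.0]),
}

A = [emb["she"]]  # attribute set 1
B = [emb["he"]]   # attribute set 2

print(association(emb["nurse"], A, B) > 0)     # leans toward "she"
print(association(emb["engineer"], A, B) < 0)  # leans toward "he"
```

A positive score means the target word sits closer to the first attribute set in the embedding space; with real embeddings the attribute sets would contain many words each, and significance would be assessed over permutations of the target sets.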

Original language: English
Journal: IEEE Transactions on Knowledge and Data Engineering
DOIs
Publication status: Accepted/In press - 2021

