Urban Dictionary Embeddings for Slang NLP Applications

Steven Wilson, Walid Magdy, Barbara McGillivray, Kiran Garimella, Gareth Tyson

Research output: Conference paper (chapter in book/report/conference proceeding), peer-reviewed

21 Citations (Scopus)

Abstract

The choice of the corpus on which word embeddings are trained can have a sizable effect on the learned representations, the types of analyses that can be performed with them, and their utility as features for machine learning models. To contribute to the existing sets of pre-trained word embeddings, we introduce and release the first set of word embeddings trained on the content of Urban Dictionary, a crowd-sourced dictionary of slang words and phrases. We show that although these embeddings are trained on fewer total tokens (at least an order of magnitude fewer than most popular pre-trained embeddings), they perform well across a range of common word embedding evaluations, from semantic similarity to word clustering tasks. Further, for some extrinsic tasks such as sentiment analysis and sarcasm detection on social media data, where knowledge of colloquial language is expected to help, initializing classifiers with the Urban Dictionary embeddings resulted in improved performance compared to initializing with a range of other well-known, pre-trained embeddings that are orders of magnitude larger in size.
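The intrinsic evaluations mentioned above (semantic similarity) typically score word pairs by the cosine similarity of their vectors. The sketch below illustrates this with toy vectors; in practice the released Urban Dictionary embeddings would be loaded from their distribution files (e.g. with gensim's `KeyedVectors`), and the words and vector values here are illustrative only, not taken from the paper.

```python
import math

# Toy stand-in for pre-trained embeddings: tiny 3-dimensional vectors.
# Real embeddings would be loaded from the released vector files.
embeddings = {
    "lit":    [0.9, 0.1, 0.3],
    "fire":   [0.8, 0.2, 0.4],
    "boring": [-0.7, 0.5, 0.1],
}

def cosine_similarity(a, b):
    """Cosine similarity, the standard score used in word-similarity evaluations."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Slang near-synonyms should score higher than unrelated pairs.
sim_slang = cosine_similarity(embeddings["lit"], embeddings["fire"])
sim_contrast = cosine_similarity(embeddings["lit"], embeddings["boring"])
print(f"lit/fire: {sim_slang:.3f}  lit/boring: {sim_contrast:.3f}")
```

A similarity benchmark then correlates these model scores with human-annotated similarity ratings (e.g. via Spearman's rank correlation) over many word pairs.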
Original language: English
Title of host publication: Proceedings of The 12th Language Resources and Evaluation Conference
Place of publication: Marseille, France
Publisher: European Language Resources Association (ELRA)
Pages: 4764-4773
Number of pages: 10
ISBN (Print): 979-10-95546-34-4
Publication status: Published - 1 May 2020
