Abstract
The choice of corpus on which word embeddings are trained can have a sizable effect on the learned representations, the types of analyses that can be performed with them, and their utility as features for machine learning models. To contribute to the existing sets of pre-trained word embeddings, we introduce and release the first set of word embeddings trained on the content of Urban Dictionary, a crowd-sourced dictionary of slang words and phrases. We show that although these embeddings are trained on fewer total tokens (by at least an order of magnitude compared to most popular pre-trained embeddings), they perform well across a range of common word embedding evaluations, from semantic similarity to word clustering tasks. Further, for extrinsic tasks on social media data such as sentiment analysis and sarcasm detection, which we expect to require some knowledge of colloquial language, initializing classifiers with the Urban Dictionary Embeddings resulted in improved performance compared to initializing with a range of other well-known pre-trained embeddings that are an order of magnitude larger in size.
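The paper does not include usage code in this record, but the workflow it describes (loading pre-trained vectors, probing them with similarity queries, and using them to initialize a classifier's embedding layer) is a standard one. Below is a minimal sketch of that workflow using gensim; the file name `ud_embeddings.vec`, the assumed word2vec text format, and the toy vocabulary are placeholders for illustration, not artifacts from the paper's actual release.

```python
# A minimal sketch (not from the paper) of consuming released embeddings:
# load vectors with gensim, run a similarity probe, and build an
# initialization matrix for a downstream classifier's embedding layer.
import numpy as np
from gensim.models import KeyedVectors

# Load pre-trained vectors; word2vec text format is assumed here.
vectors = KeyedVectors.load_word2vec_format("ud_embeddings.vec", binary=False)

# Intrinsic probe: nearest neighbours by cosine similarity, in the spirit
# of the semantic-similarity evaluations mentioned in the abstract.
print(vectors.most_similar("salty", topn=5))

# Extrinsic use: stack one row per vocabulary word to initialize an
# embedding layer, falling back to a zero vector for out-of-vocabulary words.
vocab = ["salty", "lit", "ghost"]  # toy vocabulary for illustration
dim = vectors.vector_size
init_matrix = np.vstack([
    vectors[w] if w in vectors else np.zeros(dim)
    for w in vocab
])
print(init_matrix.shape)  # (len(vocab), dim)
```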
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 12th Language Resources and Evaluation Conference |
| Place of Publication | Marseille, France |
| Publisher | European Language Resources Association (ELRA) |
| Pages | 4764-4773 |
| Number of pages | 10 |
| ISBN (Print) | 979-10-95546-34-4 |
| Publication status | Published - 1 May 2020 |