How to Use Word Embeddings for Natural Language Processing

Research output: Other contribution



This How-to Guide introduces computational methods for the analysis of word meaning, covering the main concepts of distributional semantics and word embedding models. At their core, distributional approaches to lexical semantics (i.e., the study of word meanings) are based on the idea that by looking at the context in which a word is used (for instance, the other words it occurs with in a text), we can infer important features of its meaning. This simple idea has turned out to be extremely powerful because it can be implemented computationally directly from text data. Word embeddings have been used extensively in social science research: they can highlight both expected and unexpected similarities between words, and show how these similarities change across factors such as time. This guide summarises not only the strengths of word embeddings for applied text-based research but also their limitations and the features that need to be considered when using or training them in your own research.
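The core distributional idea described above — inferring similarity of meaning from similarity of context — can be illustrated with a minimal sketch. The toy corpus, window size, and helper names below are illustrative assumptions, not drawn from the guide itself; real word embedding models (e.g. word2vec or GloVe) learn dense vectors rather than raw co-occurrence counts, but the intuition is the same.

```python
from collections import Counter
from math import sqrt

# Hypothetical toy corpus: each sentence is a list of tokens.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
    "stocks rose on the news".split(),
]

def cooccurrence_vectors(sentences, window=2):
    """For each word, count the other words appearing within `window` tokens."""
    vectors = {}
    for sent in sentences:
        for i, word in enumerate(sent):
            ctx = vectors.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    ctx[sent[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(u[w] * v[w] for w in u)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in similar contexts ("the ... sat on"), so their
# context vectors are more similar than those of "cat" and "stocks".
print(cosine(vecs["cat"], vecs["dog"]))
print(cosine(vecs["cat"], vecs["stocks"]))
```

Even this count-based sketch captures the key behaviour the guide describes: words used in similar contexts end up close together in the vector space, while words from different topical domains are far apart.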
Original language: English
Type: Learning support
Media of output: Online
Publisher: SAGE Publications Ltd
Number of pages: 32
Volume: SAGE Research Methods: Doing Research Online
ISBN (Electronic): 9781529609578
Publication status: Published - 12 Jul 2022


