Does GPT-3 Grasp Metaphors? Identifying Metaphor Mappings with Generative Language Models

Lennart Wachowiak, Dagmar Gromann

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

14 Citations (Scopus)

Abstract

Conceptual metaphors present a powerful cognitive vehicle to transfer knowledge structures from a source to a target domain. Prior neural approaches focus on detecting whether natural language sequences are metaphoric or literal. We believe that to truly probe metaphoric knowledge in pre-trained language models, their capability to detect this transfer should be investigated. To this end, this paper proposes to probe the ability of GPT-3 to detect metaphoric language and predict the metaphor’s source domain without any pre-set domains. We experiment with different training sample configurations for fine-tuning and few-shot prompting on two distinct datasets. When provided with 12 few-shot samples in the prompt, GPT-3 generates the correct source domain for a new sample with an accuracy of 65.15% in English and 34.65% in Spanish. GPT-3’s most common error is a hallucinated source domain for which no indicator is present in the sentence. Other common errors include identifying a sequence as literal even though a metaphor is present and predicting the wrong source domain based on specific words in the sequence that are not metaphorically related to the target domain.
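The abstract describes a 12-shot prompting setup but not the exact prompt template, few-shot samples, or GPT-3 engine used. The sketch below shows one plausible way such a setup could look, assuming the legacy (pre-1.0) openai Python client; the example sentences, source-domain labels, and prompt format are hypothetical placeholders, not the paper's actual materials.

```python
# Hedged sketch: few-shot source-domain prediction with GPT-3.
# The labeled examples and prompt template below are illustrative
# assumptions; the paper's actual samples and engine are not given here.
import openai

# Hypothetical (sentence, source domain) pairs; the paper uses 12 such
# samples in the prompt, of which only two are sketched here.
FEW_SHOT = [
    ("She attacked every weak point in my argument.", "war"),
    ("We are at a crossroads in our relationship.", "journey"),
    # ... ten more labeled samples for a total of 12 ...
]

def build_prompt(sentence: str) -> str:
    """Concatenate the labeled examples, then the new sentence to label."""
    lines = [f"Sentence: {s}\nSource domain: {d}\n" for s, d in FEW_SHOT]
    lines.append(f"Sentence: {sentence}\nSource domain:")
    return "\n".join(lines)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model; the exact engine is an assumption
    prompt=build_prompt("Prices are climbing again."),
    max_tokens=5,
    temperature=0.0,           # deterministic decoding for evaluation
)
print(response.choices[0].text.strip())  # e.g. "vertical movement"
```

Because the model generates the domain as free text rather than choosing from pre-set classes, evaluation compares the generated string against the gold source domain, which is consistent with the abstract's framing of prediction "without any pre-set domains".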
Original language: English
Title of host publication: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Publisher: Association for Computational Linguistics (ACL)
Pages: 1018–1032
Publication status: Published - Jul 2023

Keywords

  • natural language processing
  • NLP
  • conceptual metaphor
  • cognitive linguistics
