Can Large Language Models Understand Argument Schemes?

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

Argument schemes represent stereotypical patterns of reasoning that occur in everyday arguments. However, despite their usefulness, argument scheme classification, that is, classifying natural language arguments according to the schemes they instantiate, is an under-explored task in NLP. In this paper, we present a systematic evaluation of large language models (LLMs) for classifying argument schemes based on Walton's taxonomy. We experiment with seven LLMs under zero-shot, few-shot, and chain-of-thought prompting, and explore two strategies for enhancing task instructions: employing formal definitions and LLM-generated descriptions. Our analysis of both manually annotated and automatically generated arguments, including enthymemes, indicates that while larger models exhibit satisfactory performance in identifying argument schemes, challenges remain for smaller models. Our work offers the first comprehensive assessment of LLMs in identifying argument schemes, and provides insights for advancing reasoning capabilities in computational argumentation.
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics
Subtitle of host publication: ACL 2025 - Findings
Publisher: Association for Computational Linguistics (ACL)
Publication status: Accepted/In press - 2025

