Automatic Summarization Evaluation: Methods and Practices

Hector D. Menendez, Aidan Dakhama

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Automatic summarization systems extract relevant information from documents to produce concise summaries that preserve the essential information of the original. Traditionally a human task, automatic summarization has been a research challenge since the 1950s and continues to evolve with the advent of large language models (LLMs). The main techniques used in summarization are statistical and fall into two categories: extractive and abstractive summarization. Extractive summarization selects relevant sections of the original text to form the summary, while abstractive summarization generates entirely new text based on the information in the original document. Evaluation methods for these techniques have evolved from traditional lexical-overlap metrics such as ROUGE and BLEU to the use of advanced LLMs that assess summary quality in terms of writing, completeness, conciseness, and factuality. This tutorial introduces the main summarization techniques and surveys both historical and contemporary approaches to their evaluation.
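
As a minimal illustration of the traditional overlap metrics named in the abstract (this sketch is not taken from the paper itself, and the example texts are invented), the snippet below scores a candidate summary against a reference using ROUGE, via the open-source rouge-score package, and smoothed sentence-level BLEU, via NLTK:

```python
# pip install rouge-score nltk
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Invented example texts, for illustration only.
reference = "Automatic summarization condenses documents while preserving the key information."
candidate = "Summarization systems condense documents and keep the key information."

# ROUGE: recall-oriented n-gram and longest-common-subsequence overlap.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)
print("ROUGE-1 F1:", rouge["rouge1"].fmeasure)
print("ROUGE-L F1:", rouge["rougeL"].fmeasure)

# BLEU: precision-oriented n-gram overlap; smoothing avoids zero scores
# on short texts where higher-order n-grams do not match.
bleu = sentence_bleu(
    [reference.split()],            # list of tokenized references
    candidate.split(),              # tokenized candidate
    smoothing_function=SmoothingFunction().method1,
)
print("BLEU:", bleu)
```

ROUGE reports recall, precision, and F-measure per variant, while BLEU yields a single precision-weighted score; the tutorial contrasts these lexical metrics with LLM-based judgments of completeness, conciseness, and factuality.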
Original language: English
Title of host publication: 36th International Conference on Testing Software and Systems
Subtitle of host publication: ICTSS 2024
Publication status: Accepted/In press - 13 Oct 2024
