Abstract
Automatic summarization systems extract relevant information from documents to produce concise summaries that preserve the original content. Traditionally a human task, automatic summarization has been a research challenge since the 1950s and continues to evolve with the advent of large language models (LLMs). The main summarization techniques fall into two categories: extractive and abstractive. Extractive summarization selects relevant sections of the original text to form the summary, while abstractive summarization generates an entirely new text based on the information in the original document. Evaluation methods have evolved from traditional n-gram overlap metrics such as ROUGE and BLEU to the use of LLMs that assess summary quality in terms of writing, completeness, conciseness, and factuality. This tutorial introduces the main summarization techniques and explores both historical and contemporary approaches to their evaluation.
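The n-gram overlap idea behind metrics such as ROUGE can be sketched in a few lines. The snippet below is a minimal illustration of ROUGE-1 recall (the fraction of reference unigrams that also appear in the candidate summary); it is not the full ROUGE implementation, which additionally covers stemming, ROUGE-2, ROUGE-L, and F-measures, and the example sentences are invented for illustration.

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Fraction of reference unigrams also found in the candidate (clipped counts)."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clip each word's overlap at the candidate's count, as ROUGE does.
    overlap = sum(min(n, cand_counts[w]) for w, n in ref_counts.items())
    return overlap / sum(ref_counts.values())

reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
print(f"ROUGE-1 recall: {rouge1_recall(reference, candidate):.2f}")  # 5 of 6 reference words matched
```

In practice, evaluations use an established implementation (e.g. a packaged ROUGE scorer) rather than hand-rolled counts, but the core computation is this clipped unigram overlap.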
Original language | English
---|---
Title of host publication | 36th International Conference on Testing Software and Systems
Subtitle of host publication | ICTSS 2024
Publication status | Accepted/In press - 13 Oct 2024
Automatic Summarization Evaluation: Methods and Practices

Projects
- MuSE: Multi-lingual Summarizer Evaluation framework using LLM Testing and Adversarial Strategies
  Menendez Benito, H. (Primary Investigator)
  12/08/2024 → 11/11/2024
  Project: Research (Finished)