The Mystery of In-Context Learning: A Comprehensive Survey on Interpretation and Analysis

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Understanding the in-context learning (ICL) capability that enables large language models (LLMs) to perform proficiently from demonstration examples is of utmost importance. This importance stems not only from better utilization of the capability across various tasks, but also from the proactive identification and mitigation of potential risks that may arise alongside it, including concerns regarding truthfulness, bias, and toxicity. In this paper, we present a thorough survey on the interpretation and analysis of in-context learning. First, we provide a concise introduction to the background and definition of in-context learning. Then, we give an overview of advancements from two perspectives: 1) a theoretical perspective, emphasizing studies on mechanistic interpretability and delving into the mathematical foundations behind ICL; and 2) an empirical perspective, concerning studies that empirically analyze the factors associated with ICL. We conclude by discussing the open questions and challenges encountered, and by suggesting potential avenues for future research. We believe our work establishes a basis for further exploration into the interpretation of in-context learning. To aid this effort, we have created a repository containing resources that will be continually updated.
Original language: Undefined/Unknown
Title of host publication: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Place of Publication: Miami, Florida, USA
Publisher: Association for Computational Linguistics
Pages: 14365-14378
Number of pages: 14
Publication status: Published - 1 Nov 2024