Abstract
Understanding in-context learning (ICL), the capability that allows large language models (LLMs) to perform tasks proficiently from demonstration examples alone, is of utmost importance. This importance stems not only from better utilizing the capability across various tasks, but also from proactively identifying and mitigating the potential risks that may arise alongside it, including concerns about truthfulness, bias, and toxicity. In this paper, we present a thorough survey of the interpretation and analysis of in-context learning. First, we provide a concise introduction to the background and definition of ICL. Then, we give an overview of advancements from two perspectives: 1) a theoretical perspective, emphasizing studies on mechanistic interpretability and delving into the mathematical foundations behind ICL; and 2) an empirical perspective, concerning studies that empirically analyze the factors associated with ICL. We conclude by discussing open questions and remaining challenges, and by suggesting potential avenues for future research. We believe our work establishes a basis for further exploration of the interpretation of in-context learning. To aid this effort, we have created a repository of resources that will be continually updated.
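For readers unfamiliar with the setting, the following is a minimal sketch of what "learning from demonstration examples" means in practice: labeled examples are placed directly in the prompt, and the model infers the task with no weight updates. The sentiment task and examples below are illustrative only, not drawn from the paper.

```python
# Illustrative few-shot (in-context) prompt construction. The demonstrations
# and query are hypothetical; any instruction-following LLM could consume
# the resulting prompt.

demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret wasting two hours on this film.", "negative"),
]
query = "A stunning, heartfelt performance by the entire cast."

# Render each demonstration as an input/label pair, then append the
# unlabeled query the model is expected to complete.
prompt = "".join(
    f"Review: {text}\nSentiment: {label}\n\n" for text, label in demonstrations
)
prompt += f"Review: {query}\nSentiment:"

print(prompt)  # this string would be sent to an LLM; its completion is the prediction
```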
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing |
| Editors | Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen |
| Place of publication | Miami, Florida, USA |
| Publisher | Association for Computational Linguistics |
| Pages | 14365–14378 |
| Number of pages | 14 |
| DOIs | |
| Publication status | Published - 1 Nov 2024 |