Updating Large Language Models’ Memories with Time Constraints

Xin Wu, Yuqi Bu, Yi Cai, Tao Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

By incorporating the latest external knowledge, large language models (LLMs) can modify their internal memory. However, in practical applications, LLMs may encounter outdated information, necessitating the filtering of such data and the updating of knowledge beyond internal memory. This paper explores whether LLMs can selectively update their memories based on the time constraints between internal memory and external knowledge. We evaluate existing LLMs using three types of data that exhibit different time constraints. Our experimental results reveal the challenges most LLMs face with time-constrained knowledge and highlight the differences in how various LLMs handle such information. Additionally, to address the difficulties LLMs encounter in understanding time constraints, we propose a two-stage decoupling framework that separates the identification and computation of time constraints into a symbolic system. Experimental results demonstrate that the proposed framework yields an improvement of over 60% in ChatGPT's performance, and achieves a 12-24% enhancement in the state-of-the-art LLM GPT-4.
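The abstract describes the framework only at a high level. As a rough, hypothetical illustration of the decoupling idea (not the paper's implementation), the sketch below assumes time constraints can be normalized to calendar dates: an identification step extracts the timestamp attached to each fact (a role the paper assigns to the LLM), and a symbolic computation step compares the dates to decide whether the external knowledge should replace the internal memory. The names `Fact`, `extract_timestamp`, and `should_update` are illustrative assumptions.

```python
# Minimal sketch of a two-stage decoupling: identify time constraints,
# then delegate the temporal comparison to a symbolic system.
from dataclasses import dataclass
from datetime import date

@dataclass
class Fact:
    statement: str    # e.g. "The CEO of X is Alice."
    valid_from: date  # time constraint attached to the statement

def extract_timestamp(text: str) -> date:
    """Stage 1 (identification): in the paper this role is played by the LLM;
    here we assume the time expression is already an ISO date string."""
    return date.fromisoformat(text)

def should_update(internal: Fact, external: Fact) -> bool:
    """Stage 2 (computation): a symbolic comparison decides whether the
    external knowledge supersedes the internal memory."""
    return external.valid_from > internal.valid_from

internal = Fact("The CEO of X is Alice.", extract_timestamp("2021-03-01"))
external = Fact("The CEO of X is Bob.", extract_timestamp("2023-06-15"))

if should_update(internal, external):
    # Only a newer external fact is allowed to override internal memory.
    print("Update memory with:", external.statement)
else:
    print("Keep internal memory:", internal.statement)
```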
Original language: English
Title of host publication: The 2024 Conference on Empirical Methods in Natural Language Processing
Publication status: Accepted/In press - 1 Oct 2024
