Abstract
Personalized large language models (LLMs) aim to tailor their outputs to user preferences. Recent advances in parameter-efficient fine-tuning (PEFT) have shown that population-level LLMs can be adapted into personalized LLMs by fine-tuning user-specific parameters on user history. However, user data is typically sparse, making it difficult to adapt LLMs to specific user patterns. To address this challenge, we propose PROgressive PERsonalization (PROPER), a novel progressive learning framework inspired by meso-level theory in social science. PROPER bridges population-level and user-level models by grouping users according to their preferences and adapting the LLM in stages. It combines a Mixture-of-Experts (MoE) structure with Low-Rank Adaptation (LoRA), using a user-aware router to assign users to appropriate groups automatically. In addition, a LoRA-aware router is proposed to facilitate the integration of individual user LoRAs with group-level LoRAs. Experimental results show that PROPER significantly outperforms state-of-the-art models across multiple tasks, demonstrating the effectiveness of our approach. Our code is available at https://github.com/callanwu/PROPER.
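To make the MoE-plus-LoRA design described in the abstract more concrete, below is a minimal PyTorch sketch. It is not the authors' implementation (that lives in the linked repository): it assumes the standard LoRA update W x + B A x with scaling alpha/r and a softmax router, driven by a user embedding, over group-level LoRA experts. All class and parameter names (`GroupLoRALinear`, `user_emb`, `num_groups`) are hypothetical, and the individual-user LoRA and LoRA-aware router mentioned in the abstract are omitted for brevity.

```python
# Minimal sketch of a MoE-of-LoRAs layer with a user-aware router.
# Names and design choices are illustrative assumptions, not the
# authors' implementation (see the linked repository for that).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupLoRALinear(nn.Module):
    """A frozen linear layer plus a set of group-level LoRA experts.

    A user embedding is fed to a router that produces mixture weights
    over the group experts; the resulting mixture is added to the frozen
    base projection (standard LoRA: W x + B A x, scaled by alpha / r).
    """

    def __init__(self, base: nn.Linear, num_groups: int = 4,
                 rank: int = 8, alpha: float = 16.0, user_dim: int = 64):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # population-level weights stay frozen

        in_f, out_f = base.in_features, base.out_features
        self.scale = alpha / rank
        # One (A, B) pair per user group: these are the MoE experts.
        self.A = nn.Parameter(torch.randn(num_groups, rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_groups, out_f, rank))
        # User-aware router: maps a user embedding to group weights.
        self.router = nn.Linear(user_dim, num_groups)

    def forward(self, x: torch.Tensor, user_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, in_f); user_emb: (batch, user_dim)
        gate = F.softmax(self.router(user_emb), dim=-1)              # (batch, G)
        lora_in = torch.einsum("bsi,gri->bgsr", x, self.A)           # down-project
        lora_out = torch.einsum("bgsr,gor->bgso", lora_in, self.B)   # up-project
        mixed = torch.einsum("bg,bgso->bso", gate, lora_out)         # weight by router
        return self.base(x) + self.scale * mixed


# Usage: wrap a projection and route a toy batch through it.
layer = GroupLoRALinear(nn.Linear(128, 128), num_groups=4, rank=8)
out = layer(torch.randn(2, 10, 128), user_emb=torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 10, 128])
```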
Original language | English |
---|---|
Publication status | Accepted/In press - 15 May 2025 |
Event | The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025), Vienna, Austria, 27 Jul 2025 → 1 Aug 2025 (https://2025.aclweb.org/) |
Conference
Conference | The 63rd Annual Meeting of the Association for Computational Linguistics |
---|---|
Country/Territory | Austria |
City | Vienna |
Period | 27/07/2025 → 01/08/2025 |
Internet address | https://2025.aclweb.org/ |
Keywords
- Large Language Models
- Personalization
- Parameter-Efficient Fine-Tuning
- Mixture of Experts