Abstract
Role-playing is important for Large Language Models (LLMs) to follow diverse instructions while maintaining role identity and the role's pre-defined ability limits. Existing role-playing datasets mostly focus on controlling role style and knowledge boundaries, but overlook role-playing in instruction-following scenarios. We introduce RoleMRC, a fine-grained composite benchmark for role-playing and instruction-following that includes: (1) multi-turn dialogues between ideal roles and humans, covering free chats and discussions about given passages; (2) role-playing machine reading comprehension, involving responses, refusals, and attempts according to passage answerability and role ability; and (3) more complex scenarios with nested, multi-turn, and prioritized instructions. The final RoleMRC features a 10.2k role-profile meta-pool, 37.9k well-synthesized role-playing instructions, and 1.4k testing samples. We develop a pipeline to quantitatively evaluate the fine-grained role-playing and instruction-following capabilities of several mainstream LLMs, as well as of models fine-tuned on our data. Moreover, cross-evaluation on external role-playing datasets confirms that models fine-tuned on RoleMRC improve instruction-following without compromising general role-playing and reasoning capabilities. We also probe neural-level activation maps of the different capabilities in the post-tuned LLMs.
| Original language | English |
| --- | --- |
| Publication status | Published - 2025 |
| Event | The 63rd Annual Meeting of the Association for Computational Linguistics: ACL 2025 - Vienna, Austria |
| Duration | 27 Jul 2025 → 1 Aug 2025 |
| Internet address | https://2025.aclweb.org/ |
Conference
| Conference | The 63rd Annual Meeting of the Association for Computational Linguistics |
| --- | --- |
| Country/Territory | Austria |
| City | Vienna |
| Period | 27/07/2025 → 1/08/2025 |
| Internet address | https://2025.aclweb.org/ |