PlanCollabNL: Leveraging Large Language Models for Adaptive Plan Generation in Human-Robot Collaboration

Silvia Izquierdo-Badiola*, Gerard Canal, Carlos Rizzo, Guillem Alenyà

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

"Hey, robot. Let's tidy up the kitchen. By the way, I have back pain today". How can a robotic system devise a shared plan with an appropriate task allocation from this abstract goal and agent condition? Classical AI task planning has been explored for this purpose, but it involves a tedious definition of an inflexible planning problem. Large Language Models (LLMs) have shown promising generalisation capabilities in robotics decision-making through knowledge extraction from Natural Language (NL). However, the translation of NL information into constrained robotics domains remains a challenge. In this paper, we use LLMs as translators between NL information and a structured AI task planning problem, targeting human-robot collaborative plans. The LLM generates information that is encoded in the planning problem, including specific subgoals derived from an NL abstract goal, as well as recommendations for subgoal allocation based on NL agent conditions. The framework, PlanCollabNL, is evaluated for a number of goals and agent conditions, and the results show that correct and executable plans are found in most cases. With this framework, we intend to add flexibility and generalisation to HRC plan generation, eliminating the need for a manual and laborious definition of restricted planning problems and agent models.
Original language: English
Title of host publication: IEEE International Conference on Robotics and Automation (ICRA)
Publisher: IEEE
Number of pages: 7
Publication status: Published - May 2024
