Explainability in Multi-Agent Path/Motion Planning: User-study-driven Taxonomy and Requirements

Martim Brandao, Masoumeh Mansouri, Areeb Mohammed, Paul Luff, Amanda Coles

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review



Multi-Agent Path Finding (MAPF) and Multi-Robot Motion Planning (MRMP) are complex problems to solve, analyze, and build algorithms for. Automatically generated explanations of algorithm output, by improving human understanding of the underlying problems and algorithms, could thus lead to better user experience, developer knowledge, and MAPF/MRMP algorithm designs. Explanations are contextual, however, and thus developers need a good understanding of the questions that can be asked about algorithm output, the kinds of explanations that exist, and the potential users and uses of explanations in MAPF/MRMP applications. In this paper we provide a first step towards establishing a taxonomy of explanations, and a list of requirements for the development of explainable MAPF/MRMP planners. We use interviews and a questionnaire with expert developers and industry practitioners to identify the kinds of questions, explanations, users, uses, and requirements of explanations that should be considered in the design of such explainable planners. Our insights cover a diverse set of applications: warehouse automation, computer games, and mining.
Original language: English
Title of host publication: International Conference on Autonomous Agents and Multiagent Systems (AAMAS)
Publication status: Accepted/In press - 2022

