Towards providing explanations for robot motion planning

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review



Recent research in AI ethics has put forth explainability as an essential principle for AI algorithms. However, it is still unclear how this is to be implemented in practice for specific classes of algorithms, such as motion planners. In this paper we unpack the concept of explanation in the context of motion planning, introducing a new taxonomy of kinds and purposes of explanations in this context. We focus not only on explanations of failure (previously addressed in the motion planning literature) but also on contrastive explanations, which explain why a trajectory A was returned by a planner instead of a different trajectory B expected by the user. We develop two explainable motion planners, one based on optimization and the other on sampling, which are capable of answering failure and contrastive questions. We use simulation experiments and a user study to motivate a technical and social research agenda.
Original language: English
Title of host publication: 2021 IEEE International Conference on Robotics and Automation
Publication status: Accepted/In press - 2021
