When Do People Want an Explanation from a Robot?

Lennart Wachowiak*, Andrew Fenn, Haris Kamran, Andrew Coles, Oya Celiktutan, Gerard Canal

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Explanations are a critical topic in AI and robotics, and their importance for generating trust and enabling successful human–robot interactions has been widely recognized. However, it remains an open question when and in which interaction contexts users most want an explanation from a robot. In our pre-registered study with 186 participants, we set out to identify a set of scenarios in which users show a strong need for explanations. Participants watch 16 videos portraying seven distinct situation types, ranging from successful human–robot interactions to robot errors and robot inabilities. After each video, they indicate whether and how they want the robot to communicate following the interaction.
The results provide a set of interactions, grounded in the literature and verified empirically, in which people show a need for an explanation. Moreover, we rank these scenarios by how strongly users think an explanation is necessary and find statistically significant differences between them. Comparing explanations with other possible response types, such as the robot apologizing or asking for help, we find that why-explanations are always among the two highest-rated responses, except when the robot simply acts normally and successfully. This stands in stark contrast to the other response types, which are useful only in a much more restricted set of situations. Lastly, we test whether individual factors, such as a participant's general attitude towards robots, influence response preferences, but find no significant correlations. Our results can guide roboticists in designing more user-centered and transparent interactions and help explainability researchers develop more targeted explanations.
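
The abstract does not name the statistical procedures used, so the following is only a minimal sketch of what the described analysis could look like in Python with SciPy: a repeated-measures comparison of explanation-need ratings across the seven situation types, and a correlation check against an individual factor. The Friedman test, the Spearman correlation, and all data and variable names are illustrative assumptions, not the authors' reported method.

    # Illustrative sketch only; not the authors' analysis code.
    import numpy as np
    from scipy.stats import friedmanchisquare, spearmanr

    rng = np.random.default_rng(0)

    # Hypothetical ratings: 186 participants x 7 situation types, each
    # rating how necessary an explanation is (e.g., on a 1-7 Likert scale).
    ratings = rng.integers(1, 8, size=(186, 7))

    # Repeated-measures comparison of ratings across the situation types.
    stat, p = friedmanchisquare(*(ratings[:, i] for i in range(7)))
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

    # Checking whether an individual factor (e.g., general attitude towards
    # robots) correlates with a participant's mean explanation-need rating.
    attitude = rng.normal(size=186)  # hypothetical attitude scores
    rho, p_corr = spearmanr(attitude, ratings.mean(axis=1))
    print(f"Spearman rho = {rho:.2f}, p = {p_corr:.4f}")

A Friedman test is a common non-parametric choice for comparing repeated Likert-style ratings across conditions, which is why it serves as the stand-in here.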
Original language: English
Title of host publication: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’24)
Publisher: ACM
Pages: 752-761
Number of pages: 10
ISBN (Electronic): 9798400703225
DOIs
Publication status: Published - 11 Mar 2024

Publication series

Name: ACM/IEEE International Conference on Human-Robot Interaction
ISSN (Electronic): 2167-2148

Keywords

  • robotics
  • user study
  • user-centered design
  • explainable AI
  • explainability
  • HRI
  • AI
