Abstract
As human–agent collaborations become more prevalent, it is increasingly important for an agent to be able to adapt to its collaborator and explain its own behavior. To do so, the agent needs to identify critical states during the interaction that call for proactive clarifications or behavioral adaptations. In this paper, we explore whether an agent could infer such states from the human's eye gaze by comparing gaze patterns across different situations in a collaborative task. Our findings show that the human's gaze patterns differ significantly between times at which the user is confused about the task, times at which the agent makes an error, and times of normal workflow: during errors, the amount of gaze towards the agent increases, while during confusion, the amount of gaze towards the environment increases. We conclude that these signals could tell the agent what and when to explain.
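As a purely illustrative sketch (not part of the paper), the reported trends could be turned into a simple gaze-based heuristic: the areas of interest, thresholds, and function names below are assumptions for illustration only, not the authors' method or data.

```python
from collections import Counter

# Hypothetical illustration: classify a window of gaze samples by where the
# user is looking, following the reported trends that gaze towards the agent
# increases during agent errors and gaze towards the environment increases
# during user confusion. Labels and thresholds are illustrative assumptions.

AGENT, ENVIRONMENT, TASK = "agent", "environment", "task"

def gaze_proportions(samples):
    """Return the fraction of samples falling on each area of interest."""
    counts = Counter(samples)
    total = max(len(samples), 1)
    return {aoi: counts.get(aoi, 0) / total for aoi in (AGENT, ENVIRONMENT, TASK)}

def infer_state(samples, agent_thresh=0.4, env_thresh=0.4):
    """Rough heuristic: high gaze on the agent suggests an agent error,
    high gaze on the environment suggests user confusion, otherwise normal."""
    p = gaze_proportions(samples)
    if p[AGENT] >= agent_thresh:
        return "possible agent error -> explain own behaviour"
    if p[ENVIRONMENT] >= env_thresh:
        return "possible user confusion -> clarify the task"
    return "normal workflow"

# Example: a window dominated by gaze at the agent.
window = [AGENT] * 5 + [TASK] * 3 + [ENVIRONMENT] * 2
print(infer_state(window))  # possible agent error -> explain own behaviour
```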
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | RO-MAN 2022 - 31st IEEE International Conference on Robot and Human Interactive Communication |
| Subtitle of host publication | Social, Asocial, and Antisocial Robots |
| Pages | 224-229 |
| Number of pages | 6 |
| ISBN (Electronic) | 9781728188591 |
| DOIs | |
| Publication status | Published - 2022 |
Publication series
| Field | Value |
|---|---|
| Name | RO-MAN 2022 - 31st IEEE International Conference on Robot and Human Interactive Communication: Social, Asocial, and Antisocial Robots |
Fingerprint
Dive into the research topics of 'Analysing Eye Gaze Patterns during Confusion and Errors in Human–Agent Collaborations'. Together they form a unique fingerprint.

Projects

- COHERENT - COllaborative HiErarchical Robotic ExplaNaTions
  Coles, A. (Primary Investigator)
  EPSRC Engineering and Physical Sciences Research Council
  1/04/2021 → 31/03/2025
  Project: Research
- Plan and Goal Reasoning for Explainable Autonomous Robots
  Coles, A. (Primary Investigator) & Canal, G. (Co-Investigator)
  1/11/2021 → 31/10/2024
  Project: Research