TY - CHAP
T1 - Analysing Eye Gaze Patterns during Confusion and Errors in Human–Agent Collaborations
AU - Wachowiak, Lennart
AU - Tisnikar, Peter
AU - Canal, Gerard
AU - Coles, Andrew
AU - Leonetti, Matteo
AU - Celiktutan, Oya
N1 - Funding Information:
This work was supported by UK Research and Innovation (EP/S023356/1), in the UKRI CDT in Safe and Trusted AI. This work was also supported by the CHIST-ERA project COHERENT (EP/V062506/1) and the EPSRC project LISI (EP/V010875/1). Gerard Canal was supported by the Royal Academy of Engineering and the Office of the Chief Science Adviser for National Security under the UK IC Postdoctoral Research Fellowship programme.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - As human–agent collaborations become more prevalent, it is increasingly important for an agent to be able to adapt to its collaborator and explain its own behavior. To do so, it needs to be able to identify critical states during the interaction that call for proactive clarifications or behavioral adaptations. In this paper, we explore whether the agent could infer such states from the human's eye gaze. To this end, we compare gaze patterns across different situations in a collaborative task. Our findings show that the human's gaze patterns significantly differ between times at which the user is confused about the task, times at which the agent makes an error, and times of normal workflow. During errors, the amount of gaze towards the agent increases, while during confusion, the amount of gaze towards the environment increases. We conclude that these signals could tell the agent what and when to explain.
AB - As human–agent collaborations become more prevalent, it is increasingly important for an agent to be able to adapt to its collaborator and explain its own behavior. To do so, it needs to be able to identify critical states during the interaction that call for proactive clarifications or behavioral adaptations. In this paper, we explore whether the agent could infer such states from the human's eye gaze. To this end, we compare gaze patterns across different situations in a collaborative task. Our findings show that the human's gaze patterns significantly differ between times at which the user is confused about the task, times at which the agent makes an error, and times of normal workflow. During errors, the amount of gaze towards the agent increases, while during confusion, the amount of gaze towards the environment increases. We conclude that these signals could tell the agent what and when to explain.
UR - http://www.scopus.com/inward/record.url?scp=85140790012&partnerID=8YFLogxK
U2 - 10.1109/RO-MAN53752.2022.9900589
DO - 10.1109/RO-MAN53752.2022.9900589
M3 - Conference paper
T3 - RO-MAN 2022 - 31st IEEE International Conference on Robot and Human Interactive Communication: Social, Asocial, and Antisocial Robots
SP - 224
EP - 229
BT - RO-MAN 2022 - 31st IEEE International Conference on Robot and Human Interactive Communication
ER -