A Multimodal Dataset for Robot Learning to Imitate Social Human-Human Interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

Humans tend to use various nonverbal signals to communicate their messages to their interaction partners. Previous studies utilised this channel as an essential cue for developing automatic approaches to understanding, modelling and synthesising individual behaviours in human-human and human-robot interaction settings. In small-group interactions, however, an essential aspect of communication is the dynamic exchange of social signals among interlocutors. This paper introduces LISI-HHI (Learning to Imitate Social Human-Human Interaction), a dataset of dyadic human interactions recorded in a wide range of communication scenarios. The dataset contains multiple modalities captured simultaneously by high-accuracy sensors, including motion capture, RGB-D cameras, eye trackers, and microphones. LISI-HHI is designed to be a benchmark for HRI and multimodal learning research, for modelling intra- and interpersonal nonverbal signals in social interaction contexts, and for investigating how to transfer such models to social robots.
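The abstract does not describe the dataset's file format, but as a rough illustration of what working with simultaneously captured, differently sampled modalities can involve, the sketch below resamples streams recorded at different rates onto a shared timeline by nearest-timestamp matching. The sensor rates, variable names, and data shapes are assumptions for illustration only, not the published LISI-HHI layout.

# Hypothetical sketch: aligning asynchronously sampled modalities onto a common
# timeline. Rates, names, and contents below are illustrative assumptions,
# not the LISI-HHI release format.
import numpy as np

def align_to_reference(ref_times, stream_times, stream_values):
    """For each reference timestamp, pick the nearest sample of a stream."""
    idx = np.searchsorted(stream_times, ref_times)            # insertion indices
    idx = np.clip(idx, 1, len(stream_times) - 1)
    left, right = stream_times[idx - 1], stream_times[idx]
    idx -= (ref_times - left) < (right - ref_times)           # step back if the left neighbour is closer
    return stream_values[idx]

# Assumed per-modality clocks (seconds) over a 10-second excerpt.
mocap_t = np.arange(1200) / 120.0                 # e.g. 120 Hz motion capture
gaze_t  = np.arange(600) / 60.0                   # e.g. 60 Hz eye tracker
mocap_x = np.random.randn(len(mocap_t), 3)        # placeholder joint positions
gaze_x  = np.random.randn(len(gaze_t), 2)         # placeholder gaze vectors

ref_t = np.arange(300) / 30.0                     # align to a 30 Hz RGB-D frame clock
mocap_aligned = align_to_reference(ref_t, mocap_t, mocap_x)
gaze_aligned  = align_to_reference(ref_t, gaze_t, gaze_x)
print(mocap_aligned.shape, gaze_aligned.shape)    # (300, 3) (300, 2)

Nearest-timestamp matching is only one plausible choice; interpolation or windowed feature extraction per modality would be equally reasonable depending on the downstream model.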
Original language: English
Title of host publication: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI'23 Companion)
Publisher: ACM
Publication status: Accepted/In press - 2023
