Abstract
Humans use a variety of nonverbal signals to communicate with their interaction partners. Previous studies have treated this channel as an essential cue for developing automatic approaches to understanding, modelling and synthesising individual behaviours in human-human interaction and human-robot interaction settings. In small-group interactions, however, a key aspect of communication is the dynamic exchange of social signals among interlocutors. This paper introduces LISI-HHI (Learning to Imitate Social Human-Human Interaction), a dataset of dyadic human interactions recorded in a wide range of communication scenarios. The dataset contains multiple modalities simultaneously captured by high-accuracy sensors, including motion capture, RGB-D cameras, eye trackers, and microphones. LISI-HHI is designed to be a benchmark for HRI and multimodal learning research on modelling intra- and interpersonal nonverbal signals in social interaction contexts and on investigating how to transfer such models to social robots.
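To illustrate the kind of record a multimodal dyadic dataset of this sort might expose, the sketch below defines a hypothetical container for one synchronised segment and a simple alignment helper. The field names, shapes, and sampling-rate handling are illustrative assumptions for this sketch, not LISI-HHI's actual data format or API.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class DyadicSegment:
    """Hypothetical container for one synchronised segment of a dyadic recording.

    Field names and array shapes are assumptions made for illustration,
    not the LISI-HHI release format.
    """
    # Motion capture: (frames, joints, 3) 3D joint positions, one array per participant
    mocap_a: np.ndarray
    mocap_b: np.ndarray
    # Eye tracking: (frames, 2) normalised gaze coordinates per participant
    gaze_a: np.ndarray
    gaze_b: np.ndarray
    # Audio: (samples,) mono waveform per participant
    audio_a: np.ndarray
    audio_b: np.ndarray
    # Label of the communication scenario the pair was recorded in
    scenario: str


def audio_span_for_mocap(segment: DyadicSegment, mocap_fps: float, audio_sr: float) -> np.ndarray:
    """Typical preprocessing step: slice the audio samples that cover the
    same time span as the motion-capture frames of participant A."""
    n_frames = segment.mocap_a.shape[0]
    n_samples = int(n_frames / mocap_fps * audio_sr)
    return segment.audio_a[:n_samples]
```

A usage pattern would be to iterate over such segments, pairing participant A's signals as model input with participant B's signals as the interpersonal target, which is the kind of behaviour-modelling task the dataset is intended to benchmark.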
| Original language | English |
| --- | --- |
| Pages (from-to) | 238-242 |
| Number of pages | 5 |
| Journal | ACM/IEEE International Conference on Human-Robot Interaction |
| DOIs | |
| Publication status | Published - 13 Mar 2023 |
| Event | 18th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2023 - Stockholm, Sweden. Duration: 13 Mar 2023 → 16 Mar 2023 |
Keywords
- human-human interaction
- multimodal dataset
- nonverbal behaviour analysis and synthesis