A Multimodal Dataset for Robot Learning to Imitate Social Human-Human Interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Original language: English
Title of host publication: Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI’23 Companion)
Publisher: ACM
Accepted/In press: 2023


Abstract

Humans use a variety of nonverbal signals to communicate their messages to their interaction partners. Previous studies have utilised this channel as an essential cue for developing automatic approaches to understanding, modelling and synthesizing individual behaviours in human-human and human-robot interaction settings. In small-group interactions, however, an essential aspect of communication is the dynamic exchange of social signals among interlocutors. This paper introduces LISI-HHI – Learning to Imitate Social Human-Human Interaction, a dataset of dyadic human interactions recorded in a wide range of communication scenarios. The dataset contains multiple modalities captured simultaneously by high-accuracy sensors, including motion capture, RGB-D cameras, eye trackers, and microphones. LISI-HHI is designed as a benchmark for HRI and multimodal learning research, for modelling intra- and interpersonal nonverbal signals in social interaction contexts, and for investigating how to transfer such models to social robots.
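Because the modalities described above (motion capture, eye tracking, RGB-D video, audio) are typically recorded at different native sampling rates, a common preprocessing step in multimodal learning is to resample the streams onto a shared clock before feeding them to a model. The sketch below is purely illustrative and not part of the LISI-HHI release; the file layout, sampling rates, and feature dimensions are assumptions chosen for the example.

```python
# Illustrative sketch only: aligning hypothetical multimodal streams to a
# common clock before training. Sampling rates and feature sizes below are
# assumptions, not properties of the LISI-HHI dataset.
import numpy as np

def resample_to_clock(timestamps, values, target_clock):
    """Linearly interpolate a stream (per-frame timestamps in seconds and
    feature vectors) onto a shared target clock."""
    values = np.asarray(values, dtype=float)
    # Interpolate each feature dimension independently.
    return np.stack(
        [np.interp(target_clock, timestamps, values[:, d])
         for d in range(values.shape[1])],
        axis=1,
    )

# Hypothetical streams recorded at different native rates.
mocap_t = np.arange(0, 10, 1 / 120.0)        # e.g. 120 Hz motion capture
mocap_x = np.random.randn(len(mocap_t), 69)  # e.g. 23 joints x 3 coordinates
gaze_t = np.arange(0, 10, 1 / 60.0)          # e.g. 60 Hz eye tracker
gaze_x = np.random.randn(len(gaze_t), 2)     # e.g. 2D gaze direction

# Shared 30 Hz clock (matching a hypothetical RGB-D frame rate).
clock = np.arange(0, 10, 1 / 30.0)
mocap_aligned = resample_to_clock(mocap_t, mocap_x, clock)
gaze_aligned = resample_to_clock(gaze_t, gaze_x, clock)

# Concatenate per-frame features for a downstream imitation-learning model.
features = np.concatenate([mocap_aligned, gaze_aligned], axis=1)
print(features.shape)  # (300, 71)
```

In practice the alignment strategy (interpolation, nearest-frame sampling, or windowed aggregation for audio) depends on the modality and the model; this sketch only shows the general idea of bringing streams onto one timeline.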
