Abstract
Detecting people who are interacting and conversing with each other is essential for equipping social robots with autonomous navigation and service capabilities in crowded social scenes. In this paper, we introduce a method for unsupervised conversational group detection in images captured from a mobile robot’s perspective. To this end, we collected a novel dataset called Robocentric Indoor Crowd Analysis (RICA). The RICA dataset comprises over 100,000 RGB, depth, and wide-angle camera images as well as LIDAR readings, recorded during a social event in which the robot navigated between participants and captured group interactions with its on-board sensors. Using the RICA dataset, we implemented an unsupervised group detection method based on agglomerative hierarchical clustering. Our results show that incorporating the depth modality and normalising the features used by the clustering algorithm improved group detection accuracy by 3% on average.
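For illustration only, the clustering step described in the abstract could look like the following sketch, which applies agglomerative hierarchical clustering to normalised per-person features (e.g. image position plus depth). This is not the authors' implementation; the feature layout, the scikit-learn API choice, and the distance threshold are assumptions.

```python
# Hypothetical sketch: group detections in one frame by agglomerative
# hierarchical clustering on normalised spatial features.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler


def detect_groups(person_features: np.ndarray, distance_threshold: float = 1.0) -> np.ndarray:
    """Cluster per-person feature vectors into conversational groups.

    person_features: array of shape (n_people, n_features), e.g. image-plane
    position plus depth for each detected person in a single frame.
    Returns one integer group label per person.
    """
    # Normalise each feature dimension so image coordinates and depth
    # contribute on a comparable scale.
    features = StandardScaler().fit_transform(person_features)

    # Agglomerative hierarchical clustering; the number of groups is not fixed
    # in advance but follows from the merge-distance threshold.
    clustering = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=distance_threshold,
        linkage="average",
    )
    return clustering.fit_predict(features)


# Toy example: five detections forming two plausible conversational groups.
detections = np.array([
    [0.9, 1.1, 2.0],
    [1.0, 1.0, 2.1],
    [1.1, 0.9, 1.9],
    [4.0, 4.2, 5.0],
    [4.1, 4.0, 5.1],
])
print(detect_groups(detections))
```

Using a distance threshold rather than a fixed number of clusters keeps the procedure unsupervised: the robot does not need to know how many conversational groups are present in a frame.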
| Original language | English |
|---|---|
| Title of host publication | The 29th IEEE International Conference on Robot & Human Interactive Communication |
| Publisher | IEEE |
| Publication status | Published - 2020 |