Bias and Performance Disparities in Reinforcement Learning for Human-Robot Interaction

Zoe Evans*, Matteo Leonetti, Martim Brandão

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

Bias has been shown to be a pervasive problem in machine learning, with severe and unanticipated consequences, for example in the form of algorithm performance disparities across social groups. In this paper, we investigate and characterise how similar issues may arise in Reinforcement Learning (RL) for Human-Robot Interaction (HRI), with the intent of averting the same ramifications. Using an assistive robotics simulation as a case study, we show that RL for HRI can perform differently across human models with different waist circumferences. We show that this behaviour can arise due to representation bias (unbalanced exposure during training), but also due to inherent task properties that may make assistance difficult depending on physical characteristics. These findings underscore the need to address bias in RL for HRI. We conclude with a discussion of potential practical solutions, their consequences and limitations, and avenues for future research.
Original language: English
Title of host publication: 2025 ACM/IEEE International Conference on Human-Robot Interaction
Publication status: Accepted/In press - 2025
