Abstract
Bias has been shown to be a pervasive problem in machine learning, with severe and unanticipated consequences, for example in the form of algorithmic performance disparities across social groups. In this paper, we investigate and characterise how similar issues may arise in Reinforcement Learning (RL) for Human-Robot Interaction (HRI), with the intent of averting the same ramifications. Using an assistive robotics simulation as a case study, we show that RL for HRI can perform differently across simulated human models with different waist circumferences. We show that this behaviour can arise due to representation bias (unbalanced exposure to different body types during training), but also due to inherent task properties that may make assistance more difficult depending on physical characteristics. These findings underscore the need to address bias in RL for HRI. We conclude with a discussion of potential practical solutions, their consequences and limitations, and avenues for future research.
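As a purely hypothetical illustration of the representation-bias mechanism summarised above, the toy sketch below over-samples some body types during a mock training run and then reports per-group outcomes. The group labels, sampling weights, and the success-rate stand-in are placeholders for illustration only, not the paper's simulation or code.

```python
import random
from collections import Counter

random.seed(0)

# Toy groups of simulated human models, bucketed by waist circumference
# (placeholder labels; not the paper's actual simulation or data).
GROUPS = ["small_waist", "medium_waist", "large_waist"]

def sample_training_group(weights=(0.70, 0.25, 0.05)):
    """Representation bias: training episodes over-sample some body types."""
    return random.choices(GROUPS, weights=weights, k=1)[0]

# Count how often each group is seen during a toy 'training' run.
N_EPISODES = 10_000
exposure = Counter(sample_training_group() for _ in range(N_EPISODES))

def toy_success_rate(group, exposure):
    """Stand-in for a learned policy: success grows with training exposure."""
    return min(0.95, exposure[group] / N_EPISODES + 0.2)

# Evaluating each group separately exposes the disparity induced
# by the unbalanced training distribution.
for group in GROUPS:
    rate = toy_success_rate(group, exposure)
    print(f"{group}: seen {exposure[group]} times, toy success rate {rate:.2f}")
```

Under these assumed sampling weights, the under-represented group ends up with a markedly lower toy success rate, which is the kind of per-group disparity the abstract refers to.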
Original language | English |
---|---|
Title of host publication | 2025 ACM/IEEE International Conference on Human-Robot Interaction |
Publication status | Accepted/In press - 2025 |