Explainability of artificial intelligence (AI) systems and autonomous agents is recognised as an important aspect of improving human-agent interaction, fostering trust and aiding adoption and acceptance. However, these benefits can only be expected if agent explanations are successful. Given the complex nature of many AI systems and of human cognition, as well as the influence of contextual factors, designing successful AI explanations is far from straightforward. While a great deal of research has investigated how to design human-centred explanations, many aspects remain unexplored. This thesis aims to contribute to the field of explainable AI by investigating several factors pertaining to both the user and the robot in human-robot interaction. We primarily focus on the influence of socio-cultural user characteristics such as gender, education, political and religious affiliation and nationality; however, we also investigate users' cognitive tendencies, their perception and acceptance of the robot, and the robot's appearance. In this thesis, we present the results of three online empirical studies: two conducted in the United Kingdom and one cross-cultural study spanning the United Kingdom and South Korea. Regarding user characteristics, we focus on explanations that represent the robot's goals, beliefs or their combination (or no new information as a baseline condition). In the case of robot appearance, we investigate whether people expect robots to refer to their mental capacities in their explanations and whether this tendency changes with an increasingly human-like appearance of the robot. Our findings indicate that various socio-cultural and cognitive factors, as well as robot acceptance and perception, affect explanation preferences, and that their effects differ between the United Kingdom and South Korea. Nevertheless, explanations combining both the robot's belief and goal performed best across the two countries.
Regarding the robot's appearance, our data indicate that robots with a more human-like appearance are expected to possess human mental capacities, and to refer to them in their explanations more often than robots with a less human-like appearance. This thesis highlights the importance of studying the factors that affect explanation preferences, be they user or robot characteristics, as well as the user's relationship to the robot.
| Date of Award | 1 Jan 2025 |
|---|---|
| Original language | English |
| Awarding Institution | |
| Supervisor | Jose Such (Supervisor), Mark Coté (Supervisor) & Michael Luck (Supervisor) |
Preferences for AI Explanations: Considering the Role of User and Robot Characteristics
Kopecka, H. (Author). 1 Jan 2025
Student thesis: Doctoral Thesis › Doctor of Philosophy