Abstract
This research focuses on establishing trust in multiagent systems where human and AI agents collaborate. We propose a computational notion of actual trust, emphasising the modelling of an agent's capacity to deliver tasks. Unlike reputation-based trust or statistical analyses of past behaviour, our approach considers the specific setting in which agents interact. We integrate non-deterministic semantics to capture the inherent uncertainties in the behaviour of a multiagent system, while stressing the importance of verifying an agent's actual capabilities. We provide a conceptual analysis of the characteristics of actual trust and highlight relevant trust verification tools. By advancing the understanding and verification of trust in collaborative systems, this research contributes to responsible and trustworthy human-AI interactions, enhancing reliability across domains.
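As a rough illustration only (this is not the paper's formalism), the Python sketch below reads actual trust in an agent for a task as the existence of a strategy that delivers the task under every resolution of the environment's non-determinism; the toy model, state names, and the `can_deliver` function are all hypothetical.

```python
# Hypothetical sketch: an agent is "actually trusted" for a task if it has a
# strategy that delivers the task no matter how the environment's
# non-determinism is resolved.

def can_deliver(state, goal_states, agent_actions, transitions, depth=10):
    """True iff the agent has a strategy from `state` that reaches a goal
    state within `depth` steps for *every* non-deterministic outcome.

    transitions[(state, action)] -> set of possible successor states
    (the set captures the environment's non-determinism).
    """
    if state in goal_states:
        return True
    if depth == 0:
        return False
    # Existential choice over the agent's own actions ...
    for action in agent_actions.get(state, ()):
        successors = transitions.get((state, action), set())
        # ... universal quantification over the non-deterministic outcomes.
        if successors and all(
            can_deliver(s, goal_states, agent_actions, transitions, depth - 1)
            for s in successors
        ):
            return True
    return False

# Tiny example: from s0 the agent's action `act` may land in s1 or s2, but
# both allow finishing the task, so the capability check succeeds.
agent_actions = {"s0": ["act"], "s1": ["finish"], "s2": ["finish"]}
transitions = {
    ("s0", "act"): {"s1", "s2"},
    ("s1", "finish"): {"done"},
    ("s2", "finish"): {"done"},
}
print(can_deliver("s0", {"done"}, agent_actions, transitions))  # True
```

The existential choice over the agent's actions combined with universal quantification over outcomes mirrors the capability-style reading of trust in the abstract: trust rests on what the agent can guarantee in this setting, not on its past record.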
| Original language | English |
| --- | --- |
| Title of host publication | The Third International Conference on Hybrid Human-Artificial Intelligence |
| Publisher | IOS Press |
| Number of pages | 12 |
| Publication status | Accepted/In press - Apr 2024 |
Keywords
- Trust
- Multiagent systems
- Human-AI interactions