Abstract
We study how trust can be established in multiagent systems where human and AI agents collaborate. We propose a computational notion of actual trust, emphasising the modelling of trust based on agents’ capacity to deliver tasks in prospect. Unlike reputation-based trust, our account considers the specific setting in which agents interact and models a forward-looking notion of trust. We provide a conceptual analysis of the characteristics of actual trust and highlight relevant trust verification tools. By advancing the understanding and verification of trust in collaborative systems, we contribute to responsible and trustworthy human-AI interactions, enhancing reliability across a range of domains.
| Original language | English |
|---|---|
| Publication status | Published - 6 May 2024 |