Actual Trust in Multiagent Systems

Michael Akintunde, Vahid Yazdanpanah, Asieh Salehi, Corina Cirstea, Mehdi Dastani, Luc Moreau

Research output: Contribution to conference › Abstract › peer-review



We study how trust can be established in multiagent systems where human and AI agents collaborate. We propose a computational notion of actual trust, emphasising the modelling of trust based on agents’ capacity to deliver tasks in prospect. Unlike reputation-based trust, which looks back at past behaviour, we consider the specific setting in which agents interact and model a forward-looking notion of trust. We provide a conceptual analysis of the characteristics of actual trust and highlight relevant trust verification tools. By advancing the understanding and verification of trust in collaborative systems, we contribute to responsible and trustworthy human-AI interactions, enhancing reliability in various domains.
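To make the contrast in the abstract concrete, here is a minimal, purely illustrative sketch (not the paper's formal model): a backward-looking reputation score averages past ratings, while a forward-looking "actual trust" check asks whether the agent can deliver a given task in the specific setting at hand. All names (`Agent`, `actual_trust`, `reputation`) are assumptions introduced for illustration.

```python
# Hypothetical toy sketch contrasting backward-looking reputation with a
# forward-looking, capability-based trust check. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capabilities: set                      # actions the agent can perform
    past_ratings: list = field(default_factory=list)

def reputation(agent: Agent) -> float:
    """Backward-looking: the average of past interaction ratings."""
    if not agent.past_ratings:
        return 0.0
    return sum(agent.past_ratings) / len(agent.past_ratings)

def actual_trust(agent: Agent, task: set, setting: set) -> bool:
    """Forward-looking: can the agent deliver the task, given the actions
    that the current setting makes available to it?"""
    return task <= (agent.capabilities & setting)

bot = Agent("bot", capabilities={"plan", "move"}, past_ratings=[5, 5, 5])

# The agent has a perfect reputation, yet in a setting where "move" is
# unavailable, the forward-looking check refuses a movement task.
print(reputation(bot))                                 # 5.0
print(actual_trust(bot, {"move"}, setting={"plan"}))   # False
print(actual_trust(bot, {"plan"}, setting={"plan"}))   # True
```

The point of the sketch is that the two notions can disagree: a high reputation says nothing about whether the task is deliverable in the present interaction context, which is what a forward-looking notion evaluates.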
Original language: English
Publication status: Published - 6 May 2024


