Formal Specification of Actual Trust in Multiagent Systems

Michael Akintunde, Vahid Yazdanpanah, Asieh Salehi, Corina Cirstea, Mehdi Dastani, Luc Moreau

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

This research focuses on establishing trust in multiagent systems where human and AI agents collaborate. We propose a computational notion of actual trust, emphasising the modelling of an agent's capacity to deliver tasks. Unlike reputation-based trust or statistical analyses of past behaviour, our approach considers the specific setting in which agents interact. We integrate non-deterministic semantics to capture inherent uncertainties within the behaviour of a multiagent system, while stressing the importance of verifying an agent's actual capabilities. We provide a conceptual analysis of the characteristics of actual trust and highlight relevant trust verification tools. By advancing the understanding and verification of trust in collaborative systems, this research contributes to responsible and trustworthy human-AI interactions, enhancing reliability across a variety of domains.
Original language: English
Title of host publication: The Third International Conference on Hybrid Human-Artificial Intelligence
Publisher: IOS Press
Number of pages: 12
Publication status: Accepted/In press - Apr 2024

Keywords

  • Trust
  • Multiagent systems
  • Human-AI interactions
