Verifiably Safe and Trusted Human-AI Systems: A Socio-technical Perspective

Michael Akintunde, Luc Moreau, Victoria Young, Asieh Salehi, Vahid Yazdanpanah, Pauline Leonard, Michael Butler

Research output: Contribution to conference › Abstract › peer-review



Replacing human decision-making with machine decision-making raises challenges around stakeholders' trust in AI systems that interact with, and keep in the loop, the human user. We refer to such systems as Human-AI Systems (HAIS) and argue that the technical safety and social trustworthiness of a HAIS are key to its widespread adoption by society. To develop a verifiably safe and trusted HAIS, it is important to understand how different stakeholders perceive an autonomous system (AS) as trusted, and how the context of application affects their perceptions. Technical approaches to meeting trust and safety concerns are widely investigated but under-used in the context of measuring users' trust in autonomous AI systems, and interdisciplinary socio-technical approaches, grounded in social science (trust) and computer science (safety), are rarely considered in HAIS investigations. This paper elaborates on the need to apply formal methods to ensure the safe behaviour of HAIS, grounded in users' real-life understanding of trust and in the analysis of trust dynamics. It puts forward the core challenges in this area and presents a research agenda on verifiably safe and trusted human-AI systems.
Original language: English
Number of pages: 6
Publication status: Published - 11 Jul 2023
Event: First International Symposium on Trustworthy Autonomous Systems (TAS'23)
Duration: 11 Jul 2023 – 12 Jul 2023




Keywords:
  • Trust
  • Human-AI Systems
  • Safety
  • Verification


