Abstract
Replacing human decision-making with machine decision-making raises challenges around stakeholders' trust in AI systems that interact with, and keep in the loop, the human user. We refer to such systems as Human-AI Systems (HAIS) and argue that the technical safety and social trustworthiness of a HAIS are key to its widespread adoption by society. To develop a verifiably safe and trusted HAIS, it is important to understand how different stakeholders come to perceive an autonomous system (AS) as trustworthy, and how the context of application shapes their perceptions. Technical approaches to addressing trust and safety concerns are widely investigated, yet under-used for measuring users' trust in autonomous AI systems. Interdisciplinary socio-technical approaches, grounded in social science (trust) and computer science (safety), receive even less attention in HAIS research. This paper elaborates on the need to apply formal methods to ensure the safe behaviour of a HAIS, grounded in users' real-life understanding of trust, and to analyse trust dynamics. It puts forward the core challenges in this area and presents a research agenda for verifiably safe and trusted human-AI systems.
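To make concrete what formal verification of safe HAIS behaviour can look like, the sketch below performs explicit-state model checking of a toy human-AI control-handover protocol. This is a minimal illustration under our own assumptions, not the paper's model: the states, transitions, and the safety invariant ("the AI never holds control while the human is out of the loop") are all hypothetical.

```python
from collections import deque

# Minimal sketch: explicit-state safety verification of a toy
# human-AI control-handover protocol. States, transitions, and the
# invariant are illustrative assumptions, not taken from the paper.

# A state is (controller, human_attentive), controller in {"human", "ai"}.
INITIAL = ("human", True)

def successors(state):
    """Enumerate the possible next states of the toy handover model."""
    controller, attentive = state
    nxt = set()
    # The human may disengage or re-engage at any time.
    nxt.add((controller, not attentive))
    # Control is handed to the AI only while the human is attentive.
    if controller == "human" and attentive:
        nxt.add(("ai", attentive))
    # The AI hands control back only to an attentive human.
    if controller == "ai" and attentive:
        nxt.add(("human", attentive))
    return nxt

def is_safe(state):
    """Safety invariant: the AI never holds control while the
    human is out of the loop (inattentive)."""
    controller, attentive = state
    return not (controller == "ai" and not attentive)

def check_safety(initial):
    """Breadth-first reachability: return a shortest counterexample
    trace to an unsafe state, or None if the invariant always holds."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        trace = frontier.popleft()
        if not is_safe(trace[-1]):
            return trace
        for succ in successors(trace[-1]):
            if succ not in visited:
                visited.add(succ)
                frontier.append(trace + [succ])
    return None

if __name__ == "__main__":
    print("counterexample:", check_safety(INITIAL))
```

On this toy model the checker finds the three-step counterexample [("human", True), ("ai", True), ("ai", False)]: nothing stops the human disengaging while the AI retains control, so the invariant fails. Surfacing such unsafe interaction sequences before deployment is the kind of guarantee the paper's agenda asks of formal methods.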
Original language | English |
---|---|
Number of pages | 6 |
DOIs | |
Publication status | Published - 11 Jul 2023 |
Event | First International Symposium on Trustworthy Autonomous Systems: TAS'23, 11 Jul 2023 → 12 Jul 2023, https://symposium.tas.ac.uk |
Conference
Conference | First International Symposium on Trustworthy Autonomous Systems: TAS'23 |
---|---|
Abbreviated title | TAS'23 |
Period | 11/07/2023 → 12/07/2023 |
Internet address | https://symposium.tas.ac.uk |
Keywords
- Trust
- Human-AI Systems
- Safety
- Verification