Dynamic Calibration of Trust and Trustworthiness in AI-Enabled Systems

Magnus Liebherr, Ellen Enkel, Effie Law, Mohammadreza Mousavi, Matteo Sammartino, Philipp Sieberg

Research output: Contribution to journal › Article › peer-review
Abstract

Trust is a multi-faceted phenomenon traditionally studied in human relations and, more recently, in human-machine interactions. In the context of AI-enabled systems, trust refers to the user's belief that, in a given scenario, the system will be helpful and safe. The system-side counterpart to trust is trustworthiness. When trust and trustworthiness are aligned with each other, trust is calibrated. Trust, trustworthiness, and calibrated trust are all dynamic phenomena, evolving with the history of user beliefs, systems, and their interaction.
In this paper, we review the basic concepts of trust, trustworthiness, and calibrated trust and provide definitions for them. We discuss the metrics used for them in the literature, as well as the causes that may affect their dynamics, particularly in the context of AI-enabled systems. We then discuss the implications of these concepts for various types of stakeholders and suggest challenges for future research.
Original language: English
Journal: International Journal on Software Tools for Technology Transfer
Publication status: Accepted/In press - 14 Mar 2025

Keywords

  • Trust
  • Trustworthiness
  • Calibrated Trust
  • AI-Enabled Systems