Abstract
Trust is a multi-faceted phenomenon traditionally studied in human relations and, more recently, in human-machine interactions. In the context of AI-enabled systems, trust refers to the user's belief that, in a given scenario, the system will be helpful and safe. The system-side counterpart of trust is trustworthiness. When trust and trustworthiness are aligned with each other, trust is said to be calibrated. Trust, trustworthiness, and calibrated trust are all dynamic phenomena, evolving as user beliefs, systems, and their interactions themselves evolve.
In this paper, we review the basic concepts of trust, trustworthiness, and calibrated trust and provide definitions for them. We survey the metrics used in the literature to measure them and the factors that may affect their dynamics, particularly in the context of AI-enabled systems. We then discuss the implications of these concepts for various types of stakeholders and suggest challenges for future research.
| Original language | English |
|---|---|
| Journal | International Journal on Software Tools for Technology Transfer |
| Publication status | Accepted/In press - 14 Mar 2025 |
Keywords
- Trust
- Trustworthiness
- Calibrated Trust
- AI-Enabled Systems