Ensuring Artificial Intelligence is Safe and Trustworthy: The Need for Participatory Auditing

Patrizia Di Campli San Vito*, Simone Stumpf, Cari Hyde-Vaamonde, Gefion Thuermer

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

Artificial intelligence (AI) is increasingly being used in many applications, yet governance approaches for these systems and applications lag behind. Recent regulations, such as the EU AI Act 2024, highlight the need for regular assessment of AI systems throughout their design and development lifecycle. In this context, auditing is critical to developing responsible AI systems, yet it has typically been performed only by AI experts. In our work, we conduct fundamental research to design and develop auditing workbenches and methodologies for predictive and generative AI systems that are usable by stakeholders without an AI background, such as decision subjects, domain experts, or regulators. We describe our project to develop AI auditing workbenches and methodologies using co-design approaches, our initial findings, and the potential impacts of our work. We would like to share our experiences with the other workshop participants and to discuss potential avenues for furthering the governance of AI systems.
Original language: English
Title of host publication: 2025 Conference on Human Factors in Computing Systems
Subtitle of host publication: Sociotechnical AI Governance: Opportunities and Challenges for HCI
Publication status: Published - 3 Apr 2025
