Abstract
Artificial intelligence (AI) is increasingly being used in many applications, yet governance approaches for these systems are lagging behind. Recent regulations, such as the 2024 EU AI Act, highlight the need for regular assessment of AI systems across their design and development lifecycle. In this context, auditing is critical to developing responsible AI systems, yet it has typically been performed only by AI experts. In our work, we conduct fundamental research to design and develop auditing workbenches and methodologies for predictive and generative AI systems that are usable by stakeholders without an AI background, such as decision subjects, domain experts, or regulators. We describe our project to develop AI auditing workbenches and methodologies using co-design approaches, present initial findings, and outline the potential impacts of our work. We would like to share our experiences with the other workshop participants and discuss potential avenues for furthering the governance of AI systems.
| Original language | English |
|---|---|
| Title of host publication | 2025 Conference on Human Factors in Computing Systems |
| Subtitle of host publication | Sociotechnical AI Governance: Opportunities and Challenges for HCI |
| Publication status | Published - 3 Apr 2025 |
Title: 'Ensuring Artificial Intelligence is Safe and Trustworthy: The Need for Participatory Auditing'

Projects (1 active)
PHAWM: Participatory Harm Auditing Workbenches and Methodologies (PHAWM)
Simperl, E. (Primary Investigator), Black, E. (Co-Investigator), Hunter, D. (Co-Investigator) & Quercia, D. (Co-Investigator)
EPSRC Engineering and Physical Sciences Research Council
1/05/2024 → 31/03/2028
Project: Research