Abstract
The responsibility for making clinical diagnoses and treatment decisions falls on clinicians regardless of whether an AI system is used. Ensuring trust in such systems is challenging given the fast-paced development of AI technologies. In this paper, we present a methodology for designing appropriate auditing strategies and mechanisms across the life cycle of AI-based clinical decision support systems. The methodology is based on MLOps and responsible AI design principles. To ensure clinicians are involved in the MLOps process, we propose a Medical-MLOps process that includes clinicians in the design and deployment of an auditing extension for the MLighter tool. MLighter is designed to provide clinicians with the information they need to understand the system's limitations and make informed decisions about its outcomes; it is holistic, interpretable, and easy to use. The design decisions and requirements are derived from interviews with 20 clinicians in the UK and US. This information, along with input from developers and QA testers, enables us to define a set of auditing requirements. This work outlines the steps we followed during requirement extraction, the architecture we designed to adapt the tool, and the various workflows we established for different user types to ensure a holistic experience.
| Original language | English |
| --- | --- |
| Title of host publication | 36th International Conference on Testing Software and Systems |
| Publication status | Accepted/In press - 13 Oct 2024 |