Coping with AI errors with provable guarantees

Ivan Y. Tyukin, Tatiana Tyukina, Daniël P. Van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, Penelope Allison

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

AI errors pose a significant challenge and hinder real-world applications. This work introduces a novel approach to coping with AI errors using weakly supervised error correctors that guarantee a specified level of error reduction. Our correctors have low computational cost and can be used to decide whether to abstain from making an unsafe classification. We provide new upper and lower bounds on the probability of errors in the corrected system. In contrast to existing works, these bounds are distribution-agnostic, non-asymptotic, and can be efficiently computed using only the corrector training data. They can also be used in settings with concept drift, where the observed frequencies of the individual classes vary. The correctors can easily be updated, removed, or replaced in response to changes in the within-class distributions without retraining the underlying classifier. The application of the approach is illustrated with two relevant and challenging tasks: (i) an image classification problem with scarce training data, and (ii) moderating the responses of large language models without retraining or otherwise fine-tuning them.
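To make the idea concrete, the sketch below gives one possible reading of such a corrector, not the paper's exact construction: a low-cost linear discriminant fitted to a small, weakly labelled set of classifier decisions flagged as correct or erroneous, which is then used to abstain from decisions scored as unsafe. The accompanying bound is a generic distribution-agnostic, non-asymptotic Hoeffding-style estimate computed from a held-out sample of flagged decisions; the paper derives its own bounds, which can be computed from the corrector training data itself. All names here (`SimpleCorrector`, `should_abstain`, `hoeffding_upper_bound`) are illustrative assumptions.

```python
import numpy as np


class SimpleCorrector:
    """Illustrative weakly supervised error corrector (a sketch, not the
    paper's exact method).

    A Fisher-style linear discriminant is fitted to feature vectors of
    classifier decisions flagged as correct (0) or erroneous (1). Decisions
    whose discriminant score exceeds a threshold are treated as unsafe, and
    the system abstains from them.
    """

    def __init__(self, threshold: float = 0.0):
        self.threshold = threshold
        self.w = None
        self.b = None

    def fit(self, features: np.ndarray, is_error: np.ndarray) -> None:
        # Class means of the 'correct' and 'error' samples.
        mu_ok = features[is_error == 0].mean(axis=0)
        mu_err = features[is_error == 1].mean(axis=0)
        # Pooled covariance with a small ridge term for numerical stability.
        cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
        self.w = np.linalg.solve(cov, mu_err - mu_ok)
        self.b = -0.5 * self.w @ (mu_err + mu_ok)

    def score(self, features: np.ndarray) -> np.ndarray:
        # Larger scores indicate a decision more likely to be an error.
        return features @ self.w + self.b

    def should_abstain(self, features: np.ndarray) -> np.ndarray:
        return self.score(features) > self.threshold


def hoeffding_upper_bound(n_missed: int, n_flagged: int, delta: float = 0.05) -> float:
    """Generic distribution-agnostic, non-asymptotic upper bound (holding with
    probability at least 1 - delta) on the rate at which errors slip past the
    corrector, estimated from a held-out set of flagged decisions.

    This is only a stand-in: the paper's bounds differ and can be computed
    from the corrector training data alone.
    """
    empirical_rate = n_missed / n_flagged
    return empirical_rate + np.sqrt(np.log(1.0 / delta) / (2.0 * n_flagged))
```

Because such a corrector is a separate, lightweight model sitting on top of the classifier's feature representation, it can be retrained, replaced, or removed without touching the underlying classifier, which matches the modularity described in the abstract.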
Original language: English
Article number: 120856
Pages (from-to): 120856
Journal: Information Sciences
Volume: 678
Early online date: 8 Jun 2024
DOIs
Publication status: Published - 1 Sept 2024

