Abstract
This letter critically examines the recent article by Infante et al. assessing the utility of large language models (LLMs) such as GPT-4, Perplexity, and Bard in identifying urgent findings in emergency radiology reports. While the potential of LLMs for generating labels for computer vision tasks is acknowledged, concerns are raised about the ethical implications of using patient data without explicit approval, highlighting the need for stringent data protection measures under the GDPR.
| Original language | English |
|---|---|
| Journal | Clinical Radiology |
| Publication status | Accepted/In press - 13 Mar 2024 |
Keywords
- cs.CV