TY - JOUR
T1 - Clinical benefit of AI-assisted lung ultrasound in a resource-limited intensive care unit
AU - Nhat, Phung Tran Huy
AU - Van Hao, Nguyen
AU - Tho, Phan Vinh
AU - Kerdegari, Hamideh
AU - Pisani, Luigi
AU - Thu, Le Ngoc Minh
AU - Phuong, Le Thanh
AU - Duong, Ha Thi Hai
AU - Thuy, Duong Bich
AU - McBride, Angela
AU - Xochicale, Miguel
AU - Schultz, Marcus
AU - Razavi, Reza
AU - King, Andrew
AU - Thwaites, Louise
AU - Van Vinh Chau, Nguyen
AU - Yacoub, Sophie
AU - Thao, Dang Phuong
AU - Kien, Dang Trung
AU - Thy, Doan Bui Xuan
AU - Trinh, Dong Huu Khanh
AU - Duc, Du Hong
AU - Geskus, Ronald
AU - Hai, Ho Bich
AU - Chanh, Ho Quang
AU - Van Hien, Ho
AU - Trieu, Huynh Trung
AU - Kestelyn, Evelyne
AU - Yen, Lam Minh
AU - Van Khoa, Le Dinh
AU - Khanh, Le Thuy Thuy
AU - Tran, Luu Hoai Bao
AU - An, Luu Phuoc
AU - Vuong, Nguyen Lam
AU - Huy, Nguyen Quang
AU - Quyen, Nguyen Than Ha
AU - Ngoc, Nguyen Thanh
AU - Giang, Nguyen Thi
AU - Trinh, Nguyen Thi Diem
AU - Le Thanh, Nguyen Thi
AU - Dung, Nguyen Thi Phuong
AU - Thao, Nguyen Thi Phuong
AU - Van, Ninh Thi Thanh
AU - Kieu, Pham Tieu
AU - Khanh, Phan Nguyen Quoc
AU - Lam, Phung Khanh
AU - Thwaites, Guy
AU - Duc, Tran Minh
AU - Hung, Trinh Manh
AU - Turner, Hugo
AU - Van Nuil, Jennifer Ilo
AU - Hoang, Vo Tan
AU - Huyen, Vu Ngo Thanh
AU - Tam, Cao Thi
AU - Nghia, Ho Dang Trung
AU - Chau, Le Buu
AU - Toan, Le Mau
AU - Thao, Le Thi Mai
AU - Tai, Luong Thi Hue
AU - Phu, Nguyen Hoan
AU - Viet, Nguyen Quoc
AU - Dung, Nguyen Thanh
AU - Nguyen, Nguyen Thanh
AU - Phong, Nguyen Thanh
AU - Anh, Nguyen Thi Kim
AU - Van Thanh Duoc, Nguyen
AU - Oanh, Pham Kieu Nguyet
AU - Van, Phan Thi Hong
AU - Qui, Phan Tu
AU - Thao, Truong Thi Phuong
AU - Ali, Natasha
AU - Clifton, David
AU - English, Mike
AU - Hagenah, Jannis
AU - Lu, Ping
AU - McKnight, Jacob
AU - Paton, Chris
AU - Zhu, Tingting
AU - Georgiou, Pantelis
AU - Perez, Bernard Hernandez
AU - Hill-Cawthorne, Kerri
AU - Holmes, Alison
AU - Karolcik, Stefan
AU - Ming, Damien
AU - Moser, Nicolas
AU - Manzano, Jesus Rodriguez
AU - Canas, Liane
AU - Modat, Marc
AU - Karlen, Walter
AU - Denehy, Linda
AU - Rollinson, Thomas
AU - Gomez, Alberto
N1 - Funding Information:
This work was supported by the Wellcome Trust under grant 217650/Z/19/Z.
Acknowledgements:
We thank all the patients, nurses and clinicians who generously donated their time during the study. Dang Phuong Thao (1), Dang Trung Kien (1), Doan Bui Xuan Thy (1), Dong Huu Khanh Trinh (1,5), Du Hong Duc (1), Ronald Geskus (1), Ho Bich Hai (1), Ho Quang Chanh (1), Ho Van Hien (1), Huynh Trung Trieu (1), Evelyne Kestelyn (1), Lam Minh Yen (1), Le Dinh Van Khoa (1), Le Thanh Phuong (1), Le Thuy Thuy Khanh (1), Luu Hoai Bao Tran (1), Luu Phuoc An (1), Angela McBride (1), Nguyen Lam Vuong (1), Nguyen Quang Huy (1), Nguyen Than Ha Quyen (1), Nguyen Thanh Ngoc (1), Nguyen Thi Giang (1), Nguyen Thi Diem Trinh (1), Nguyen Thi Le Thanh (1), Nguyen Thi Phuong Dung (1), Nguyen Thi Phuong Thao (1), Ninh Thi Thanh Van (1), Pham Tieu Kieu (1), Phan Nguyen Quoc Khanh (1), Phung Khanh Lam (1), Phung Tran Huy Nhat (1,5), Guy Thwaites (1,3), Louise Thwaites (1,3), Tran Minh Duc (1), Trinh Manh Hung (1), Hugo Turner (1), Jennifer Ilo Van Nuil (1), Vo Tan Hoang (1), Vu Ngo Thanh Huyen (1), Sophie Yacoub (1,3), Cao Thi Tam (2), Duong Bich Thuy (2), Ha Thi Hai Duong (2), Ho Dang Trung Nghia (2), Le Buu Chau (2), Le Mau Toan (2), Le Ngoc Minh Thu (2), Le Thi Mai Thao (2), Luong Thi Hue Tai (2), Nguyen Hoan Phu (2), Nguyen Quoc Viet (2), Nguyen Thanh Dung (2), Nguyen Thanh Nguyen (2), Nguyen Thanh Phong (2), Nguyen Thi Kim Anh (2), Nguyen Van Hao (2), Nguyen Van Thanh Duoc (2), Pham Kieu Nguyet Oanh (2), Phan Thi Hong Van (2), Phan Tu Qui (2), Phan Vinh Tho (2), Truong Thi Phuong Thao (2), Natasha Ali (3), David Clifton (3), Mike English (3), Jannis Hagenah (3), Ping Lu (3), Jacob McKnight (3), Chris Paton (3), Tingting Zhu (3), Pantelis Georgiou (4), Bernard Hernandez Perez (4), Kerri Hill-Cawthorne (4), Alison Holmes (4), Stefan Karolcik (4), Damien Ming (4), Nicolas Moser (4), Jesus Rodriguez Manzano (4), Liane Canas (5), Alberto Gomez (5), Hamideh Kerdegari (5), Andrew King (5), Marc Modat (5), Reza Razavi (5), Miguel Xochicale (5), Walter Karlen (6), Linda Denehy (7), Thomas Rollinson (7), Luigi Pisani (8), Marcus Schultz (8). Affiliations: (1) Oxford University Clinical Research Unit; (2) Hospital for Tropical Diseases, Ho Chi Minh City; (3) University of Oxford; (4) Imperial College London; (5) King’s College London; (6) University of Ulm; (7) The University of Melbourne; (8) Mahidol Oxford Tropical Medicine Research Unit.
Publisher Copyright:
© 2023, The Author(s).
PY - 2023/7/1
AB - Background: Interpreting point-of-care lung ultrasound (LUS) images from intensive care unit (ICU) patients can be challenging, especially in low- and middle-income countries (LMICs), where training opportunities are limited. Despite recent advances in the use of artificial intelligence (AI) to automate many ultrasound imaging analysis tasks, no AI-enabled LUS solutions have been proven to be clinically useful in ICUs, and specifically in LMICs. We therefore developed an AI solution that assists LUS practitioners and assessed its usefulness in a low-resource ICU. Methods: This was a three-phase prospective study. In the first phase, the performance of four different clinical user groups in interpreting LUS clips was assessed. In the second phase, the performance of 57 non-expert clinicians with and without the aid of a bespoke AI tool for LUS interpretation was assessed on retrospective offline clips. In the third phase, we conducted a prospective study in the ICU in which 14 clinicians were asked to carry out LUS examinations on 7 patients with and without our AI tool, and we interviewed the clinicians about the usability of the AI tool. Results: The average accuracy of LUS interpretation was 68.7% [95% CI 66.8–70.7%] for beginners, compared with 72.2% [95% CI 70.0–75.6%] for intermediate and 73.4% [95% CI 62.2–87.8%] for advanced users. Experts had an average accuracy of 95.0% [95% CI 88.2–100.0%], significantly better than beginner, intermediate and advanced users (p < 0.001). When supported by our AI tool for interpreting retrospectively acquired clips, the non-expert clinicians improved their performance from an average of 68.9% [95% CI 65.6–73.9%] to 82.9% [95% CI 79.1–86.7%] (p < 0.001). In prospective real-time testing, non-expert clinicians improved their baseline performance from 68.1% [95% CI 57.9–78.2%] to 93.4% [95% CI 89.0–97.8%] (p < 0.001) when using our AI tool. The median time to interpret clips improved from 12.1 s (IQR 8.5–20.6) to 5.0 s (IQR 3.5–8.8) (p < 0.001), and clinicians’ median confidence level improved from 3 out of 4 to 4 out of 4 when using our AI tool. Conclusions: AI-assisted LUS can help non-expert clinicians in an LMIC ICU interpret LUS features more accurately, more quickly and more confidently.
UR - http://www.scopus.com/inward/record.url?scp=85164150325&partnerID=8YFLogxK
DO - 10.1186/s13054-023-04548-w
M3 - Article
SN - 1364-8535
VL - 27
JO - Critical Care
JF - Critical Care
IS - 1
M1 - 257
ER -