TY - CHAP
T1 - Automatic detection of B-lines in lung ultrasound videos from severe dengue patients
AU - Kerdegari, Hamideh
AU - Nhat, Phung Tran Huy
AU - McBride, Angela
AU - Razavi, Reza
AU - Hao, Nguyen Van
AU - Thwaites, Louise
AU - Yacoub, Sophie
AU - Gomez, Alberto
N1 - Funding Information:
This work was supported by the Wellcome Trust UK (110179/Z/15/Z, 203905/Z/16/Z). H. Kerdegari, N. Phung, R. Razavi and A. Gomez also acknowledge financial support from the Department of Health via the National Institute for Health Research (NIHR) comprehensive Biomedical Research Centre award to Guy’s and St Thomas’ NHS Foundation Trust in partnership with King’s College London and King’s College Hospital NHS Foundation Trust.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/4/13
Y1 - 2021/4/13
N2 - Lung ultrasound (LUS) imaging is used to assess lung abnormalities, including the presence of B-line artefacts due to fluid leakage into the lungs caused by a variety of diseases. However, manual detection of these artefacts is challenging. In this paper, we propose a novel methodology to automatically detect and localize B-lines in LUS videos using deep neural networks trained with weak labels. To this end, we combine a convolutional neural network (CNN) with a long short-term memory (LSTM) network and a temporal attention mechanism. Four different models are compared using data from 60 patients. Results show that our best model can determine whether one-second clips contain B-lines or not with an F1 score of 0.81, and extracts a representative frame with B-lines with an accuracy of 87.5%.
AB - Lung ultrasound (LUS) imaging is used to assess lung abnormalities, including the presence of B-line artefacts due to fluid leakage into the lungs caused by a variety of diseases. However, manual detection of these artefacts is challenging. In this paper, we propose a novel methodology to automatically detect and localize B-lines in LUS videos using deep neural networks trained with weak labels. To this end, we combine a convolutional neural network (CNN) with a long short-term memory (LSTM) network and a temporal attention mechanism. Four different models are compared using data from 60 patients. Results show that our best model can determine whether one-second clips contain B-lines or not with an F1 score of 0.81, and extracts a representative frame with B-lines with an accuracy of 87.5%.
KW - Classification
KW - Lung ultrasound (LUS)
KW - Video analysis
UR - http://www.scopus.com/inward/record.url?scp=85107199973&partnerID=8YFLogxK
U2 - 10.1109/ISBI48211.2021.9434006
DO - 10.1109/ISBI48211.2021.9434006
M3 - Conference paper
AN - SCOPUS:85107199973
T3 - Proceedings - International Symposium on Biomedical Imaging
SP - 989
EP - 993
BT - 2021 IEEE 18th International Symposium on Biomedical Imaging, ISBI 2021
PB - IEEE Computer Society
T2 - 18th IEEE International Symposium on Biomedical Imaging, ISBI 2021
Y2 - 13 April 2021 through 16 April 2021
ER -