B-line Detection and Localization in Lung Ultrasound Videos Using Spatiotemporal Attention

Hamideh Kerdegari*, Phung Tran Huy Nhat, Angela McBride, Luigi Pisani, Nguyen Van Hao, Duong Bich Thuy, Reza Razavi, Louise Thwaites, Sophie Yacoub, Alberto Gomez Herrero

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)


The presence of B-line artefacts, the main artefact reflecting lung abnormalities in dengue patients, is often assessed using lung ultrasound (LUS) imaging. Inspired by human visual attention, which enables us to process videos efficiently by attending to where and when it is required, we propose a spatiotemporal attention mechanism for B-line detection in LUS videos. The spatial attention allows the model to focus on the most task-relevant parts of the image by learning a saliency map. The temporal attention generates an attention score for each attended frame to identify the most relevant frames in an input video. Our model not only identifies videos in which B-lines appear, but also localizes, within those videos, B-line-related features both spatially and temporally, despite being trained in a weakly supervised manner. We evaluate our approach on a LUS video dataset collected from severe dengue patients in a resource-limited hospital, assessing the B-line detection rate and the model’s ability to localize discriminative B-line regions spatially and B-line frames temporally. Experimental results demonstrate the efficacy of our approach, classifying B-line videos with an F1 score of up to 83.2% and localizing the most salient B-line regions spatially and temporally with a correlation coefficient of 0.67 and an IoU of 69.7%, respectively.
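The abstract's two-stage mechanism can be illustrated with a minimal sketch: spatial attention normalizes scores over the locations of each frame's feature map to form a saliency map, and temporal attention assigns one score per attended frame to weight the video-level descriptor. This is not the authors' architecture; the projection vectors, shapes, and function names below are hypothetical, stand-in choices for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(features, w_s):
    # features: (T, H, W, C) per-frame feature maps; w_s: (C,) scoring vector.
    # Scores every spatial location, normalizes over H*W to get a per-frame
    # saliency map, and pools the feature map weighted by that map.
    T, H, W, C = features.shape
    scores = features.reshape(T, H * W, C) @ w_s            # (T, H*W)
    saliency = softmax(scores, axis=1).reshape(T, H, W)     # sums to 1 per frame
    attended = (features * saliency[..., None]).sum(axis=(1, 2))  # (T, C)
    return attended, saliency

def temporal_attention(frame_feats, w_t):
    # frame_feats: (T, C) attended frame descriptors; w_t: (C,) scoring vector.
    # One attention score per frame; the video descriptor is their weighted
    # sum, and the scores themselves localize the most relevant frames.
    alpha = softmax(frame_feats @ w_t)                      # (T,)
    video_feat = alpha @ frame_feats                        # (C,)
    return video_feat, alpha

# Toy run: 8 frames, a 4x4 spatial grid, 16 feature channels.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4, 16))
frame_feats, saliency = spatial_attention(feats, rng.standard_normal(16))
video_feat, alpha = temporal_attention(frame_feats, rng.standard_normal(16))
```

In a weakly supervised setup like the one described, only a video-level B-line label supervises the classifier on `video_feat`; the learned `saliency` maps and `alpha` scores are then read off at inference time as the spatial and temporal localizations.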

Original language: English
Article number: 11697
Journal: Applied Sciences (Switzerland)
Issue number: 24
Early online date: 9 Dec 2021
Publication status: E-pub ahead of print - 9 Dec 2021
