King's College London

Research portal

A Case Study on Computer-Aided Diagnosis of Nonerosive Reflux Disease Using Deep Learning Techniques

Research output: Contribution to journal › Article › peer-review

Junkai Liao, Hak-Keung Lam, Guangyu Jia, Shraddha Gulati, Julius Bernth, Dmytro Poliyivets, Yujia Xu, Hongbin Liu, Bu Hayee

Original language: English
Pages (from-to): 149-166
Number of pages: 18
Journal: NEUROCOMPUTING
Volume: 445
Early online date: 4 Mar 2021
DOIs
E-pub ahead of print: 4 Mar 2021
Published: 20 Jul 2021

Bibliographical note

Funding Information: This work was partly supported by King's College London and China Scholarship Council. Publisher Copyright: © 2021 Elsevier B.V. All rights reserved.

Abstract

This paper develops deep-learning-based algorithms to automatically diagnose nonerosive reflux disease (NERD) from near-focus narrow-band imaging (NF-NBI) images collected by clinicians at King's College Hospital. To diagnose this disease, we propose a deep learning classification system that distinguishes NF-NBI images captured in the esophagus of healthy people from those of NERD patients, a binary classification with two classes: non-NERD and NERD. To achieve an effective and accurate classification, we first propose an algorithm that automatically extracts the region of interest (ROI) from the NF-NBI images and then generates image patches through a patch-generating algorithm. We then train six representative state-of-the-art deep convolutional neural network (CNN) models (ResNet18, ResNet50, ResNet101, DenseNet201, InceptionV3, and Inception-ResNetV2) to extract robust hierarchical features from these patches and classify the patches based on those features. Finally, to determine the classification result for each subject, majority voting is applied to the predictions for that subject's generated NF-NBI image patches. We verify our classification system by ten-fold cross-validation on the clinical dataset, performing both subject-dependent and subject-independent experiments. In both experiments, we compare the classification performance of the ROI-based CNN models (the CNN models combined with our proposed ROI-based algorithms) with that of the plain CNN models. We also compare the ROI-based CNN models with a local binary pattern (LBP)-based support vector machine (SVM) classifier, a histograms of oriented gradients (HOG)-based SVM classifier, and a scale-invariant feature transform (SIFT)-based SVM classifier.
The results show that the ROI-based CNN models achieve a higher average ten-fold test accuracy at the image level than the plain CNN models in both the subject-dependent experiment (29.0% improvement) and the subject-independent experiment (10.5% improvement), which demonstrates the effectiveness of our proposed ROI-based algorithms. The ROI-based CNN models also achieve a higher average ten-fold test accuracy at the image level than the SVM classifiers in the subject-dependent experiment (20.5% improvement) and the subject-independent experiment (14.0% improvement), demonstrating that the ROI-based CNN models outperform the SVM classifiers. Among the ROI-based CNN models, the ROI-based InceptionV3 model achieves the best classification performance in the subject-dependent experiment, while the ROI-based Inception-ResNetV2 model achieves the best classification performance in the subject-independent experiment, which suggests that the ROI-based Inception-ResNetV2 model has better generalization ability than the ROI-based InceptionV3 model. Moreover, the highest mean ten-fold test accuracy at the subject level (77.8%), obtained with either the ROI-based InceptionV3 model or the ROI-based Inception-ResNetV2 model, demonstrates the practicality of our proposed classification system for assisting the clinical diagnosis of NERD.
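The subject-level majority voting described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the `majority_vote` helper, the string labels, and the tie-handling behavior are all assumptions.

```python
from collections import Counter

def majority_vote(patch_predictions):
    """Aggregate binary patch-level predictions ("NERD" / "non-NERD")
    into a single subject-level label by majority voting."""
    counts = Counter(patch_predictions)
    # The label predicted for the most patches wins. Ties are broken
    # arbitrarily here; a real system would need an explicit rule.
    return counts.most_common(1)[0][0]

# Example: five patches from one subject, three classified as NERD.
print(majority_vote(["NERD", "non-NERD", "NERD", "non-NERD", "NERD"]))  # prints NERD
```

In the paper's pipeline, the inputs to this step would be the per-patch class predictions produced by an ROI-based CNN model, and the output is the subject-level diagnosis used to compute the reported subject-level accuracy.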

