TY - JOUR
T1 - A clinically interpretable convolutional neural network for the real-time prediction of early squamous cell cancer of the esophagus
T2 - comparing diagnostic performance with a panel of expert European and Asian endoscopists
AU - Everson, Martin A.
AU - Garcia-Peraza-Herrera, Luis
AU - Wang, Hsiu Po
AU - Lee, Ching Tai
AU - Chung, Chen Shuan
AU - Hsieh, Ping Hsin
AU - Chen, Chien Chuan
AU - Tseng, Cheng Hao
AU - Hsu, Ming Hung
AU - Vercauteren, Tom
AU - Ourselin, Sebastien
AU - Kashin, Sergey
AU - Bisschops, Raf
AU - Pech, Oliver
AU - Lovat, Laurence
AU - Wang, Wen Lun
AU - Haidry, Rehan J.
N1 - Funding Information:
This work was supported by the Wellcome Trust (WT101957, 203145Z/16/Z), EPSRC (NS/A000027/1, NS/A000050/1), NIHR BRC UCLH/UCL High Impact Initiative, and a UCL EPSRC CDT Scholarship Award (EP/L016478/1). The authors would like to thank NVIDIA Corporation for the donated GeForce GTX TITAN X GPU. The authors would like to thank Dr David Graham, Dr Vinay Seghal, Dr Mohamed Hussein, Dr Yezen Sammaraiee, and Dr Stephen Mitrasinovic for their contributions to this study and manuscript.
Funding Information:
DISCLOSURE: Dr Vercauteren is co-founder and shareholder of Hypervision Surgical Ltd, London, UK. He is also a shareholder of Mauna Kea Technologies, Paris, France. Dr Bisschops is supported by a grant of Research Foundation Flanders (FWO). Dr Pech has received speaker honoraria from Fujifilm, Medtronic, Cook, and Boston Scientific. Dr Lovat has undertaken consultancy and has a minor shareholding in Odin Vision. Dr Haidry has received educational grants to support research from Medtronic, Cook Endoscopy (fellowship support), Pentax Europe, Pentax UK, C2 Therapeutics, Beamline Diagnostics, and Fractyl. All other authors disclosed no financial relationships.
Publisher Copyright:
© 2021 American Society for Gastrointestinal Endoscopy
Copyright:
Copyright 2021 Elsevier B.V., All rights reserved.
PY - 2021/8
Y1 - 2021/8
N2 - Background and Aims: Intrapapillary capillary loops (IPCLs) are microvascular structures that correlate with the invasion depth of early squamous cell neoplasia and allow accurate prediction of histology. Artificial intelligence may improve human recognition of IPCL patterns and prediction of histology, allowing prompt access to endoscopic therapy for early squamous cell neoplasia where appropriate. Methods: One hundred fifteen patients were recruited at 2 academic Taiwanese hospitals. Magnification endoscopy narrow-band imaging videos of squamous mucosa were labeled as dysplastic or normal according to their histology, and IPCL patterns were classified by consensus of 3 experienced clinicians. A convolutional neural network (CNN) was trained to classify IPCLs, using 67,742 high-quality magnification endoscopy narrow-band images by 5-fold cross-validation. Performance measures were calculated to give an average F1 score, accuracy, sensitivity, and specificity. A panel of 5 Asian and 4 European experts predicted the histology of a random selection of 158 images using the Japanese Endoscopic Society IPCL classification; accuracy, sensitivity, specificity, and positive and negative predictive values were calculated. Results: Expert European Union (EU) and Asian endoscopists attained F1 scores (a measure of binary classification accuracy) of 97.0% and 98%, respectively. Sensitivity of the EU and Asian clinicians was 97% and 98%, and accuracy was 96.9% and 97.1%, respectively. The CNN attained an average F1 score of 94%, sensitivity of 93.7%, and accuracy of 91.7%. Our CNN operates at video rate and generates class activation maps that can be used to visually validate CNN predictions. Conclusions: We report a clinically interpretable CNN developed to predict histology based on IPCL patterns, in real time, using the largest reported dataset of images for this purpose. Our CNN achieved diagnostic performance comparable with an expert panel of endoscopists.
AB - Background and Aims: Intrapapillary capillary loops (IPCLs) are microvascular structures that correlate with the invasion depth of early squamous cell neoplasia and allow accurate prediction of histology. Artificial intelligence may improve human recognition of IPCL patterns and prediction of histology, allowing prompt access to endoscopic therapy for early squamous cell neoplasia where appropriate. Methods: One hundred fifteen patients were recruited at 2 academic Taiwanese hospitals. Magnification endoscopy narrow-band imaging videos of squamous mucosa were labeled as dysplastic or normal according to their histology, and IPCL patterns were classified by consensus of 3 experienced clinicians. A convolutional neural network (CNN) was trained to classify IPCLs, using 67,742 high-quality magnification endoscopy narrow-band images by 5-fold cross-validation. Performance measures were calculated to give an average F1 score, accuracy, sensitivity, and specificity. A panel of 5 Asian and 4 European experts predicted the histology of a random selection of 158 images using the Japanese Endoscopic Society IPCL classification; accuracy, sensitivity, specificity, and positive and negative predictive values were calculated. Results: Expert European Union (EU) and Asian endoscopists attained F1 scores (a measure of binary classification accuracy) of 97.0% and 98%, respectively. Sensitivity of the EU and Asian clinicians was 97% and 98%, and accuracy was 96.9% and 97.1%, respectively. The CNN attained an average F1 score of 94%, sensitivity of 93.7%, and accuracy of 91.7%. Our CNN operates at video rate and generates class activation maps that can be used to visually validate CNN predictions. Conclusions: We report a clinically interpretable CNN developed to predict histology based on IPCL patterns, in real time, using the largest reported dataset of images for this purpose. Our CNN achieved diagnostic performance comparable with an expert panel of endoscopists.
UR - http://www.scopus.com/inward/record.url?scp=85104945700&partnerID=8YFLogxK
U2 - 10.1016/j.gie.2021.01.043
DO - 10.1016/j.gie.2021.01.043
M3 - Article
C2 - 33549586
AN - SCOPUS:85104945700
SN - 0016-5107
VL - 94
SP - 273
EP - 281
JO - Gastrointestinal Endoscopy
JF - Gastrointestinal Endoscopy
IS - 2
ER -