Abstract
Purpose: To train an explainable deep learning model for patient re-identification in chest radiograph datasets and assess changes in model-perceived patient identity as a marker for emerging radiological abnormalities in longitudinal image sets.
Materials and Methods: This retrospective study used a set of 1,207,350 frontal chest radiographs and free-text reports from 259,152 patients obtained from six hospitals between 2006 and 2019, with validation on the public ChestX-ray14, CheXpert, and MIMIC-CXR datasets. A deep learning model was trained for patient re-identification and assessed on patient identity confirmation, retrieval of patient images from a database based on a query image, and radiological abnormality prediction in longitudinal image sets. The learned representation was incorporated into a generative adversarial network, enabling visual explanations of the relevant features. Performance was evaluated with sensitivity, specificity, F1 score, Precision@1, R-precision, and area under the receiver operating characteristic curve (AUC) for normal and abnormal prediction.
Results: Patient re-identification was achieved with an F1 score of 0.996±0.001 on the internal test set (N=26,152 patients) and scores of 0.947-0.991 on the external test data. Database retrieval achieved Precision@1 of 0.976±0.008 at 299 × 299 resolution on the internal test set and 0.868-0.950 on the external datasets. Patient sex, age, weight, and other factors were identified as key model features. The model achieved an AUC of 0.73±0.01 for abnormality prediction versus 0.58±0.01 achieved by age prediction error.
Conclusion: The image features used by a deep learning patient re-identification model for chest radiographs corresponded to intuitive human-interpretable characteristics, and changes in these identifying features over time may act as a marker for emerging abnormality.
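For context on the retrieval results reported above, the Precision@1 metric can be sketched as the fraction of query images whose single nearest database embedding belongs to the same patient. The snippet below is a minimal illustrative implementation only, not the paper's code; the function and variable names, the use of cosine similarity, and the toy data are all assumptions.

```python
import numpy as np

def precision_at_1(query_emb, db_emb, query_ids, db_ids):
    """Fraction of queries whose nearest database embedding
    (by cosine similarity) comes from the same patient."""
    # L2-normalise rows so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = db_emb / np.linalg.norm(db_emb, axis=1, keepdims=True)
    nearest = np.argmax(q @ d.T, axis=1)  # index of the top-1 match per query
    return float(np.mean(db_ids[nearest] == query_ids))

# Toy example: three database patients with 2-D embeddings,
# two queries that should each retrieve their own patient.
db = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
db_ids = np.array([0, 1, 2])
queries = np.array([[0.9, 0.1], [0.1, 0.9]])
query_ids = np.array([0, 1])
print(precision_at_1(queries, db, query_ids, db_ids))  # 1.0
```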
Original language | English
---|---
Journal | Radiology: Artificial intelligence
DOIs |
Publication status | E-pub ahead of print, 20 Sept 2023