TY - JOUR
T1 - Mutual Information-Based Disentangled Neural Networks for Classifying Unseen Categories in Different Domains
T2 - Application to Fetal Ultrasound Imaging
AU - Meng, Qingjie
AU - Matthew, Jacqueline
AU - Zimmer, Veronika A.
AU - Gomez, Alberto
AU - Lloyd, David F.A.
AU - Rueckert, Daniel
AU - Kainz, Bernhard
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2021/2
Y1 - 2021/2
N2 - Deep neural networks exhibit limited generalizability across images with different entangled domain features and categorical features. Learning generalizable features that can form universal categorical decision boundaries across domains is an interesting and difficult challenge. This problem occurs frequently in medical imaging applications when attempts are made to deploy and improve deep learning models across different image acquisition devices, across acquisition parameters, or when some classes are unavailable in new training databases. To address this problem, we propose Mutual Information-based Disentangled Neural Networks (MIDNet), which extract generalizable categorical features to transfer knowledge to unseen categories in a target domain. The proposed MIDNet adopts a semi-supervised learning paradigm to alleviate the dependency on labeled data. This is important for real-world applications where data annotation is time-consuming, costly, and requires training and expertise. We extensively evaluate the proposed method on fetal ultrasound datasets for two different image classification tasks where domain features are respectively defined by shadow artifacts and image acquisition devices. Experimental results show that the proposed method outperforms the state of the art on the classification of unseen categories in a target domain with sparsely labeled training data.
AB - Deep neural networks exhibit limited generalizability across images with different entangled domain features and categorical features. Learning generalizable features that can form universal categorical decision boundaries across domains is an interesting and difficult challenge. This problem occurs frequently in medical imaging applications when attempts are made to deploy and improve deep learning models across different image acquisition devices, across acquisition parameters, or when some classes are unavailable in new training databases. To address this problem, we propose Mutual Information-based Disentangled Neural Networks (MIDNet), which extract generalizable categorical features to transfer knowledge to unseen categories in a target domain. The proposed MIDNet adopts a semi-supervised learning paradigm to alleviate the dependency on labeled data. This is important for real-world applications where data annotation is time-consuming, costly, and requires training and expertise. We extensively evaluate the proposed method on fetal ultrasound datasets for two different image classification tasks where domain features are respectively defined by shadow artifacts and image acquisition devices. Experimental results show that the proposed method outperforms the state of the art on the classification of unseen categories in a target domain with sparsely labeled training data.
KW - domain adaptation
KW - image classification
KW - representation disentanglement
KW - semi-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85100657220&partnerID=8YFLogxK
U2 - 10.1109/TMI.2020.3035424
DO - 10.1109/TMI.2020.3035424
M3 - Article
C2 - 33141662
AN - SCOPUS:85100657220
SN - 0278-0062
VL - 40
SP - 722
EP - 734
JO - IEEE Transactions on Medical Imaging
JF - IEEE Transactions on Medical Imaging
IS - 2
M1 - 9247170
ER -