Abstract
Purpose
In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast levels, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases by introducing constraints or by identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application.
Methods
This paper proposes a model-to-image registration approach instead, because it is common in image-guided interventions to create anatomical models for diagnosis, planning or guidance prior to procedures. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images.
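The abstract does not specify the agent's observations, action space, or network architecture. As a minimal sketch of the imitation-learning mechanics only, the following illustrative Python (all names, sizes, and step values are assumptions, not the paper's method) uses a greedy expert, which has access to the ground-truth pose during training, to supply the pose-update actions a network would be trained to imitate, and runs the same iterative refinement loop that would be used at inference.

```python
import numpy as np

# Illustrative sketch: rigid 3D/2D registration reduced to translation
# steps, with a greedy expert standing in for the trained network.

rng = np.random.default_rng(42)
model = rng.normal(size=(200, 3)) * 30.0      # preoperative model points (mm)
t_true = np.array([8.0, -5.0, 3.0])           # ground-truth translation (mm)

# Action set: +/-1 mm steps along each axis (rotations omitted for brevity).
ACTIONS = np.array([[s if i == a else 0.0 for i in range(3)]
                    for a in range(3) for s in (1.0, -1.0)])

def error(t):
    """Mean 3D point-to-point distance (mm) between the model at pose t
    and at the ground-truth pose t_true."""
    return np.linalg.norm((model + t) - (model + t_true), axis=1).mean()

def expert_action(t):
    """Greedy expert: the step that most reduces the error. During
    training, (state, expert_action) pairs would supervise the network;
    at test time the learned policy replaces this function."""
    return ACTIONS[np.argmin([error(t + a) for a in ACTIONS])]

t = np.zeros(3)                                # initial pose estimate
for _ in range(40):                            # iterative refinement loop
    t = t + expert_action(t)
print(f"residual registration error: {error(t):.2f} mm")
```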
Results
Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was 2.92±2.22 mm on 1000 test cases, superior to that of manual (6.48±5.6 mm) and gradient-based (6.79±4.75 mm) registration. High robustness is shown in 19 clinical CRT cases.
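The errors above are reported in millimetres; a common metric for such comparisons is the mean Euclidean distance between corresponding model points after registration. The snippet below is an assumed stand-in for the paper's metric, which is not defined in this abstract.

```python
import numpy as np

def registration_error(points_gt, points_reg):
    """Mean Euclidean distance (mm) between corresponding 3D points;
    an assumed stand-in for the paper's reported error metric."""
    return np.linalg.norm(points_gt - points_reg, axis=1).mean()

# Toy check: a residual 3 mm shift along x yields a 3 mm error.
pts = np.random.default_rng(0).normal(size=(100, 3)) * 20.0
print(registration_error(pts, pts + np.array([3.0, 0.0, 0.0])))  # 3.0
```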
Conclusion
Besides demonstrating the proposed method's feasibility in a clinical environment, the evaluation has shown good accuracy and high robustness, indicating that it could be applied in image-guided interventions.
Original language | English |
---|---|
Pages (from-to) | 1141-1149 |
Number of pages | 9 |
Journal | International Journal of Computer Assisted Radiology and Surgery |
Volume | 13 |
Issue number | 8 |
Early online date | 12 May 2018 |
DOIs | |
Publication status | Published - Aug 2018 |