Abstract
Joint PET-MR imaging is a medical imaging technique that combines the strengths of positron emission tomography (PET) and magnetic resonance imaging (MRI). It yields more detailed and informative images than either modality alone, exploiting both the functional information of PET and the high-resolution anatomical images of MRI, so that the shortcomings of each modality are compensated by the other. Joint PET-MR imaging is mainly applied in the diagnosis and treatment of cancer, neurological disorders, and cardiovascular diseases.

Long scanning times are required for PET and MR imaging to obtain high-quality data. In addition, PET data are inherently noisy and suffer from poor spatial resolution. It is therefore crucial to develop reconstruction methods that achieve, from low-dose and undersampled data, qualitative and quantitative accuracy similar to that of high-count, fully sampled reconstructions. Model-based image reconstruction algorithms provide state-of-the-art reconstructions of PET and MR data: they model the processes occurring during data acquisition and perform regularization by introducing prior knowledge into the reconstruction. The PET and MR images can guide one another's regularization to make the most of both modalities. A limitation of guided regularization, however, is the possibility of imposing too much of the prior image's structure, which prevents its clinical use. This thesis presents three novel frameworks for regularized PET and joint PET-MR image reconstruction with deep learning.
The first method focuses on deep-learned independent PET reconstruction and proposes a memory-efficient method for training fully unrolled networks, applied to forward-backward splitting expectation-maximization (FBSEM). The second framework addresses joint deep-learned PET-MR image reconstruction: a joint reconstruction is unrolled and the joint regularizer is learned, and the superiority of single-modality loss training is demonstrated. The settings under which a joint reconstruction benefits MR reconstruction are also investigated, using various undersampling factors for the MR data and different count levels for the PET data. The final method is a simultaneous independent and MR-guided unrolled PET reconstruction, learned jointly with an optimal combination of the two reconstructions into one final image. Combining the two reconstructions in post-processing overcomes the trade-off of guided and joint reconstruction between global and local accuracy in areas of PET-MR mismatch: performance similar to MR-guided methods is achieved in regions where the modalities match, and performance similar to purely independent PET reconstruction in regions of mismatch. Overall, this thesis demonstrates the benefit of deep learning for learning optimal regularization during reconstruction and for combining multi-modality information in PET-MR imaging.
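To make the unrolled-reconstruction idea concrete, the following is a minimal sketch of one FBSEM-style iteration: an MLEM update fused with a regularizing (denoised) image via the closed-form solution of the resulting quadratic surrogate. The toy system matrix `A`, the simple linear `denoiser` (standing in for the trained network), and the value of `beta` are all illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear PET forward model: A maps an image (n_vox voxels) to n_bins sinogram bins.
n_vox, n_bins = 16, 32
A = rng.random((n_bins, n_vox))           # hypothetical system matrix
x_true = rng.random(n_vox) + 0.5
y = rng.poisson(A @ x_true * 50) / 50.0   # noisy measured data (scaled counts)

sens = A.T @ np.ones(n_bins)              # sensitivity image A^T 1

def em_step(x):
    """One MLEM update: x * A^T(y / Ax) / (A^T 1)."""
    ratio = y / np.maximum(A @ x, 1e-12)
    return x * (A.T @ ratio) / sens

def denoiser(x):
    """Stand-in for the learned CNN regularizer (here: mild shrink towards the mean)."""
    return 0.9 * x + 0.1 * x.mean()

def fbsem_iteration(x, beta=0.1):
    """One unrolled block: EM image fused with the denoised image by solving
    beta*x^2 + (1 - beta*z)*x - x_em = 0 for its non-negative root."""
    x_em = em_step(x)
    z = denoiser(x)
    b = 1.0 - beta * z
    return 2.0 * x_em / (b + np.sqrt(b * b + 4.0 * beta * x_em))

x = np.ones(n_vox)
for _ in range(20):                       # unrolled iterations; weights would be shared/learned
    x = fbsem_iteration(x)
```

In a trained network, `denoiser` would be a CNN whose weights are learned end-to-end through all unrolled iterations, which is what makes memory-efficient training of the fully unrolled scheme non-trivial.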
| Date of Award | 1 Jul 2023 |
|---|---|
| Original language | English |
| Awarding Institution | |
| Supervisor | Andrew Reader (Supervisor) & Julia Schnabel (Supervisor) |