Learning to compensate spectral coloring in a LED-based photoacoustic/ultrasound imaging system

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

Photoacoustic (PA) imaging combines optical spectroscopic contrast with deep tissue penetration, offering valuable functional, molecular, and structural information about tissue. However, a long-standing challenge in PA imaging is that the quantification accuracy of tissue chromophore concentrations remains limited due to the spectral colouring effect. Monte Carlo (MC) simulation is regarded as the gold standard for modelling light transport in tissue, but it can be computationally demanding and is therefore unsuitable for real-time applications. We propose a time-efficient solution using conditional generative adversarial networks (cGANs) to generate light fluence distributions within tissue for real-time spectral decolouring in PA imaging. The networks were trained to predict the light fluence distribution from realistic tissue anatomy and optical properties, using MC simulation as ground truth. We achieved high-quality light fluence synthesis, with a peak signal-to-noise ratio of 31.9 dB using in vivo segmentations. We also demonstrated the validity of spectral decolouring for PA quantification, with an absorption coefficient estimation error of around 0.05 using numerical phantoms. Thus, this approach holds promise for enhancing the quantification performance of PA imaging in real time.
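
The decolouring step described above can be pictured as dividing the reconstructed multispectral PA images by the network-predicted light fluence and then linearly unmixing chromophore contributions. The sketch below illustrates that idea in NumPy under stated assumptions: it takes a per-wavelength fluence map as given (e.g. produced by a trained cGAN) rather than implementing the network, and all function names, array shapes, and extinction values are illustrative, not taken from the paper.

# Minimal sketch of fluence-based spectral decolouring, assuming a predicted
# fluence map per wavelength is already available (e.g. from a trained cGAN).
# Array names, shapes, and extinction values are illustrative assumptions.
import numpy as np

def decolour(pa_images, fluence_maps, eps=1e-8):
    # pa_images    : (n_wavelengths, H, W) reconstructed PA amplitude images
    # fluence_maps : (n_wavelengths, H, W) predicted light fluence (a.u.)
    # Dividing out the wavelength-dependent fluence leaves images that are
    # (approximately) proportional to the optical absorption coefficient.
    return pa_images / (fluence_maps + eps)

def unmix(mu_a, extinction):
    # mu_a       : (n_wavelengths, H, W) absorption-proportional images
    # extinction : (n_wavelengths, n_chromophores) extinction spectra,
    #              e.g. columns for HbO2 and Hb
    # Least-squares linear unmixing of relative chromophore concentrations.
    n_wl, h, w = mu_a.shape
    flat = mu_a.reshape(n_wl, -1)                     # (n_wl, H*W)
    conc, *_ = np.linalg.lstsq(extinction, flat, rcond=None)
    return conc.reshape(-1, h, w)                     # (n_chromophores, H, W)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pa = rng.random((2, 128, 128))         # two wavelengths (e.g. 750/850 nm LEDs)
    phi = rng.random((2, 128, 128)) + 0.1  # predicted fluence, strictly positive
    eps_matrix = np.array([[0.9, 0.3],     # illustrative extinction coefficients
                           [0.4, 0.8]])
    conc_maps = unmix(decolour(pa, phi), eps_matrix)
    print(conc_maps.shape)                 # (2, 128, 128) chromophore maps

In this picture, the cGAN replaces only the expensive MC fluence estimate; the compensation and unmixing themselves remain simple per-pixel operations, which is what makes real-time use plausible.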
Original language: English
Title of host publication: Proceedings Volume 12842, Photons Plus Ultrasound: Imaging and Sensing 2024
Place of Publication: San Francisco, California
Publisher: SPIE-Intl Soc Optical Eng
Volume: 12842
Publication status: Published - 12 Mar 2024
