Real-Time Deep-Learned Reconstruction for a Scanning Intraoperative Probe

Research output: Contribution to journal › Article › peer-review


Abstract

Accurate delineation of the boundary between cancerous and healthy tissue during cancer resection surgeries is important to ensure complete removal of cancerous cells while preserving healthy tissue. Labelling cancer cells with radiotracers, and then using a probe during surgery to detect the radiotracer distribution, is a potential solution for accurate tumour localisation and hence better surgical outcomes. This work explores the feasibility of using deep learning to reconstruct a radiotracer distribution from data acquired by an intraoperative probe. The probe’s sensor array outputs (SAOs), obtained by scanning the probe over a region of interest, are supplied to the deep network, which then outputs a reconstructed radiotracer distribution for that region. This initial work demonstrates that the deep network used here, a convolutional encoder-decoder (CED), can successfully reconstruct simulated 2D radiotracer distributions from synthesised input data. However, the network was unable to generalise reliably when tested with count levels not present in the training set. The network must therefore be trained with the expected count levels, or it should include estimation of epistemic uncertainty, to avoid misleading outcomes. We also show that test-time augmentation can improve reconstructed image quality, and hence can be used to reduce the amount of training data required.
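
The abstract does not give implementation details, but a minimal sketch may help make the described pipeline concrete: a convolutional encoder-decoder that maps stacked sensor array outputs (SAOs) to a 2D radiotracer image, with flip-based test-time augmentation averaged at inference. The layer sizes, the 16-channel SAO input, the 64x64 scan grid, and the predict_with_tta helper below are illustrative assumptions written in PyTorch, not the authors' actual architecture or augmentation scheme.

    # Minimal sketch of a convolutional encoder-decoder (CED) of the kind the
    # abstract describes. Shapes and channel counts are illustrative assumptions.
    import torch
    import torch.nn as nn

    class CED(nn.Module):
        def __init__(self, in_channels: int = 16, out_channels: int = 1):
            super().__init__()
            # Encoder: downsample the stacked SAOs into a compact feature map.
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )
            # Decoder: upsample back to the spatial size of the region of interest.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, out_channels, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True),  # radiotracer activity is non-negative
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.decoder(self.encoder(x))

    def predict_with_tta(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
        # Simple test-time augmentation: average predictions over horizontal and
        # vertical flips (a common TTA scheme, assumed here for illustration).
        model.eval()
        with torch.no_grad():
            preds = [model(x)]
            for dim in (2, 3):  # flip along height and width
                flipped = torch.flip(x, dims=[dim])
                preds.append(torch.flip(model(flipped), dims=[dim]))
        return torch.stack(preds).mean(dim=0)

    # Example usage with a dummy batch: 16 SAO channels over a 64x64 scan grid.
    if __name__ == "__main__":
        net = CED(in_channels=16, out_channels=1)
        saos = torch.rand(1, 16, 64, 64)
        recon = predict_with_tta(net, saos)
        print(recon.shape)  # torch.Size([1, 1, 64, 64])

In this sketch, averaging the flipped predictions plays the role the abstract assigns to test-time augmentation: it reduces variance in the reconstruction at inference time without requiring additional training data.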

Original language: English
Pages (from-to): 1
Number of pages: 1
Journal: Transactions on Radiation and Plasma Medical Sciences
DOIs
Publication status: Published - 27 Sept 2022
