King's College London

Research portal

Learned Optical Flow for Intra-Operative Tracking of the Retinal Fundus

Research output: Contribution to journal › Article

Standard

Learned Optical Flow for Intra-Operative Tracking of the Retinal Fundus. / Ravasio, Claudio; Pissas, Theodoros; Bloch, Edward; Flores, Blanca; Jalali, Sepehr; Stoyanov, Danail; Cardoso, M. Jorge; Da Cruz, Lyndon; Bergeles, Christos.

In: International Journal of Computer Assisted Radiology and Surgery, Vol. 15, No. 5, 01.05.2020, p. 827-836.

Research output: Contribution to journal › Article

Harvard

Ravasio, C, Pissas, T, Bloch, E, Flores, B, Jalali, S, Stoyanov, D, Cardoso, MJ, Da Cruz, L & Bergeles, C 2020, 'Learned Optical Flow for Intra-Operative Tracking of the Retinal Fundus', International Journal of Computer Assisted Radiology and Surgery, vol. 15, no. 5, pp. 827-836. https://doi.org/10.1007/s11548-020-02160-9

APA

Ravasio, C., Pissas, T., Bloch, E., Flores, B., Jalali, S., Stoyanov, D., Cardoso, M. J., Da Cruz, L., & Bergeles, C. (2020). Learned Optical Flow for Intra-Operative Tracking of the Retinal Fundus. International Journal of Computer Assisted Radiology and Surgery, 15(5), 827-836. https://doi.org/10.1007/s11548-020-02160-9

Vancouver

Ravasio C, Pissas T, Bloch E, Flores B, Jalali S, Stoyanov D et al. Learned Optical Flow for Intra-Operative Tracking of the Retinal Fundus. International Journal of Computer Assisted Radiology and Surgery. 2020 May 1;15(5):827-836. https://doi.org/10.1007/s11548-020-02160-9

Author

Ravasio, Claudio ; Pissas, Theodoros ; Bloch, Edward ; Flores, Blanca ; Jalali, Sepehr ; Stoyanov, Danail ; Cardoso, M. Jorge ; Da Cruz, Lyndon ; Bergeles, Christos. / Learned Optical Flow for Intra-Operative Tracking of the Retinal Fundus. In: International Journal of Computer Assisted Radiology and Surgery. 2020 ; Vol. 15, No. 5. pp. 827-836.

BibTeX

@article{b2dce4caaca845f1bbbbfd38665dbd0f,
title = "Learned Optical Flow for Intra-Operative Tracking of the Retinal Fundus",
abstract = "Purpose: Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network to densely predict semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools. Methods: As manual annotation of optical flow is infeasible, we propose a flexible algorithm for generation of large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases. Results: The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions which occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos. Conclusions: The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system which can assist with intra-operative tracking of moving surgical targets even when occluded.",
keywords = "Deep learning, Optical flow, Retinal tracking, Synthetic data",
author = "Claudio Ravasio and Theodoros Pissas and Edward Bloch and Blanca Flores and Sepehr Jalali and Danail Stoyanov and Cardoso, {M. Jorge} and {Da Cruz}, Lyndon and Christos Bergeles",
year = "2020",
month = may,
day = "1",
doi = "10.1007/s11548-020-02160-9",
language = "English",
volume = "15",
pages = "827--836",
journal = "International Journal of Computer Assisted Radiology and Surgery",
issn = "1861-6410",
publisher = "Springer Verlag",
number = "5",

}
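
The abstract in the record above notes that manual annotation of optical flow is infeasible, so training pairs with exact ground-truth flow are generated synthetically from intra-operative retinal images. As a minimal, hedged illustration of that core idea only, the sketch below warps an image with a random similarity transform and derives the induced dense flow field; the paper's actual generator is substantially richer (e.g. tool occlusions and appearance changes) and is not reproduced here. The helper name `synthetic_pair` and its parameters are illustrative assumptions, not code from the paper.

```python
# Minimal sketch (illustrative, not the authors' generation pipeline): create a
# synthetic training pair with exactly known dense optical flow by warping a
# retinal image with a random similarity transform.
import cv2
import numpy as np

def synthetic_pair(image, max_angle=5.0, max_shift=10.0, rng=None):
    """Return (image, warped_image, flow) where flow maps each pixel of `image`
    to its position in `warped_image`; flow has shape (H, W, 2)."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    angle = rng.uniform(-max_angle, max_angle)            # degrees
    shift = rng.uniform(-max_shift, max_shift, size=2)    # pixels
    # 2x3 similarity matrix about the image centre, plus a random translation.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    M[:, 2] += shift
    warped = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)
    # Ground-truth forward flow: where each source pixel lands, minus its origin.
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    new_x = M[0, 0] * xs + M[0, 1] * ys + M[0, 2]
    new_y = M[1, 0] * xs + M[1, 1] * ys + M[1, 2]
    flow = np.stack([new_x - xs, new_y - ys], axis=-1)
    return image, warped, flow
```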

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Learned Optical Flow for Intra-Operative Tracking of the Retinal Fundus

AU - Ravasio, Claudio

AU - Pissas, Theodoros

AU - Bloch, Edward

AU - Flores, Blanca

AU - Jalali, Sepehr

AU - Stoyanov, Danail

AU - Cardoso, M. Jorge

AU - Da Cruz, Lyndon

AU - Bergeles, Christos

PY - 2020/5/1

Y1 - 2020/5/1

N2 - Purpose: Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network to densely predict semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools. Methods: As manual annotation of optical flow is infeasible, we propose a flexible algorithm for generation of large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases. Results: The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions which occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos. Conclusions: The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system which can assist with intra-operative tracking of moving surgical targets even when occluded.

AB - Purpose: Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network to densely predict semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools. Methods: As manual annotation of optical flow is infeasible, we propose a flexible algorithm for generation of large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases. Results: The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions which occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos. Conclusions: The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system which can assist with intra-operative tracking of moving surgical targets even when occluded.

KW - Deep learning

KW - Optical flow

KW - Retinal tracking

KW - Synthetic data

UR - http://www.scopus.com/inward/record.url?scp=85084061872&partnerID=8YFLogxK

U2 - 10.1007/s11548-020-02160-9

DO - 10.1007/s11548-020-02160-9

M3 - Article

VL - 15

SP - 827

EP - 836

JO - International Journal of Computer Assisted Radiology and Surgery

JF - International Journal of Computer Assisted Radiology and Surgery

SN - 1861-6410

IS - 5

ER -
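
The abstract evaluates flow estimation by tracking a grid and sparsely annotated points through real surgical clips. As a hedged illustration of that tracking step only, the sketch below propagates a single point through a video by accumulating dense flow between consecutive frames, using OpenCV's classical Farnebäck estimator roughly in the role of the variational baselines the abstract compares against. The paper's contribution replaces the flow estimator with a U-Net-based network trained on synthetic data, which is not reproduced here; `track_point` is an illustrative helper, not code from the paper.

```python
# Minimal sketch (not the authors' learned method): follow a retinal point of
# interest across frames by accumulating dense optical flow. Farnebäck flow
# stands in for the variational baselines mentioned in the abstract.
import cv2
import numpy as np

def track_point(frames, point):
    """Follow a single (x, y) point through a list of BGR frames via dense flow."""
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    x, y = float(point[0]), float(point[1])
    trajectory = [(x, y)]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense flow: one (dx, dy) vector per pixel, previous frame -> this frame.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Bilinear lookup of the flow at the current sub-pixel position.
        h, w = flow.shape[:2]
        xi, yi = int(np.clip(x, 0, w - 2)), int(np.clip(y, 0, h - 2))
        fx, fy = x - xi, y - yi
        patch = flow[yi:yi + 2, xi:xi + 2]
        dx = (patch[0, 0, 0] * (1 - fx) * (1 - fy) + patch[0, 1, 0] * fx * (1 - fy)
              + patch[1, 0, 0] * (1 - fx) * fy + patch[1, 1, 0] * fx * fy)
        dy = (patch[0, 0, 1] * (1 - fx) * (1 - fy) + patch[0, 1, 1] * fx * (1 - fy)
              + patch[1, 0, 1] * (1 - fx) * fy + patch[1, 1, 1] * fx * fy)
        x, y = x + dx, y + dy
        trajectory.append((x, y))
        prev_gray = gray
    return trajectory
```

As the abstract notes, purely frame-to-frame baselines of this kind lose the point when a surgical tool occludes it, which is the failure mode the learned, segmentation-aware flow network is designed to handle.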

