Deep homography estimation in dynamic surgical scenes for laparoscopic camera motion extraction

Martin Huber*, Sébastien Ourselin, Christos Bergeles, Tom Vercauteren

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Current laparoscopic camera motion automation relies on rule-based approaches or focuses only on surgical tools. Imitation Learning (IL) methods could alleviate these shortcomings, but have so far been applied to oversimplified setups. Instead of extracting actions from such oversimplified setups, in this work we introduce a method that extracts a laparoscope holder’s actions from videos of laparoscopic interventions. We synthetically add camera motion to a newly acquired dataset of camera-motion-free da Vinci surgery image sequences through a novel homography generation algorithm. The synthetic camera motion serves as a supervisory signal for camera motion estimation that is invariant to object and tool motion. We perform an extensive evaluation of state-of-the-art (SOTA) Deep Neural Networks (DNNs) across multiple compute regimes, finding that our method transfers from our camera-motion-free da Vinci surgery dataset to videos of laparoscopic interventions, outperforming classical homography estimation approaches both in precision, by (Formula presented.), and in runtime on a CPU, by (Formula presented.).
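To illustrate the idea of using synthetic camera motion as a supervisory signal, the following is a minimal sketch (not the authors' released code) of a common scheme in deep homography estimation: a random homography, parameterised by four corner offsets, warps a camera-motion-free frame, and the known offsets serve as the regression target for a DNN. The function names, the OpenCV-based warping, and the maximum corner shift are illustrative assumptions.

```python
# Hypothetical illustration of homography-based supervision generation,
# not the paper's actual homography generation algorithm.
import numpy as np
import cv2


def random_homography(h, w, max_shift=32, rng=None):
    """Perturb the four image corners to obtain a random homography."""
    rng = np.random.default_rng() if rng is None else rng
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    dst = src + rng.uniform(-max_shift, max_shift, size=(4, 2)).astype(np.float32)
    H, _ = cv2.findHomography(src, dst)
    return H, dst - src  # homography and its 4-point offset parameterisation


def make_training_pair(frame, rng=None):
    """Warp a camera-motion-free frame; the known offsets supervise the DNN."""
    h, w = frame.shape[:2]
    H, corner_offsets = random_homography(h, w, rng=rng)
    warped = cv2.warpPerspective(frame, H, (w, h))
    return (frame, warped), corner_offsets  # image pair and regression target
```

Because the warp is synthetic and known exactly, any object or tool motion present in the original sequence is shared by both frames of the pair, which is what makes the supervision invariant to scene dynamics.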

Original language: English
Pages (from-to): 321-329
Number of pages: 9
Journal: Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization
Volume: 10
Issue number: 3
DOIs
Publication status: Published - 2022

Keywords

  • Deep learning
  • homography estimation
  • image processing and analysis
  • laparoscopic surgery
  • virtual reality
  • visual data mining and knowledge discovery
