Articulated multi-instrument 2-d pose estimation using fully convolutional networks

Xiaofei Du*, Thomas Kurmann, Ping Lin Chang, Maximilian Allan, Sebastien Ourselin, Raphael Sznitman, John D. Kelly, Danail Stoyanov

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

91 Citations (Scopus)

Abstract

Instrument detection, pose estimation, and tracking in surgical videos are important vision components for computer-assisted interventions. While significant advances have been made in recent years, articulation detection remains a major challenge. In this paper, we propose a deep neural network for articulated multi-instrument 2-D pose estimation, trained on detailed annotations of endoscopic and microscopic data sets. Our model is a fully convolutional detection-regression network: joints and the associations between joint pairs in our instrument model are located by the detection subnetwork and subsequently refined by a regression subnetwork. From the model's output, the poses of the instruments are inferred using maximum bipartite graph matching. Our estimation framework is driven entirely by deep learning, without any direct kinematic information from a robot. We evaluate the framework on the single-instrument RMIT data set and on the multi-instrument EndoVis and in vivo data sets, with promising results. In addition, the data set annotations are publicly released along with our code and model.
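The final step the abstract describes, inferring instrument poses by maximum bipartite graph matching over joint-pair association scores, can be sketched as follows. This is not the authors' released code; it is a minimal illustration assuming a small score matrix between two joint sets (e.g. detected tip joints vs. shaft joints), solved by exhaustive search, which is feasible for the handful of joints per frame in this setting:

```python
from itertools import permutations

def max_bipartite_match(scores):
    """Maximum-weight bipartite matching by exhaustive search:
    try every one-to-one assignment of left joints to right joints
    and keep the assignment with the highest total association score."""
    n = len(scores)
    best_pairs, best_total = [], float("-inf")
    for perm in permutations(range(n)):
        total = sum(scores[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total = total
            best_pairs = [(i, perm[i]) for i in range(n)]
    return best_pairs, best_total

# Toy example: association scores between three joints of one type
# and three of another (hypothetical numbers, not from the paper).
scores = [
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.1],
    [0.1, 0.3, 0.7],
]
pairs, total = max_bipartite_match(scores)
# pairs -> [(0, 0), (1, 1), (2, 2)]
```

In practice a polynomial-time solver (e.g. the Hungarian algorithm) would replace the exhaustive search for larger joint sets, but the principle is the same: each joint is paired with at most one partner so that the summed association scores are maximal.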

Original language: English
Pages (from-to): 1276-1287
Number of pages: 12
Journal: IEEE Transactions on Medical Imaging
Volume: 37
Issue number: 5
DOIs
Publication status: Published - 1 May 2018

Keywords

  • articulated pose estimation
  • fully convolutional networks
  • surgical instrument detection
  • surgical vision
