Explaining away results in more robust visual tracking

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Many current trackers utilise an appearance model to localise the target object in each frame. However, such approaches often fail when similar-looking distractor objects appear in the surrounding background, meaning that target appearance alone is insufficient for robust tracking. In contrast, humans treat distractor objects as additional visual cues when inferring the position of the target. Inspired by this observation, this paper proposes a novel tracking architecture in which not only the appearance of the tracked object but also the appearance of distractors detected in previous frames is taken into consideration, using a form of probabilistic inference known as explaining away. This mechanism increases the robustness of tracking by making it more likely that the target appearance model is matched to the true target, rather than to similar-looking regions of the current frame. The proposed method can be combined with many existing trackers. Combining it with SiamFC, DaSiamRPN, Super_DiMP, and ARSuper_DiMP all resulted in an increase in tracking accuracy compared to that achieved by the underlying tracker alone. When combined with Super_DiMP and ARSuper_DiMP, the resulting trackers produce performance that is competitive with the state of the art on seven popular benchmarks.
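The core idea can be illustrated in miniature. The sketch below is a hypothetical, simplified rendering of "explaining away" for tracking, not the paper's actual model: the target appearance model's response map is renormalised against response maps for previously detected distractors, so a location whose high score is equally well explained by a distractor is down-weighted. The function name and the additive competition rule are assumptions made for illustration.

```python
import numpy as np

def explain_away(target_scores, distractor_scores):
    """Suppress target responses that are explained by distractors.

    target_scores: (H, W) response map from the target appearance model.
    distractor_scores: (K, H, W) response maps for K distractor models.
    Returns an (H, W) map in [0, 1]: the fraction of each location's total
    appearance evidence that is attributable to the target rather than to
    any known distractor.
    """
    # Total evidence at each location: target plus all distractor responses.
    competition = target_scores + distractor_scores.sum(axis=0)
    # A location matched only by the target keeps a score near its raw value;
    # a location also matched by a distractor is "explained away" and shrinks.
    return target_scores / (competition + 1e-8)

# Toy demo: two candidate locations with equal raw target scores.
target = np.array([[0.9, 0.9]])
# One distractor model fires strongly at the second location only.
distractors = np.array([[[0.0, 0.9]]])
posterior = explain_away(target, distractors)
# The first location retains its evidence; the second is suppressed,
# even though both matched the target model equally well.
```

The division plays the role of the competition between causes in a Bayesian "explaining away" pattern: once a distractor accounts for the appearance evidence at a location, that location no longer supports the target hypothesis as strongly.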

Original language: English
Journal: VISUAL COMPUTER
DOIs
Publication status: Published - 5 Apr 2022

Keywords

  • Distractor suppression
  • Explaining away
  • Object tracking
  • Tracking-by-Detection trackers
