| Original language | English |
|---|---|
| Article number | 108628 |
| Journal | Signal Processing |
| Volume | 199 |
| Early online date | 22 May 2022 |
| DOIs | |
| Accepted/In press | 21 May 2022 |
| E-pub ahead of print | 22 May 2022 |
| Published | Oct 2022 |
| Additional links | |
Funding Information:
This research was funded by the China Scholarship Council. The authors acknowledge use of the research computing facility at King's College London, Rosalind (https://rosalind.kcl.ac.uk), and the Joint Academic Data Science Endeavour (JADE) facility.
Publisher Copyright:
© 2022 The Author(s)
Abstract:
Most current trackers utilise an appearance model to localise the target object in each frame. However, such approaches often fail when similar-looking distractor objects appear in the surrounding background. This paper promotes an approach that can be combined with many existing trackers to tackle this issue and improve tracking robustness. The proposed approach makes use of two additional cues to target location: shape cues, which are exploited through offline training of the appearance model, and motion cues, which are exploited online to predict the target object's future position from its history of past locations. Combining these additional mechanisms with the existing trackers SiamFC, SiamFC++, Super_DiMP and ARSuper_DiMP increased tracking accuracy in every case compared with that achieved by the corresponding underlying tracker alone. When combined with ARSuper_DiMP, the resulting tracker is shown to outperform all popular state-of-the-art trackers on three benchmark datasets (OTB-100, NFS and LaSOT), and to produce performance competitive with the state of the art on the UAV123, TrackingNet, GOT-10k and VOT2020 datasets.
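The abstract's motion-cue idea (predicting the target's next position from its trajectory and using that prediction to suppress distractor responses) can be illustrated with a minimal sketch. The constant-velocity model, Gaussian weighting, and all function names below are my own assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def predict_next_position(history, window=5):
    """Predict the target's next (x, y) centre with a constant-velocity
    model fitted to the last `window` positions (an assumed model; the
    paper's actual motion model may differ)."""
    pts = np.asarray(history[-window:], dtype=float)
    if len(pts) < 2:
        return pts[-1]  # not enough history: assume the target is static
    velocity = np.mean(np.diff(pts, axis=0), axis=0)  # mean per-frame displacement
    return pts[-1] + velocity

def fuse_motion_cue(response_map, predicted_pos, sigma=20.0):
    """Multiply an appearance-model response map by a Gaussian window
    centred on the motion prediction, penalising distractor peaks that
    lie far from where the target is expected to be."""
    h, w = response_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist_sq = (xs - predicted_pos[0]) ** 2 + (ys - predicted_pos[1]) ** 2
    motion_prior = np.exp(-dist_sq / (2.0 * sigma ** 2))
    return response_map * motion_prior

# Usage: the appearance model fires on both the target and a distant
# distractor; the motion prior keeps the tracker on the target.
history = [(50, 50), (55, 52), (60, 54), (65, 56)]      # past target centres (x, y)
response = np.random.rand(120, 120) * 0.3               # stand-in appearance scores
response[58, 70] = 1.0                                  # true target peak near the prediction
response[100, 20] = 0.95                                # similar-looking distractor far away
fused = fuse_motion_cue(response, predict_next_position(history))
peak = np.unravel_index(np.argmax(fused), fused.shape)  # (row, col) = (y, x)
print("selected position (y, x):", peak)                # stays at the target, (58, 70)
```

In this sketch the appearance response and the motion prior are fused multiplicatively, so a distractor must be both visually similar and near the predicted position to win; other fusion rules (additive, or gating) would serve the same purpose.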