TY - CONF
T1 - Robust Template Matching via Hierarchical Convolutional Features from a Shape Biased CNN
AU - Gao, Bo
AU - Spratling, Michael
N1 - Funding Information:
Acknowledgements: The authors acknowledge use of the research computing facility at King’s College London, Rosalind (https://rosalind.kcl.ac.uk), and the Joint Academic Data Science Endeavour (JADE) facility. This research was funded by the China Scholarship Council.
Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2022/1/1
AB - Finding a template in a search image is an important task underlying many computer vision applications. Recent approaches perform template matching in a deep feature space produced by a convolutional neural network (CNN), which has been found to provide more tolerance to changes in appearance. In this article, we investigate whether enhancing the CNN’s encoding of shape information can produce more distinguishable features, so as to improve the performance of template matching. This investigation results in a new template matching method that produces state-of-the-art results on a standard benchmark. To confirm these results, we also create a new benchmark and show that the proposed method outperforms existing techniques on this new dataset. Our code and dataset are available at: https://github.com/iminfine/Deep-DIM.
UR - http://www.scopus.com/inward/record.url?scp=85126338138&partnerID=8YFLogxK
DO - 10.1007/978-981-16-6963-7_31
M3 - Paper
SP - 333
EP - 344
T2 - International Conference on Image, Vision and Intelligent Systems
Y2 - 19 June 2021 through 20 June 2021
ER -