Research output: Contribution to journal › Article › peer-review
DCT-Inspired Feature Transform for Image Retrieval and Reconstruction. / Wang, Yunhe; Shi, Miaojing; You, Shan et al.
In: IEEE TRANSACTIONS ON IMAGE PROCESSING, Vol. 25, No. 9, 09.2016, p. 4406-4420.
TY - JOUR
T1 - DCT-Inspired Feature Transform for Image Retrieval and Reconstruction
AU - Wang, Yunhe
AU - Shi, Miaojing
AU - You, Shan
AU - Xu, Chao
PY - 2016/9
Y1 - 2016/9
N2 - Scale invariant feature transform (SIFT) is effective for representing images in computer vision tasks, being one of the feature descriptions most resistant to common image deformations. However, two issues remain: first, feature description based on gradient accumulation is not compact and contains redundancies; second, multiple orientations are often extracted from one local region, producing multiple descriptions and harming memory efficiency. To resolve these two issues, this paper introduces a novel method to determine the dominant orientation in multiple-orientation cases, named the discrete cosine transform (DCT) intrinsic orientation, together with a new DCT-inspired feature transform (DIFT). In each local region, it first computes a unique DCT intrinsic orientation via the DCT matrix and rotates the region accordingly, then describes the rotated region with partial DCT matrix coefficients to produce an optimized low-dimensional descriptor. We test the accuracy and robustness of DIFT on real image matching. Extensive experiments on public visual retrieval benchmarks then show that using the DCT intrinsic orientation achieves performance on a par with SIFT while using only 60% of its features; replacing the SIFT description with DIFT reduces the descriptor dimension from 128 to 32 and improves precision. Image reconstruction from DIFT is presented as a further advantage over SIFT.
AB - Scale invariant feature transform (SIFT) is effective for representing images in computer vision tasks, being one of the feature descriptions most resistant to common image deformations. However, two issues remain: first, feature description based on gradient accumulation is not compact and contains redundancies; second, multiple orientations are often extracted from one local region, producing multiple descriptions and harming memory efficiency. To resolve these two issues, this paper introduces a novel method to determine the dominant orientation in multiple-orientation cases, named the discrete cosine transform (DCT) intrinsic orientation, together with a new DCT-inspired feature transform (DIFT). In each local region, it first computes a unique DCT intrinsic orientation via the DCT matrix and rotates the region accordingly, then describes the rotated region with partial DCT matrix coefficients to produce an optimized low-dimensional descriptor. We test the accuracy and robustness of DIFT on real image matching. Extensive experiments on public visual retrieval benchmarks then show that using the DCT intrinsic orientation achieves performance on a par with SIFT while using only 60% of its features; replacing the SIFT description with DIFT reduces the descriptor dimension from 128 to 32 and improves precision. Image reconstruction from DIFT is presented as a further advantage over SIFT.
U2 - 10.1109/TIP.2016.2590323
DO - 10.1109/TIP.2016.2590323
M3 - Article
VL - 25
SP - 4406
EP - 4420
JO - IEEE TRANSACTIONS ON IMAGE PROCESSING
JF - IEEE TRANSACTIONS ON IMAGE PROCESSING
SN - 1057-7149
IS - 9
ER -