Mobile augmented reality applications rely on automatically matching a captured visual scene to an image in a database. This is typically achieved by deriving a set of features for the captured image, transmitting them over a network and then matching them against features derived from a database of reference images. A fundamental problem is to select a small set of robust features such that matching accuracy remains invariant to distortions caused by camera capture whilst minimising the bit rate required for their transmission. In this paper, novel feature selection methods are proposed based on the entropy of the image content, the entropy of the extracted features and the Discrete Cosine Transform (DCT) coefficients. The methods proposed in the descriptor domain and the DCT domain achieve better matching accuracy under low bit rate transmission than the state-of-the-art peak-based feature selection used within the MPEG-7 Compact Descriptors for Visual Search (CDVS). This is verified by image retrieval experiments on a realistic dataset with complex real-world capture distortions. Results show that the proposed method can improve matching accuracy for various detectors, and also indicate that feature selection not only achieves low bit rate transmission but also results in higher matching accuracy than using all features when applied to distorted images. Hence, even if all the features can be transmitted in high transmission bandwidth scenarios, feature selection should still be applied to the distorted query image to ensure high matching accuracy.
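As an illustration of the entropy-based selection idea described in the abstract (a minimal sketch, not the paper's exact algorithm), the following Python/OpenCV snippet ranks SIFT keypoints by the Shannon entropy of a surrounding image patch and keeps only as many as an assumed bit budget allows; the patch size, the bits-per-feature figure and the function names are illustrative assumptions.

```python
# Illustrative sketch only: rank keypoints by the Shannon entropy of a local
# patch and keep the highest-entropy ones that fit an assumed bit budget.
import cv2
import numpy as np

def patch_entropy(gray, kp, half=16):
    """Shannon entropy of the grey-level histogram around a keypoint (assumed 32x32 patch)."""
    x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
    patch = gray[max(0, y - half):y + half, max(0, x - half):x + half]
    if patch.size == 0:
        return 0.0
    hist = np.bincount(patch.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_features(gray, bit_budget=16384, bits_per_feature=512):
    """Keep the highest-entropy keypoints whose descriptors fit within bit_budget."""
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, None)
    ranked = sorted(keypoints, key=lambda k: patch_entropy(gray, k), reverse=True)
    max_features = bit_budget // bits_per_feature  # crude per-feature cost estimate
    kept = ranked[:max_features]
    kept, descriptors = sift.compute(gray, kept)
    return kept, descriptors

if __name__ == "__main__":
    img = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
    kps, descs = select_features(img)
    print(f"kept {len(kps)} keypoints, descriptor block of {descs.nbytes} bytes")
```

In a real CDVS-style pipeline the per-feature cost would come from the actual descriptor compression scheme rather than a fixed constant, and the ranking criterion would be one of the entropy or DCT-domain measures proposed in the paper.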
Citation
Y. Cao, C. Ritz and R. Raad, "Adaptive and robust feature selection for low bitrate mobile augmented reality applications," in Proc. 8th International Conference on Signal Processing and Communication Systems (ICSPCS), 2014, pp. 1-7.
Parent title
2014, 8th International Conference on Signal Processing and Communication Systems, ICSPCS 2014 - Proceedings