RIS ID

25160

Publication Details

Zhong, Y., Ye, L., Li, W. & Ogunbona, P. (2007). Perceived similarity and visual descriptions in content-based image retrieval. The IEEE International Symposium on Multimedia (pp. 173-180). IEEE Computer Society Press.

Abstract

The use of low-level feature descriptors is pervasive in content-based image retrieval tasks, yet the question of how well these features capture users' intentions remains inconclusive. In this paper we devise experiments to gauge the degree of alignment between the descriptions of target images given by humans and those implicitly provided by low-level image feature descriptors. Data were collected on how humans perceive similarity in images. Using images judged by humans to be similar as ground truth, the performance of several MPEG-7 visual feature descriptors was evaluated. It was found that different descriptors play different roles in different queries and that an appropriate combination of them can improve retrieval performance. This forms a basis for developing adaptive weight assignment to features depending on the query and retrieval task.
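
The query-dependent weighting that the abstract alludes to can be illustrated with a minimal sketch (not taken from the paper): per-descriptor distances between a query and each candidate image are combined with weights that could be tuned per query. The descriptor names, weight values, and the linear combination below are illustrative assumptions.

    # Minimal sketch (illustrative only): ranking candidate images by a
    # weighted combination of per-descriptor distances, where the weights
    # could be adapted per query. Names and values are hypothetical.
    from typing import Dict, List, Tuple

    def combined_distance(distances: Dict[str, float],
                          weights: Dict[str, float]) -> float:
        """Weighted sum of normalized per-descriptor distances."""
        total_weight = sum(weights.values()) or 1.0
        return sum(weights[d] * distances[d] for d in distances) / total_weight

    def rank_candidates(per_image_distances: Dict[str, Dict[str, float]],
                        weights: Dict[str, float]) -> List[Tuple[str, float]]:
        """Order candidate images by increasing combined distance to the query."""
        scored = [(image_id, combined_distance(dists, weights))
                  for image_id, dists in per_image_distances.items()]
        return sorted(scored, key=lambda pair: pair[1])

    if __name__ == "__main__":
        # Hypothetical distances for two candidates over three MPEG-7-style descriptors.
        candidates = {
            "img_001": {"color_layout": 0.20, "edge_histogram": 0.50, "texture": 0.30},
            "img_002": {"color_layout": 0.60, "edge_histogram": 0.10, "texture": 0.40},
        }
        # Query-dependent weights: a colour-dominated query might emphasize colour layout.
        query_weights = {"color_layout": 0.6, "edge_histogram": 0.2, "texture": 0.2}
        print(rank_candidates(candidates, query_weights))

Under these assumptions, changing the weights re-orders the ranked list, which is the effect that adaptive, query-dependent weight assignment would exploit.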

Link to publisher version (DOI)

http://dx.doi.org/10.1109/ISM.Workshops.2007.38