A novel unsupervised camera-aware domain adaptation framework for person re-identification

RIS ID

142273

Publication Details

Qi, L., Wang, L., Huo, J., Zhou, L., Shi, Y., & Gao, Y. (2019). A novel unsupervised camera-aware domain adaptation framework for person re-identification. In Proceedings of the IEEE International Conference on Computer Vision (pp. 8079-8088).

Abstract

© 2019 IEEE. Unsupervised cross-domain person re-identification (Re-ID) faces two key issues. One is the data distribution discrepancy between the source and target domains, and the other is the lack of discriminative information in the target domain. From the perspective of representation learning, this paper proposes a novel end-to-end deep domain adaptation framework to address both. For the first issue, we highlight the presence of camera-level sub-domains as a unique characteristic of person Re-ID, and develop a 'camera-aware' domain adaptation method via adversarial learning. With this method, the learned representation reduces the distribution discrepancy not only between the source and target domains but also across all cameras. For the second issue, we exploit the temporal continuity within each camera of the target domain to create discriminative information. This is implemented by dynamically generating online triplets within each batch, so as to maximally exploit the steadily improving representation during training. Together, these two methods give rise to a new unsupervised domain adaptation framework for person Re-ID. Extensive experiments and ablation studies on benchmark datasets demonstrate its superiority and interesting properties.
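
The two components summarized in the abstract lend themselves to a compact illustration. The sketch below is a minimal PyTorch rendering of the general ideas only, not the authors' released implementation: a camera classifier trained adversarially through a gradient-reversal layer so that features become camera-invariant (and hence less domain-dependent), plus in-batch triplet construction on unlabeled target data that treats temporally continuous frames from the same camera tracklet as positives. All class names, layer sizes, the margin value, and the tracklet-based positive definition are illustrative assumptions.

# Minimal PyTorch sketch of the two ideas above (assumed names and sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient-reversal layer used for adversarial feature alignment."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the feature extractor.
        return -ctx.lambd * grad_output, None


class CameraDiscriminator(nn.Module):
    """Predicts which camera (across source and target) a feature came from.

    Training the feature extractor to fool this classifier via gradient
    reversal encourages camera-invariant, hence domain-invariant, features.
    """

    def __init__(self, feat_dim: int, num_cameras: int, lambd: float = 1.0):
        super().__init__()
        self.lambd = lambd
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, num_cameras),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.classifier(GradReverse.apply(feats, self.lambd))


def online_triplet_loss(feats: torch.Tensor, tracklet_ids: torch.Tensor) -> torch.Tensor:
    """In-batch triplet loss on unlabeled target data.

    feats: (B, D) features; tracklet_ids: (B,) long tensor. Frames from the
    same tracklet (temporally continuous within one camera) act as positives,
    frames from other tracklets as negatives.
    """
    anchors, positives, negatives = [], [], []
    for i in range(len(tracklet_ids)):
        same = (tracklet_ids == tracklet_ids[i]).nonzero(as_tuple=True)[0]
        diff = (tracklet_ids != tracklet_ids[i]).nonzero(as_tuple=True)[0]
        same = same[same != i]
        if len(same) == 0 or len(diff) == 0:
            continue
        # Hardest positive (farthest same-tracklet frame) and
        # hardest negative (closest other-tracklet frame).
        d_pos = F.pairwise_distance(feats[i].expand(len(same), -1), feats[same])
        d_neg = F.pairwise_distance(feats[i].expand(len(diff), -1), feats[diff])
        anchors.append(feats[i])
        positives.append(feats[same[d_pos.argmax()]])
        negatives.append(feats[diff[d_neg.argmin()]])
    if not anchors:
        return feats.new_zeros(())
    criterion = nn.TripletMarginLoss(margin=0.3)
    return criterion(torch.stack(anchors), torch.stack(positives), torch.stack(negatives))

Because the triplets are regenerated from each mini-batch, they track the steadily improving representation during training, which is the point the abstract makes about dynamic online triplet generation.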


Link to publisher version (DOI)

http://dx.doi.org/10.1109/ICCV.2019.00817