Year

2019

Degree Name

Doctor of Philosophy

Department

School of Computing and Information Technology

Abstract

Machine learning algorithms usually require a huge number of training examples to learn a new model from scratch, and they often fail to apply the learned model to test data acquired from scenarios different from those of the training data, mainly due to domain divergence and task divergence. Transfer learning tries to use previously available data, models or knowledge effectively for a new domain or task with scarce data. This thesis focuses on addressing cross-domain visual recognition using transfer learning.

First, a comprehensive literature review of transfer learning methods for cross-dataset visual recognition is presented from a problem-oriented perspective. Second, although there has been extensive research on unsupervised domain adaptation, the performance on the target domain is still far from comparable to that achieved without distribution shift. Hence, a novel feature transformation-based method for unsupervised domain adaptation is proposed that takes both geometrical and statistical shift into consideration, and its performance is improved compared to previous methods. Third, a novel classifier-based unsupervised domain adaptation method is proposed by presenting a new perspective: unsupervised domain adaptation can be formulated as a multi-task learning problem. This formulation removes the shared-classifier assumption commonly used in previous methods and instead uses unshared classifiers for the source and target domains to exploit more domain-specific features. Fourth, an importance-weighted adversarial nets-based method for partial domain adaptation is proposed, where the target domain has fewer classes than the source domain. Unlike previous domain adaptation methods, which generally assume identical label spaces, a more realistic scenario is considered that requires adaptation from a larger and more diverse source domain to a smaller target domain with fewer classes. Last, a new research topic called multi-source domain expansion (MSDE) is introduced, which expands the source domains to include the new domain, such that the learned model is capable of performing well on both the new target domain and the old source domains. MSDE differs from traditional domain adaptation, whose target domain is defined only as the new domain excluding the source domains. MSDE also differs from multi-task learning, lifelong learning, and incremental learning, all of which assume no domain shift among different tasks.
Specifically, a novel method is proposed for unsupervised MSDE without source data.
