Incomplete-Data Oriented Multiview Dimension Reduction via Sparse Low-Rank Representation

RIS ID

127645

Publication Details

Yang, W., Shi, Y., Gao, Y., Wang, L. & Yang, M. (2018). Incomplete-Data Oriented Multiview Dimension Reduction via Sparse Low-Rank Representation. IEEE Transactions on Neural Networks and Learning Systems, Online First, 1-16.

Abstract

For dimension reduction on multiview data, most previous studies implicitly assume that all samples are complete in all views. Nevertheless, this assumption is often violated in real applications due to the presence of noise, limited access to data, equipment malfunction, and so on. Most previous methods cease to work when missing values occur in one or more views, so incomplete-data-oriented dimension reduction becomes an important issue. To this end, we mathematically formulate the above-mentioned issue as sparse low-rank representation through multiview subspace (SRRS) learning to impute missing values, by jointly measuring intra-view relations (via sparse low-rank representation) and inter-view relations (through common subspace representation). Moreover, by exploiting various subspace priors in the proposed SRRS formulation, we develop three novel dimension reduction methods for incomplete multiview data: 1) multiview subspace learning via graph embedding; 2) multiview subspace learning via structured sparsity; and 3) sparse multiview feature selection via rank minimization. For each of them, the objective function and the algorithm to solve the resulting optimization problem are elaborated, respectively. We perform extensive experiments to investigate their performance on three types of tasks: data recovery, clustering, and classification. Two toy examples (i.e., Swiss roll and S-curve) and four real-world data sets (i.e., face images, multisource news, multicamera activity, and multimodality neuroimaging data) are systematically tested. As demonstrated, our methods outperform state-of-the-art comparable methods. Also, the results clearly show the advantage of integrating sparsity and low-rankness over using either of them separately.
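To make the "sparse low-rank representation" ingredient concrete, the sketch below illustrates the two standard proximal building blocks such formulations rely on: entrywise soft thresholding (the proximal map of the l1 norm, promoting sparsity) and singular value thresholding (the proximal map of the nuclear norm, promoting low rank), alternated with a gradient step on a self-representation fit term X ≈ XZ. This is a generic, heuristic illustration under assumed regularization weights, not the paper's actual SRRS solver or its multiview objective.

```python
import numpy as np

def soft_threshold(M, tau):
    # Proximal operator of tau * ||M||_1: shrinks each entry toward zero,
    # zeroing out entries with magnitude below tau (sparsity prior).
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_threshold(M, tau):
    # Singular value thresholding: proximal operator of tau * ||M||_*,
    # shrinks singular values, driving small ones to zero (low-rank prior).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Toy single-view self-representation: find Z with X ≈ X Z, Z sparse and
# low-rank, by alternating a gradient step on 0.5 * ||X - X Z||_F^2 with the
# two proximal maps (illustrative proximal-gradient heuristic; the weights
# 0.01 are arbitrary assumptions).
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 30))
Z = np.zeros((30, 30))
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the fit term
for _ in range(100):
    Z = Z - step * (X.T @ (X @ Z - X))   # gradient step on the fit term
    Z = soft_threshold(Z, 0.01)          # sparsity prox
    Z = svd_threshold(Z, 0.01)           # low-rank prox
```

After a few iterations the reconstruction residual ||X - XZ||_F drops well below ||X||_F while Z stays both sparse and of reduced rank, which is the joint structure the SRRS formulation exploits when imputing missing entries.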

Please refer to the publisher version or contact your library.


Link to publisher version (DOI)

http://dx.doi.org/10.1109/TNNLS.2018.2828699