ASMFS: Adaptive-similarity-based multi-modality feature selection for classification of Alzheimer's disease
journal contribution
posted on 2024-11-17, 16:47, authored by Yuang Shi, Chen Zu, Mei Hong, Luping Zhou, Lei Wang, Xi Wu, Jiliu Zhou, Daoqiang Zhang, Yan Wang
Multi-modality classification methods, which exploit complementary information from different modalities, have great advantages over traditional single-modality-based ones for the diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). With the increasing amount of high-dimensional heterogeneous data to be processed, multi-modality feature selection has become a crucial research direction for AD classification. However, traditional methods usually depict the data structure with a pre-defined similarity matrix fixed a priori, which makes it difficult to precisely measure the intrinsic relationships across different modalities in high-dimensional space. In this paper, we propose a novel multi-modality feature selection method, called Adaptive-Similarity-based Multi-modality Feature Selection (ASMFS), which performs adaptive similarity learning and feature selection simultaneously. Specifically, a similarity matrix is learned by jointly considering the different modalities, and at the same time efficient feature selection is conducted by imposing a group-sparsity-inducing l2,1-norm constraint. Evaluated on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database with baseline MRI and FDG-PET imaging data collected from 51 AD patients, 43 MCI converters (MCI-C), 56 MCI non-converters (MCI-NC) and 52 normal controls (NC), we demonstrate the effectiveness and superiority of our proposed method over other state-of-the-art approaches for multi-modality classification of AD/MCI.
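To make the group-sparsity idea concrete, the following is a minimal sketch of the l2,1-norm feature-selection component only, written as an iteratively reweighted least-squares scheme over modalities that share the same feature dimension (e.g. ROI-based MRI and FDG-PET measures). It deliberately omits the adaptive similarity-learning term of ASMFS, and the function names, the simplified least-squares objective, and all parameter values are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def l21_norm(W):
    """Sum of the l2 norms of the rows of W; small row norms mean the
    corresponding feature is jointly discarded across all modalities."""
    return np.sum(np.linalg.norm(W, axis=1))

def group_sparse_feature_selection(X_list, y, lam=1.0, n_iter=100, eps=1e-8):
    """Sketch (not the authors' exact model): solve
        min_W  sum_m ||X_m w_m - y||^2 + lam * ||W||_{2,1}
    where W = [w_1, ..., w_M] is d x M and each X_m is an n x d
    modality matrix, via iteratively reweighted least squares.
    """
    d = X_list[0].shape[1]
    M = len(X_list)
    W = np.ones((d, M))
    for _ in range(n_iter):
        # Reweighting diagonal built from the current row norms of W.
        row_norms = np.linalg.norm(W, axis=1) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
        for m, X in enumerate(X_list):
            # Closed-form update per modality: (X^T X + lam * D) w = X^T y.
            W[:, m] = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return W

# Toy usage with synthetic data: two modalities, 30 shared features, 40 subjects.
rng = np.random.default_rng(0)
X_mri = rng.standard_normal((40, 30))
X_pet = rng.standard_normal((40, 30))
y = rng.integers(0, 2, size=40).astype(float) * 2 - 1   # labels in {-1, +1}
W = group_sparse_feature_selection([X_mri, X_pet], y, lam=5.0)
scores = np.linalg.norm(W, axis=1)          # joint importance of each feature
selected = np.argsort(scores)[::-1][:10]    # keep the 10 highest-scoring features
```

Because the l2,1 penalty is applied to rows of W, a feature is either retained or suppressed for both modalities at once, which is what yields a common set of selected brain regions across MRI and FDG-PET.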
Funding
Sichuan Province Science and Technology Support Program (2020YFG0079)