University of Wollongong

Absent multiple kernel learning

Conference contribution, posted on 2024-11-13, 21:30, authored by Xinwang Liu, Lei Wang, Jianping Yin, Yong Dou, Jian Zhang
Multiple kernel learning (MKL) optimally combines the multiple channels of each sample to improve classification performance. However, existing MKL algorithms cannot effectively handle the situation where some channels are missing, which is common in practical applications. This paper proposes an absent MKL (AMKL) algorithm to address this issue. Unlike existing approaches, in which missing channels are first imputed and a standard MKL algorithm is then deployed on the imputed data, our algorithm directly classifies each sample using its observed channels. Specifically, we define a margin for each sample in its own relevant space, which corresponds to the observed channels of that sample. The proposed AMKL algorithm then maximizes the minimum of all sample-based margins, which leads to a difficult optimization problem. We show that this problem can be reformulated as a convex one by applying the representer theorem, so it can be readily solved with existing convex optimization packages. Extensive experiments are conducted on five MKL benchmark data sets to compare the proposed algorithm with existing imputation-based methods. As observed, our algorithm achieves superior performance, and the improvement becomes more significant as the missing ratio increases.
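To make the sample-based-margin idea concrete, the following is a minimal illustrative sketch (not the authors' code) of computing a margin for each sample using only its observed channels, given precomputed base kernel matrices. The names (base_kernels, mask, gamma, alpha) and the masking convention are assumptions for illustration; the paper's actual AMKL formulation maximizes the minimum of such margins and solves the resulting problem as a convex program.

import numpy as np

# Illustrative sketch only: per-sample margins computed from each sample's
# observed channels (its "relevant space"), as described in the abstract.
# Variable names and the masking convention are assumptions, not the
# authors' implementation.
def sample_margins(base_kernels, mask, gamma, alpha, y):
    """
    base_kernels : (M, n, n) array, one precomputed kernel matrix per channel.
    mask         : (n, M) boolean array; mask[i, m] is True if channel m of
                   sample i is observed.
    gamma        : (M,) nonnegative kernel combination weights.
    alpha        : (n,) dual coefficients of the decision function.
    y            : (n,) labels in {-1, +1}.
    Returns y_i * f(x_i) for every sample i, where f(x_i) is evaluated only
    over the channels observed for sample i.
    """
    M, n, _ = base_kernels.shape
    margins = np.zeros(n)
    for i in range(n):
        f_i = 0.0
        for m in range(M):
            if mask[i, m]:
                # A channel contributes only through training samples that
                # also observe it, so only observed kernel entries are used.
                obs = mask[:, m]
                f_i += gamma[m] * np.dot(alpha[obs] * y[obs],
                                         base_kernels[m][obs, i])
        margins[i] = y[i] * f_i
    return margins

Under this sketch, the AMKL objective described in the abstract would correspond to choosing gamma and alpha so that margins.min() is maximized, subject to the usual MKL constraints.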

History

Citation: Liu, X., Wang, L., Yin, J., Dou, Y. & Zhang, J. (2015). Absent multiple kernel learning. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (pp. 2807-2813). United States: AAAI Press.

Parent title: Proceedings of the National Conference on Artificial Intelligence
Volume: 4
Pagination: 2807-2813
Language: English
RIS ID: 97196
