University of Wollongong

Sample-adaptive multiple kernel learning

Conference contribution
Posted on 2024-11-13, 21:17. Authored by Xinwang Liu, Lei Wang, Jian Zhang, Jianping Yin
Existing multiple kernel learning (MKL) algorithms indiscriminately apply the same set of kernel combination weights to all samples. However, the utility of base kernels can vary across samples, and a base kernel that is useful for one sample may be noisy for another. In this case, rigidly applying the same set of kernel combination weights can adversely affect learning performance. To improve this situation, we propose a sample-adaptive MKL algorithm in which base kernels are adaptively switched on or off with respect to each sample. We achieve this by assigning a latent binary variable to each base kernel when it is applied to a sample. The kernel combination weights and the latent variables are jointly optimized via the margin maximization principle. As demonstrated on five benchmark data sets, the proposed algorithm consistently outperforms comparable algorithms in the literature.
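The sample-adaptive combination the abstract describes can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes the combined kernel takes the form K(x_i, x_j) = Σ_m μ_m · s_{i,m} · s_{j,m} · K_m(x_i, x_j), where μ_m are the shared combination weights and s_{i,m} ∈ {0, 1} is the latent switch for base kernel m at sample i. The toy data, RBF bandwidths, and fixed random switches are all hypothetical; in the paper the switches and weights are jointly optimized, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 samples, 2 features (hypothetical, for illustration only)
X = rng.normal(size=(6, 2))

def rbf_kernel(X, gamma):
    # Base kernel K_m(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

# Three base kernels with different (assumed) bandwidths
base_kernels = np.stack([rbf_kernel(X, g) for g in (0.1, 1.0, 10.0)])  # (M, n, n)
M, n, _ = base_kernels.shape

# Kernel combination weights mu, shared across samples as in standard MKL
mu = np.array([0.5, 0.3, 0.2])

# Latent binary switches s[i, m]: base kernel m is on (1) or off (0) for sample i.
# The paper optimizes these jointly with mu; here they are fixed at random.
s = rng.integers(0, 2, size=(n, M))
s[:, 0] = 1  # keep at least one base kernel active for every sample

# Sample-adaptive combined kernel:
# K[i, j] = sum_m mu[m] * s[i, m] * s[j, m] * base_kernels[m, i, j]
K = np.einsum('m,im,jm,mij->ij', mu, s, s, base_kernels)

# Standard (non-adaptive) MKL combination, for comparison
K_std = np.einsum('m,mij->ij', mu, base_kernels)
```

Because each term μ_m · (s_m s_m^T ∘ K_m) is a Schur product of positive semidefinite matrices, the sample-adaptive kernel K remains a valid (PSD) kernel, so it can be plugged into a standard SVM solver once the switches are chosen.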

History

Citation

Liu, X., Wang, L., Zhang, J. & Yin, J. (2014). Sample-adaptive multiple kernel learning. Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI-14) (pp. 1975-1981). Palo Alto, California: AAAI Press.

Parent title

Proceedings of the National Conference on Artificial Intelligence

Volume

3

Pagination

1975-1981

Language

English

RIS ID

90606
