
DFN: A deep fusion network for flexible single and multi-modal action recognition

journal contribution
posted on 2024-11-17, 16:26, authored by Chuankun Li, Yonghong Hou, Wanqing Li, Zewei Ding, Pichao Wang
Multi-modal action recognition methods generally fall into two categories: (1) fusing multi-modal features by simple concatenation, or fusing the classification scores of the individual modalities, without modelling the interaction among the modalities; and (2) using one modality as privileged information during training to boost recognition on the other modalities at inference. The former approach usually cannot handle cases where one modality is missing; in the latter, the trained classifier does not work on the privileged modality. To address these shortcomings, this paper presents a novel end-to-end trainable deep fusion network (DFN) that improves performance not only when all modalities are available but also when one modality is missing. The DFN is simple yet effective: it can recover an estimate of one modality from another through a Multilayer Perceptron (MLP). To better preserve structural information, the DFN first maps the individual modality features to a high-dimensional Kronecker-product space and then learns a low-dimensional discriminative space for classification. The effectiveness of the proposed DFN has been verified on three benchmark datasets, the large NTU RGB+D, UTD-MHAD, and SYSU-3D, on which it achieves state-of-the-art results.
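As a concrete illustration of the two mechanisms the abstract describes, the sketch below shows, in PyTorch, an MLP that estimates a missing modality's feature from the available one, followed by a Kronecker-product (outer-product) fusion and a learned low-dimensional projection for classification. All class names, layer sizes, and design details here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DFNSketch(nn.Module):
    """Minimal sketch of the fusion scheme described in the abstract.

    Layer sizes and module names are assumptions for illustration only;
    they are not taken from the paper.
    """

    def __init__(self, dim_a, dim_b, hidden, num_classes):
        super().__init__()
        # MLP that estimates modality B's feature from modality A's,
        # used when modality B is missing at inference time.
        self.estimate_b = nn.Sequential(
            nn.Linear(dim_a, hidden), nn.ReLU(), nn.Linear(hidden, dim_b)
        )
        # Projection from the high-dimensional Kronecker-product space
        # down to a low-dimensional discriminative space.
        self.project = nn.Linear(dim_a * dim_b, hidden)
        self.classify = nn.Linear(hidden, num_classes)

    def forward(self, feat_a, feat_b=None):
        if feat_b is None:
            # Missing-modality case: recover an estimate of B from A.
            feat_b = self.estimate_b(feat_a)
        # Batched outer product of the two feature vectors, flattened:
        # this retains pairwise cross-modal structure before projection.
        fused = torch.einsum('bi,bj->bij', feat_a, feat_b).flatten(1)
        return self.classify(torch.relu(self.project(fused)))
```

Under this sketch, inference with only one stream available (say, skeleton features) would call `model(skeleton_feat)` with the second argument omitted, so the MLP supplies an estimate of the missing stream before fusion.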

Funding

National Natural Science Foundation of China (20210302124031)

History

Journal title

Expert Systems with Applications

Volume

245

Language

English
