FVW: Finding Valuable Weight on Deep Neural Network for Model Pruning

Publication Name

International Conference on Information and Knowledge Management, Proceedings

Abstract

The rapid development of deep learning has demonstrated its potential for deployment in many intelligent service systems. However, issues such as optimisation (e.g., how to reduce deployment resource costs and further improve detection speed), especially in scenarios where limited resources are available, remain challenging to address. In this paper, we delve into the principles of deep neural networks, focusing on the importance of network neurons. The goal is to identify the neurons that exert minimal impact on model performance, thereby aiding the process of model pruning. In this work, we thoroughly consider the deep learning model pruning process both with and without a fine-tuning step, ensuring consistent model performance. To achieve our objectives, we propose a methodology that employs adversarial attack methods to explore deep neural network parameters. This approach is combined with an innovative attribution algorithm to analyse the level of network neuron involvement. In our experiments, our approach effectively quantifies the importance of network neurons. We extend the evaluation through comprehensive experiments conducted on a range of datasets, including CIFAR-10, CIFAR-100 and Caltech101. The results demonstrate that our method consistently achieves state-of-the-art performance over many existing methods. We anticipate that this work will help reduce the heavy training and inference costs of deep neural network models, making lightweight deep-learning-enhanced services and systems possible. The source code is available at https://github.com/LMBTough/FVW.
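This page gives only a high-level description of FVW, so the following is a minimal sketch of the general recipe it outlines rather than the authors' algorithm: it probes the network with a one-step FGSM perturbation (an assumed choice of adversarial attack), scores each parameter with a gradient-times-weight attribution (an assumed choice of attribution), and zeroes out the globally lowest-scoring fraction. All function names, the pruning ratio, and the toy model are illustrative, not the paper's API.

import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_examples(model, x, y, eps=8 / 255):
    """One-step FGSM perturbation used to probe the loss surface (assumption)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def weight_importance(model, x, y):
    """Score every parameter by |w * dL/dw| evaluated on adversarial inputs."""
    x_adv = adversarial_examples(model, x, y)
    model.zero_grad()  # discard gradients accumulated while crafting x_adv
    F.cross_entropy(model(x_adv), y).backward()
    return {name: (p * p.grad).abs().detach()
            for name, p in model.named_parameters() if p.grad is not None}

def prune_by_importance(model, scores, ratio=0.3):
    """Zero out the globally lowest-scoring fraction of parameters."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    threshold = torch.quantile(flat, ratio)
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in scores:
                p.mul_((scores[name] > threshold).float())

# Toy usage on random data; in practice this would run over a CIFAR-10 loader.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                      nn.ReLU(), nn.Linear(128, 10))
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
prune_by_importance(model, weight_importance(model, x, y))

Pruning without fine-tuning, as the abstract mentions, would amount to stopping here; the fine-tuned variant would continue training the masked model for a few epochs.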

Open Access Status

This publication is not available as open access

First Page

3657

Last Page

3666


Link to publisher version (DOI)

http://dx.doi.org/10.1145/3583780.3614889