Title

A neural network compression method based on knowledge-distillation and parameter quantization for the bearing fault diagnosis

Publication Name

Applied Soft Computing

Abstract

Condition monitoring and fault diagnosis are critical for the optimal scheduling of machines, improving system reliability and reducing maintenance costs. In recent years, various deep learning based methods have made great progress in the field of mechanical fault diagnosis. However, there is a conflict between the massive parameter counts of fault diagnosis networks and the limited computing resources of embedded platforms. It is difficult to deploy a trained network on small-scale embedded platforms, such as field programmable gate arrays (FPGAs), in actual industrial settings, which seriously hinders the practical adoption of intelligent fault diagnosis methods. To address this problem, a new neural network compression method based on knowledge distillation (K-D) and parameter quantization is proposed in this paper. In the proposed method, a large-scale deep neural network with multiple convolutional layers and fully connected layers is designed and trained as the teacher network. Then a small-scale network with just one convolutional layer and one fully connected layer is designed as the student network. When training the student network, the K-D process is conducted to improve its accuracy. After training, parameter quantization is applied to further compress the student network. Experimental results on an FPGA demonstrate the effectiveness of the proposed method, showing that it can compress fault diagnosis networks by more than 10 times at the cost of only a minimal loss of accuracy.
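The two compression steps named in the abstract can be illustrated in a minimal NumPy sketch. This is not the paper's implementation: the Hinton-style distillation loss (hard-label cross-entropy plus a temperature-softened term matching the teacher's outputs) and the symmetric int8 weight quantization below are standard formulations, and all function names and hyperparameters (`T`, `alpha`) are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; T > 1 softens the distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and a soft term that
    pushes the student toward the teacher's softened outputs.
    The T**2 factor keeps soft-term gradients on a comparable scale."""
    p_teacher = softmax(teacher_logits, T)                 # soft targets
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * (T ** 2)
    p_hard = softmax(student_logits)                       # T = 1 for hard term
    hard = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * hard + (1.0 - alpha) * soft

def quantize_int8(w):
    """Symmetric linear quantization of a float weight array to int8,
    as a stand-in for the paper's parameter quantization step."""
    scale = max(np.abs(w).max() / 127.0, 1e-12)
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale  # dequantize with q * scale
```

At inference time only the int8 weights and one float scale per tensor need to be stored, which is where most of the claimed model-size reduction would come from.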

Open Access Status

This publication is not available as open access

Volume

127

Article Number

109331

Funding Number

51875138

Funding Sponsor

National Natural Science Foundation of China

Link to publisher version (DOI)

http://dx.doi.org/10.1016/j.asoc.2022.109331