Feed-forward neural network training using sparse representation

RIS ID

130493

Publication Details

Yang, J. & Ma, J. (2019). Feed-forward neural network training using sparse representation. Expert Systems with Applications, 116, 255-264.

Abstract

The feed-forward neural network (FNN) has drawn great interest in many applications due to its universal approximation capability. In this paper, a novel algorithm for training FNNs is proposed using the concept of sparse representation. The major advantage of the proposed algorithm is that it trains the initial network and optimizes the network structure simultaneously. The proposed algorithm consists of two core stages: structure optimization and weight update. In the structure optimization stage, the sparse representation technique is employed to select the important hidden neurons that minimize the residual output error. In the weight update stage, a dictionary-learning-based method is implemented to update the network weights by maximizing the diversity of the hidden neurons' outputs. This weight-updating process is designed to improve the performance of the structure optimization. We compare the proposed algorithm with state-of-the-art methods on several benchmark classification and regression problems. Simulation results show that the proposed algorithm achieves competitive performance in terms of final network size and generalization ability.
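To make the structure-optimization stage concrete, the sketch below illustrates one plausible reading of "selecting important hidden neurons that minimize the residual output error": a greedy, OMP-style selection over the columns of the hidden-layer output matrix. This is a minimal illustration under stated assumptions, not the paper's actual algorithm; the names H, T, select_hidden_neurons, and n_select are hypothetical.

```python
import numpy as np

def select_hidden_neurons(H, T, n_select):
    """Greedily pick columns of H (one per hidden neuron) that best
    reduce the residual error against the target matrix T.

    Hypothetical sketch in the spirit of orthogonal matching pursuit;
    the paper's selection criterion may differ.
    H: (n_samples, n_hidden) hidden-layer outputs.
    T: (n_samples, n_outputs) training targets.
    """
    selected = []
    residual = T.copy()
    W = None
    for _ in range(n_select):
        # Score each neuron by its outputs' correlation with the residual.
        scores = np.abs(H.T @ residual).sum(axis=1)
        scores[selected] = -np.inf  # do not re-pick selected neurons
        selected.append(int(np.argmax(scores)))
        # Least-squares output weights over the selected neurons, then
        # recompute the residual (the OMP-style projection step).
        Hs = H[:, selected]
        W, *_ = np.linalg.lstsq(Hs, T, rcond=None)
        residual = T - Hs @ W
    return selected, W

# Toy usage: 200 samples, 50 candidate hidden neurons, 3 outputs.
rng = np.random.default_rng(0)
H = np.tanh(rng.standard_normal((200, 50)))
T = rng.standard_normal((200, 3))
neurons, weights = select_hidden_neurons(H, T, n_select=10)
```

Under this reading, the abstract's weight-update stage would then adjust the hidden-layer weights so that the retained columns of H become more diverse (less correlated), which in turn makes the greedy selection above more effective on the next pass.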



Link to publisher version (DOI)

http://dx.doi.org/10.1016/j.eswa.2018.08.038