A new exploratory neural network training method

RIS ID

15472

Publication Details

S. Ahmed, J. Cross & A. Bouzerdoum, "A new exploratory neural network training method," in Artificial Neural Networks in Engineering Conference, 2006, pp. 611-616.

Abstract

A new exploratory, self-adaptive, derivative-free training algorithm is developed. It relies solely on error-function evaluations: the training problem is reduced to a set of sub-problems in a constrained search space, and the search directions follow rectilinear moves. To accelerate training, an interpolation search is developed that determines the best learning rates. This constrained interpolation search selects learning rates so that the search direction is not deceived in locating the minimum trajectory of the error function.

The proposed algorithm is practical when the error function is ill-conditioned, implying that the Hessian matrix is unstable, or when derivative evaluation is difficult. The benchmark XOR problem (Nitta, 2003) is used to compare the performance of the proposed algorithm with standard back-propagation training methods. The proposed algorithm improves on the standard first-order back-propagation method by a factor of 32 in the number of function evaluations, and the standard deviation of the same metric improves by a ratio of 38:1, implying that the proposed algorithm encounters fewer intractable instances.
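The abstract gives only a high-level description, so the following is a minimal sketch of the general idea, not the authors' exact method: a derivative-free, coordinate-wise (rectilinear) search over the weights of a small 2-2-1 sigmoid network on XOR, where each step length is chosen by a quadratic-interpolation search over error-function evaluations only. All function names, the bracket width `a`, and the network size are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a derivative-free, coordinate-wise search with a
# quadratic-interpolation step-length rule, trained on the XOR benchmark.
# This is NOT the published algorithm; names and constants are assumptions.

def mse(w, X, y, shape=(2, 2)):
    """Error function: forward pass of a 2-2-1 sigmoid MLP, mean squared error."""
    n_in, n_hid = shape
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = w[n_in * n_hid + n_hid:n_in * n_hid + 2 * n_hid]
    b2 = w[-1]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return np.mean((out - y) ** 2)

def interp_step(f, w, i, a, fw):
    """Fit a parabola through f at offsets {-a, 0, +a} along coordinate i
    and return the offset at its minimum, clipped to the bracket [-a, a]."""
    e = np.zeros_like(w); e[i] = 1.0
    fm, fp = f(w - a * e), f(w + a * e)
    denom = fp - 2.0 * fw + fm
    if denom <= 1e-12:  # flat or concave fit: fall back to the better endpoint
        step = -a if fm < fp else a
    else:
        step = 0.5 * a * (fm - fp) / denom
    return float(np.clip(step, -a, a))

def train(X, y, iters=200, a=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1.0, 1.0, 9)  # 2*2 + 2 + 2 + 1 weights
    f = lambda v: mse(v, X, y)
    for _ in range(iters):
        for i in range(w.size):        # rectilinear moves: one coordinate at a time
            step = interp_step(f, w, i, a, f(w))
            trial = w.copy(); trial[i] += step
            if f(trial) < f(w):        # accept only improving moves
                w = trial
    return w, f(w)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)
w, err = train(X, y)
print(f"final XOR training error: {err:.4f}")
```

Because only error values are compared, the sketch never forms a gradient or Hessian, which is the regime the abstract targets: ill-conditioned error surfaces or settings where derivatives are hard to evaluate.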

Please refer to the publisher version or contact your library.


Link to publisher version (DOI)

http://dx.doi.org/10.1115/1.802566.paper91