Degree Name

Doctor of Philosophy


Department of Mechanical, Materials and Mechatronic Engineering - Faculty of Engineering


This thesis investigates the correlation of Hermite functions in the form of a Hermite neural network. The term "Hermite neural network" describes a Hermite series of orthonormal functions generalized to include the numerical algorithms usually associated with artificial neural networks. The aim of the investigation was to determine whether this approach offers any benefit over the traditional correlation method based on Fourier functions, so the performance of the Hermite neural network correlator was compared with that of a Fourier neural network correlator.

The main result is that the correlation of a Hermite neural network is a summation of N×N associated Laguerre functions, whereas the correlation of a Fourier neural network is a summation of only N Fourier functions. In this respect the Fourier neural network is more efficient for the general correlation of functions. An exception occurs when the Hermite neural network is correlated with a Gaussian function or with the CHIRP radar signal: for these signals the Hermite correlation is also a summation of N terms. In these applications the Hermite correlator proved superior to the Fourier correlator for the following reasons:

• It does not suffer from the circular correlation error that is characteristic of the Fourier correlator.
• It allows the Gaussian inverse correlation to be computed without the numerical instability that occurs with a Fourier correlator.
• It achieves a more compact signal interpolation for the CHIRP radar signal than is possible with a Fourier correlator.

The relatively good performance of the Hermite correlation, particularly the numerical stability of the inverse correlation, can be expected to be a useful asset in image processing, where the Gaussian function is especially important.

As a digression from the main topic of Hermite neural network correlation, the investigation also uncovered a new method for fast training of the sigmoid neural network.
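The circular correlation error mentioned above is a well-known property of FFT-based correlators, not something specific to this thesis: correlation computed through the discrete Fourier transform is inherently circular, so the signal wraps around at the edges unless the inputs are zero-padded. A minimal sketch of the effect (the signals and lengths here are arbitrary illustrations):

```python
import numpy as np

# Sketch of the well-known "circular correlation error" of FFT-based
# correlators. Correlation via the DFT is circular: without zero-padding
# to at least len(f) + len(g) - 1 samples, the tail of the signal wraps
# around and contaminates the result near the edges.

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 1.0, 0.0, 0.0])

# Circular cross-correlation (no padding) -> wrap-around error.
circular = np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g)).real

# Linear cross-correlation, obtained by zero-padding before the FFT.
n = len(f) + len(g) - 1
linear = np.fft.ifft(np.conj(np.fft.fft(f, n)) * np.fft.fft(g, n)).real

print(circular)  # lag +1 entry is corrupted by wrap-around
print(linear)    # positive lags first, negative lags at the end
```

Here the unpadded result at lag +1 equals the true lag +1 value plus the wrapped-in lag −3 value, which is exactly the error the Hermite correlator is claimed to avoid.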
The principle of the new training method is to train the sigmoid neural network to the rate of change of the unknown non-linear function rather than to the function itself. This allows the sigmoid network to be trained through an associated radial basis function neural network, with the speed inherent in that type of network. In tests the associated training method was 100 times faster than conventional training of the sigmoid neural network. The new method can therefore be expected to widen the application of the sigmoid neural network to tasks that were previously impractical because of slow training.
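The thesis does not give code for this method, but the stated principle can be illustrated with a simple stand-in: estimate the rate of change of the unknown function from its samples, fit a radial basis function model to that derivative with a single linear least-squares solve (the source of the speed advantage), and recover the function by integration. The target function, centre placement, and basis width below are illustrative assumptions, not the author's configuration:

```python
import numpy as np

# Illustrative sketch only (not the thesis implementation): fit a fast
# Gaussian-RBF model to the *rate of change* of an unknown function,
# then recover the function itself by numerical integration. Training
# the RBF model is one linear least-squares solve, which is why this
# style of training is fast compared with gradient-descent training.

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)                      # stand-in for the unknown function

# Finite-difference estimate of the target's rate of change dy/dx.
dydx = np.gradient(y, x)

# Gaussian RBF design matrix: centres spread over the input range
# (centre count and width are arbitrary choices for this sketch).
centres = np.linspace(x.min(), x.max(), 20)
width = centres[1] - centres[0]
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

# One least-squares solve "trains" the RBF model on the derivative.
w, *_ = np.linalg.lstsq(Phi, dydx, rcond=None)
dydx_model = Phi @ w

# Recover the function: trapezoid-rule integration of the modelled
# derivative, with the integration constant fixed by the first sample.
y_model = y[0] + np.concatenate(([0.0], np.cumsum(
    0.5 * (dydx_model[1:] + dydx_model[:-1]) * np.diff(x))))

print(float(np.max(np.abs(y_model - y))))  # small reconstruction error
```

The reconstruction matches the original function closely, showing that a model of the derivative plus an integration constant carries the same information as a model of the function itself.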



Unless otherwise indicated, the views expressed in this thesis are those of the author and do not necessarily represent the views of the University of Wollongong.