#### Year

2010

#### Degree Name

Doctor of Philosophy

#### Department

University of Wollongong. School of Mathematics and Applied Statistics

#### Recommended Citation

Dechpichai, Porntip, Nonlinear neural network for conditional mean and variance forecasts, Doctor of Philosophy thesis, University of Wollongong. School of Mathematics and Applied Statistics, University of Wollongong, 2010. https://ro.uow.edu.au/theses/3278

#### Abstract

Over-fitting commonly arises when neural networks are used for financial time series forecasting, where the target series are typically contaminated with noise. To avoid this problem, regularization and committee machine methods are considered.

In the context of financial applications, especially stock market prediction, the variance of asset returns is of particular interest in addition to the mean, in order to provide the information needed to balance risk and return in investment at each point in time. To be able to fit a two-dimensional mean and variance output to one-dimensional asset return data, an objective function based on the negative log likelihood of the conditional distribution of the target variable is considered as an alternative to the conventional least-squares function. The likelihood approach also enables flexibility in the choice of distribution used to model the underlying error process.
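As an illustration, a minimal sketch of such a likelihood-based objective under a conditional Gaussian assumption (function and variable names are illustrative, not the thesis's notation; the variance is parameterised on the log scale so a network can output both components without a positivity constraint):

```python
import numpy as np

def gaussian_nll(y, mean, log_var):
    """Negative log likelihood of returns y under N(mean, exp(log_var)).

    y, mean, log_var: arrays of equal length, one entry per time point.
    A network with a two-dimensional (mean, log-variance) output can be
    trained by minimising this quantity instead of least squares.
    """
    return 0.5 * np.sum(np.log(2.0 * np.pi) + log_var
                        + (y - mean) ** 2 / np.exp(log_var))
```

Replacing the Gaussian density with a Student's t density gives the long-tailed alternative discussed below; only the per-observation log-density term changes.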

In this thesis, regularization and ensemble techniques are adapted for networks with a likelihood-based objective function. As well as the conditional Gaussian error distribution typically assumed, a long-tailed t-distribution is offered as an alternative for extremely leptokurtic situations. The regularization terms are formed by likelihood contributions corresponding to simple autoregressive models for the conditional mean and log variance parameters. In the ensemble technique, the mean and variance predictions are averaged rather than the probability density function. In addition, learning algorithms are developed for an Output-to-Output recurrent structure applied to a network with a two-dimensional mean and variance output. These differ with respect to the terms and direction of unfolding the dynamic structure. The proposed methods are investigated by application to both simulated and real data.
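The ensemble step described above can be sketched as follows (a minimal illustration of the stated averaging rule; the array shapes and function name are assumptions for this sketch):

```python
import numpy as np

def ensemble_mean_variance(member_means, member_vars):
    """Combine ensemble members by averaging their mean and variance
    forecasts directly, rather than averaging predictive densities.

    member_means, member_vars: arrays of shape (n_members, n_times),
    one row of forecasts per trained network in the ensemble.
    Returns the combined (mean, variance) forecast series.
    """
    return member_means.mean(axis=0), member_vars.mean(axis=0)
```

Averaging the parameters keeps the combined forecast in the same two-dimensional (mean, variance) form as each member's output, whereas averaging densities would generally produce a mixture distribution outside the assumed family.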

The following conclusions can be drawn. Regularization is found to improve generalization ability. Although the individual networks differ only in the initial weights used for training, they disagree sufficiently for the ensemble averaging approach to be worthwhile. The error of the ensemble network is found to be smaller than the weighted average error over all ensemble members, and the resulting forecasts vary more smoothly over time. To reduce memory and time requirements, truncated versions with a bounded history approximation are applied to the recurrent networks; these prove efficient without a trade-off in effectiveness. The recurrent networks yield more accurate forecasts with less fluctuation than the non-recurrent networks.

Ensemble techniques with the proposed neural networks are successfully applied to real financial time series data. With a Gaussian error distribution assumed, the network with regularization outperforms, and produces smoother outputs than, the networks without regularization. The non-recurrent network with regularization has a simpler structure but is reasonably effective compared to the recurrent network without regularization. In the case of extremely leptokurtic data, the network with a t-error distribution is superior to the network with a Gaussian error distribution.