Year

2023

Degree Name

Doctor of Philosophy

Department

Institute for Superconducting and Electronic Materials

Abstract

Time series forecasting, which capitalizes on historical trends and patterns to anticipate future events, is especially significant in engineering. Two critical components in this process are data preprocessing and the selection of suitable machine learning models. Given that raw data frequently presents issues such as missing values, outliers, or noise, data preprocessing is an indispensable step to refine this data prior to its application in machine learning models. Moreover, the choice of a machine learning model is crucial as various models are optimized for different types of data. Therefore, designing an appropriate architecture that aligns with the specific characteristics of the data is fundamental for effective time series forecasting.

In our research, we have employed time series forecasting for various applications, including estimating the state-of-charge (SOC) and state-of-health (SOH) of batteries, forecasting strata pressure in underground coal mines, predicting mixing and segregation behaviors in solid-liquid fluidized beds, and assessing air quality. For data preprocessing, we utilized the exponential smoothing (ES) method to diminish noise in the data. To augment the original features, we employed the sliding window (SW) technique and polynomial regression (PR). Feature selection was conducted using the Pearson correlation coefficient (PCC) and Shapley additive explanations (SHAP), while principal component analysis (PCA) was used for dimensionality reduction by projecting the features into a lower-dimensional space.
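
To make these preprocessing steps concrete, the following is a minimal sketch in Python of one plausible pipeline: exponential smoothing, sliding-window construction, polynomial trend features, and PCA. The window size, smoothing factor, and component count are illustrative assumptions, not the settings used in the thesis; the PCC- and SHAP-based feature selection steps are omitted for brevity.

# Illustrative preprocessing pipeline; parameter values are assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def preprocess(series: pd.Series, window: int = 10, alpha: float = 0.3,
               n_components: int = 3) -> np.ndarray:
    # Exponential smoothing (ES) suppresses noise in the raw series.
    smoothed = series.ewm(alpha=alpha).mean()
    # Sliding window (SW) turns the series into overlapping input vectors.
    windows = np.lib.stride_tricks.sliding_window_view(smoothed.to_numpy(), window)
    # Polynomial regression (PR) per window adds trend coefficients as features.
    t = np.arange(window)
    poly_feats = np.stack([np.polyfit(t, w, deg=2) for w in windows])
    features = np.hstack([windows, poly_feats])
    # PCA projects the augmented features into a lower-dimensional space.
    return PCA(n_components=n_components).fit_transform(features)

X = preprocess(pd.Series(np.sin(np.linspace(0, 20, 500))))
print(X.shape)  # (491, 3): 491 windows, 3 principal components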

Regarding machine learning models, we proposed a variety of approaches to cater to different data types, including a many-to-many long short-term memory (LSTM) model, a time-series Wasserstein generative adversarial network (TS-WGAN), a convolutional neural network (CNN)-Transformer, a CNN-LSTM, and a backpropagation neural network (BPNN)-Transformer.
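
As an illustration of the first of these architectures, the sketch below shows a many-to-many LSTM in PyTorch that emits one prediction per timestep (for instance, an SOC estimate at every step of a discharge sequence). The layer sizes and feature count are illustrative assumptions, not the configurations reported in the thesis.

# Illustrative many-to-many LSTM; sizes are assumptions.
import torch
import torch.nn as nn

class ManyToManyLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, n_outputs: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)  # one prediction per timestep

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)   # out: (batch, time, hidden)
        return self.head(out)   # (batch, time, n_outputs)

model = ManyToManyLSTM(n_features=13)
y = model(torch.randn(8, 100, 13))  # batch of 8 sequences, 100 steps each
print(y.shape)  # torch.Size([8, 100, 1])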

Our evaluations revealed that our time series forecasting methodology performs these tasks with high accuracy and robustness. Activation maps indicated that the models excel at extracting both local and global information from the data.

For future studies, to better address long-term and multi-step forecasting tasks, our focus should shift toward Transformer-based models. These models eschew the recurrent loops of recurrent neural networks in favor of position encoding, an encoder-decoder structure, and a self-attention mechanism, making them highly effective for time series problems. However, adopting Transformer-based models increases system complexity. Therefore, future efforts should concentrate on reducing computational demands and storage requirements to make these models more feasible for real-world applications.
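
Of the Transformer components mentioned above, position encoding is the piece that replaces recurrence as the model's sense of temporal order. The sketch below implements the standard sinusoidal encoding of Vaswani et al. (2017) in PyTorch; the sequence length and model dimension are illustrative assumptions.

# Illustrative sinusoidal position encoding; dimensions are assumptions.
import torch

def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    pos = torch.arange(seq_len).unsqueeze(1).float()   # (seq_len, 1)
    i = torch.arange(0, d_model, 2).float()            # even dimension indices
    angle = pos / torch.pow(10000.0, i / d_model)      # (seq_len, d_model/2)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angle)  # sines on even dimensions
    pe[:, 1::2] = torch.cos(angle)  # cosines on odd dimensions
    return pe  # added to the input embeddings before self-attention

pe = positional_encoding(seq_len=100, d_model=64)
print(pe.shape)  # torch.Size([100, 64])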

FoR codes (2008)

0801 ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING

