Oversampling techniques for imbalanced data in regression

Journal contribution
Posted on 2024-11-17, authored by Samir Brahim Belhaouari, Ashhadul Islam, Khelil Kassoul, Ala Al-Fuqaha, Abdesselam Bouzerdoum
Our study addresses the challenge of imbalanced regression data in Machine Learning (ML) by introducing methods tailored to different data structures. We adapt K-Nearest Neighbor Oversampling-Regression (KNNOR-Reg), originally developed for imbalanced classification, to imbalanced regression on low-population datasets, and extend it to KNNOR-Deep Regression (KNNOR-DeepReg) for high-population datasets. For tabular data, we also present the Auto-Inflater neural network, which uses an exponential loss function for Autoencoders. For image datasets, we employ Multi-Level Autoencoders consisting of Convolutional and Fully Connected Autoencoders. On such high-dimensional data, our approach outperforms the Synthetic Minority Oversampling Technique for Regression (SMOTER) algorithm on the IMDB-WIKI and AgeDB image datasets. For tabular data, we conducted a comprehensive experiment in which various models were trained on both augmented and non-augmented datasets and then compared on test data. The outcomes revealed a positive impact of data augmentation, with a success rate of 83.75% for the Light Gradient Boosting Method (LightGBM) and 71.57% for the 18 other regressors employed in the study, where the success rate is the frequency of instances in which a model performed better with augmented data than without augmentation. The comparative code is available on GitHub.
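As a rough illustration of the KNN-based oversampling idea behind KNNOR-Reg, the sketch below generates synthetic samples for a rare target region by interpolating a rare point with one of its k nearest rare neighbors, interpolating the continuous target in the same way. This is a minimal, assumption-laden sketch rather than the authors' exact algorithm: the rarity mask, the number of neighbors, and the oversample_rare helper are all hypothetical choices made here for illustration.

```python
# Minimal sketch of KNN-based oversampling for imbalanced regression.
# NOT the exact KNNOR-Reg algorithm; thresholds and helper names are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def oversample_rare(X, y, rare_mask, k=5, n_new=100, rng=None):
    """Interpolate rare samples with their nearest rare neighbors,
    interpolating the continuous target alongside the features."""
    rng = np.random.default_rng(rng)
    X_rare, y_rare = X[rare_mask], y[rare_mask]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_rare)
    _, idx = nn.kneighbors(X_rare)              # idx[:, 0] is the point itself
    new_X, new_y = [], []
    for _ in range(n_new):
        i = rng.integers(len(X_rare))           # pick a rare point at random
        j = idx[i, rng.integers(1, k + 1)]      # pick one of its k neighbors
        alpha = rng.random()                    # interpolation factor in [0, 1)
        new_X.append(X_rare[i] + alpha * (X_rare[j] - X_rare[i]))
        new_y.append(y_rare[i] + alpha * (y_rare[j] - y_rare[i]))
    return np.vstack([X, new_X]), np.concatenate([y, new_y])

# Example usage, treating targets above the 90th percentile as the rare region
# (an arbitrary choice for this sketch):
# X_aug, y_aug = oversample_rare(X, y, rare_mask=(y > np.quantile(y, 0.9)), k=5, n_new=200)
```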

History

Journal title

Expert Systems with Applications

Volume

252

Language

English
