Data-driven audiogram classifier using data normalization and multi-stage feature selection
Audiograms show a person's hearing capability at different frequencies. The filter bank in a hearing aid is designed to match the shape of the patient's audiogram, and the hearing aid is configured by adjusting the gains of the designed filters accordingly. Several problems hinder this objective: audiologists are in short supply, filter bank hearing aid designs are complex, and the hearing aid fitting process is tiring. In this work, a Machine Learning (ML) solution based on unsupervised spectral clustering is introduced to classify audiograms according to their shapes. The features used to build the ML model are distinctive and describe the audiograms more faithfully. Different normalization methods are applied and evaluated statistically to improve the training data set. The proposed ML algorithm outperforms existing models, achieving higher accuracy, precision, recall, specificity, and F-score values. This improved performance stems from the use of multi-stage feature selection to describe the audiograms precisely. This work introduces a novel ML technique for classifying audiograms according to shape, which can be integrated into future and existing studies to change current practices in audiogram classification.
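The pipeline summarized above, normalizing audiogram threshold vectors and then grouping them by shape with unsupervised spectral clustering, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the synthetic audiograms, the min-max normalization choice, and the clustering parameters are all assumptions made for the example.

```python
# Illustrative sketch (assumed setup, not the paper's dataset or configuration):
# cluster synthetic audiogram threshold vectors by shape using normalization
# followed by unsupervised spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import minmax_scale

rng = np.random.default_rng(0)
freqs = np.array([250, 500, 1000, 2000, 4000, 8000])  # test frequencies (Hz)

# Two hypothetical audiogram shapes: flat loss and high-frequency sloping loss.
flat = 40 + rng.normal(0, 3, size=(20, freqs.size))
sloping = np.linspace(10, 70, freqs.size) + rng.normal(0, 3, size=(20, freqs.size))
X = np.vstack([flat, sloping])  # 40 audiograms, thresholds in dB HL

# Per-frequency min-max normalization (one of several possible schemes).
X_norm = minmax_scale(X)

# Spectral clustering on a nearest-neighbor similarity graph of the
# normalized audiograms; n_clusters matches the two synthetic shapes.
labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(X_norm)
```

In practice the number of clusters and the affinity graph would be chosen to reflect the standard clinical audiogram shape categories rather than the two toy shapes used here.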