A Hybrid Machine Learning Approach for Sentiment Analysis of Partially Occluded Faces
2021 IEEE International Conference on Artificial Intelligence and Computer Applications, ICAICA 2021
With millions of images and videos uploaded to social media every day, facial sentiment analysis has gained significant attention as a means of gaining large-scale insights into people's emotions and sentiments. While several models have been proposed for sentiment and emotion analysis of complete, camera-facing pictures, the analysis of images appearing in natural settings and crowded scenes poses more challenges. In such settings, images typically contain a mix of complete and partially occluded (i.e., obstructed) faces presented at different angles, resolutions, and distances from the camera. In this paper, we propose a hybrid machine learning model combining convolutional neural networks (CNNs) and support vector machines (SVMs) to achieve accurate facial sentiment and emotion analysis of incomplete and partially occluded facial images. The proposed model was successfully tested on 4,690 images containing 25,400 faces, collected from a large-scale public event. The model was able to correctly classify the test dataset containing faces with different angles, camera distances, occlusion areas, and image resolutions. The results show a classification accuracy of 89.9% for facial sentiment analysis, and an accuracy of 87.4% when distinguishing between seven emotions in partially occluded faces. This makes our model suitable for real-life practical applications.
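The abstract does not specify the authors' architecture, but the general CNN+SVM hybrid pattern it names is well established: a CNN stage extracts features from the face image, and an SVM performs the final classification. The sketch below is a minimal, illustrative instance of that pattern only; the filter bank, image size, synthetic data, and helper names (`conv2d_valid`, `cnn_features`, `make_sample`) are all assumptions, not the paper's method.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive valid-mode 2-D cross-correlation (a single conv layer step)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_features(img, kernels):
    """ReLU + global max-pooling over each filter response -> feature vector."""
    feats = []
    for k in kernels:
        resp = np.maximum(conv2d_valid(img, k), 0.0)  # ReLU activation
        feats.append(resp.max())                      # global max-pooling
    return np.array(feats)

# Random 3x3 filter bank standing in for learned CNN filters (illustrative).
kernels = [rng.standard_normal((3, 3)) for _ in range(8)]

def make_sample(label):
    """Synthetic 8x8 'image': class 1 has a bright upper region,
    a crude stand-in for a visible (unoccluded) face area."""
    img = rng.standard_normal((8, 8)) * 0.1
    if label == 1:
        img[:4, :] += 1.0
    return img

labels = np.array([i % 2 for i in range(40)])
X = np.array([cnn_features(make_sample(int(y)), kernels) for y in labels])

# SVM classifies the pooled CNN features.
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In a real pipeline the convolutional filters would be learned end to end (or taken from a pretrained face model) and the SVM trained on held-out feature vectors; the point here is only the division of labor between the two stages.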