Enhancing Privacy Protection for Online Learning Resource Recommendation with Machine Unlearning
Publication Name
Proceedings of the 2024 27th International Conference on Computer Supported Cooperative Work in Design, CSCWD 2024
Abstract
Within the domain of intelligent education, also known as smart education, recommender systems driven by deep learning aim to achieve strong model performance. However, deep learning models inevitably process large volumes of private user data during training, exposing both students and educators to substantial risks of privacy breaches. Traditional approaches, such as retraining on the entire dataset, and classical privacy protection methods, such as differential privacy and homomorphic encryption, struggle to balance model performance against training time cost. This presents significant challenges for individuals and enterprises in managing privacy concerns. Machine Unlearning shows promise as an effective strategy for navigating these challenges while balancing personal and corporate interests in privacy protection. This study compares the time cost and performance of model retraining using Machine Unlearning with those of retraining using conventional approaches. The experimental results show that Machine Unlearning algorithms not only effectively protect privacy but also significantly reduce the time required for model retraining. Furthermore, the performance of models employing Machine Unlearning is essentially equivalent to that of a model retrained on the entire dataset.
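To illustrate the general idea the abstract describes (removing a user's data without retraining on the full dataset), the following is a minimal sketch of one common machine unlearning strategy, SISA-style sharded retraining. It is not the algorithm evaluated in the paper; the toy data, shard count, and logistic-regression constituent models are assumptions chosen purely for illustration.

```python
# Minimal SISA-style unlearning sketch (illustrative only, not the paper's method):
# train one model per data shard, aggregate by voting, and on a deletion request
# retrain only the shard that contained the removed record.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy interaction data standing in for learner/resource features (assumption).
X = rng.normal(size=(3000, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

N_SHARDS = 5
shard_ids = np.arange(len(X)) % N_SHARDS  # assign each sample to a shard
models = {}

def train_shard(s):
    """Train one constituent model on its shard only."""
    mask = shard_ids == s
    clf = LogisticRegression(max_iter=500)
    clf.fit(X[mask], y[mask])
    return clf

for s in range(N_SHARDS):
    models[s] = train_shard(s)

def predict(X_new):
    """Aggregate shard models by majority vote."""
    votes = np.stack([m.predict(X_new) for m in models.values()])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def unlearn(sample_idx):
    """Delete one record and retrain only its shard,
    rather than retraining on the entire dataset."""
    global X, y, shard_ids
    s = shard_ids[sample_idx]
    keep = np.arange(len(X)) != sample_idx
    X, y, shard_ids = X[keep], y[keep], shard_ids[keep]
    models[s] = train_shard(s)

unlearn(42)  # forget one record; only 1/N_SHARDS of the data is retrained
print("accuracy after unlearning:", (predict(X) == y).mean())
```

Under these assumptions, the cost of honoring a deletion request scales with the shard size rather than the full dataset, which mirrors the time-versus-performance trade-off the study measures.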
Open Access Status
This publication is not available as open access
First Page
3282
Last Page
3287
Funding Number
62307008
Funding Sponsor
National Natural Science Foundation of China