Improving Adversarially Robust Sequential Recommendation through Generalizable Perturbations
journal contribution
posted on 2024-11-17, 13:32, authored by Xun Yao, Ruyi He, Xinrong Hu, Jie Yang, Yi Guo, Zijian Huang
Sequential recommendation is of great importance for a variety of purposes, such as application engineering, resource optimization, and marketing. Yet, existing sequence-based recommendation models are susceptible to adversarial attacks, which perturb input sequences to mislead trained models into incorrect predictions. Defense methods are accordingly adopted to enhance model robustness. Nevertheless, these methods face challenges such as error propagation (relying on the model output to generate adversarial samples), high system complexity, and the difficulty of maintaining model generalizability. To bridge this gap, this paper introduces a simple yet effective adversarial defense algorithm, termed Perturbation-Driven Sequential Recommendation (PDSR). During training, PDSR leverages a simple perturbation-generation module to create adversarial samples, eliminating the need for gradient estimation and thus streamlining the process. Additionally, it incorporates a robust encoder designed to increase tolerance to representation variations by aligning original and perturbed representations, thereby boosting model generalizability. Comprehensive experiments are conducted over a combination of five benchmark datasets, two attack methods, and four sequential recommendation models. Compared to four state-of-the-art defense baselines, PDSR demonstrates notable improvements in defense performance.
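The two mechanisms described above — gradient-free perturbation generation and alignment between original and perturbed representations — can be illustrated with a minimal NumPy sketch. This is a hypothetical toy construction, not the paper's implementation: the mean-pooling encoder, the uniform-noise perturbation, the `epsilon` bound, and the MSE alignment loss are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(seq_embeddings):
    # Toy sequence encoder: mean-pool item embeddings.
    # A stand-in for a real sequential backbone (e.g. a Transformer or GRU);
    # hypothetical, not the encoder used by PDSR.
    return seq_embeddings.mean(axis=0)

def perturb(seq_embeddings, epsilon=0.1):
    # Gradient-free perturbation: add bounded random noise directly to the
    # item embeddings, so no gradient estimation from the model output is
    # needed to craft the adversarial sample.
    noise = rng.uniform(-epsilon, epsilon, size=seq_embeddings.shape)
    return seq_embeddings + noise

def alignment_loss(h_clean, h_adv):
    # Encourage tolerance to representation variation by pulling the
    # perturbed representation toward the clean one (MSE alignment).
    return float(np.mean((h_clean - h_adv) ** 2))

# A sequence of 5 items with 8-dimensional embeddings.
seq = rng.normal(size=(5, 8))
h_clean = encode(seq)
h_adv = encode(perturb(seq))
loss = alignment_loss(h_clean, h_adv)
```

In a full training loop, `loss` would be added to the usual recommendation objective so the encoder learns representations that stay stable under perturbation.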
Funding
Australian Research Council (DP210101426)
History
Journal title
Proceedings - 2023 IEEE International Conference on Big Data, BigData 2023