Improving Adversarially Robust Sequential Recommendation through Generalizable Perturbations

Publication Name

Proceedings - 2023 IEEE International Conference on Big Data, BigData 2023

Abstract

Sequential recommendation is of great importance for a variety of purposes, such as application engineering, resource optimization, and marketing. Yet, existing sequence-based recommendation models are susceptible to adversarial attacks, which perturb input sequences to mislead trained models into incorrect predictions. Defense methods have accordingly been adopted to enhance model robustness. Nevertheless, these methods face challenges such as error propagation (adversarial samples are generated from model outputs, so prediction errors compound), high system complexity, and the difficulty of maintaining model generalizability. To bridge this gap, this paper introduces a simple yet effective adversarial defense algorithm, termed Perturbation-Driven Sequential Recommendation (PDSR). During training, PDSR leverages a simple perturbation-generation module to create adversarial samples, eliminating the need for gradient estimation and thus streamlining the process. It also incorporates a robust encoder that increases tolerance to representation variations by aligning original and perturbed representations, thereby boosting model generalizability. Comprehensive experiments are conducted across five benchmark datasets, two attack methods, and four sequential recommendation models. Compared to four state-of-the-art defense baselines, PDSR demonstrates notable improvements in defense performance.
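The two components described in the abstract can be illustrated with a minimal sketch. This is an illustrative assumption of how a gradient-free perturbation module and a representation-alignment objective might look, not the paper's actual implementation: `perturb_sequence` and `alignment_loss` are hypothetical names, and the paper's perturbation strategy and alignment measure may differ.

```python
import random

def perturb_sequence(seq, num_items, p=0.1, rng=None):
    """Gradient-free perturbation: randomly substitute a fraction p of the
    items in an input sequence with other item IDs. No access to model
    gradients is needed, which avoids the error-propagation issue of
    output-based adversarial sample generation (illustrative assumption)."""
    rng = rng or random.Random(0)
    return [rng.randrange(num_items) if rng.random() < p else item
            for item in seq]

def alignment_loss(h_orig, h_pert):
    """Mean squared distance between the encoder's representations of the
    original and perturbed sequences. Minimizing this term encourages the
    encoder to tolerate representation variations (one plausible choice of
    alignment measure)."""
    return sum((a - b) ** 2 for a, b in zip(h_orig, h_pert)) / len(h_orig)

# Usage: perturb an item-ID sequence, then penalize representation drift.
seq = [3, 7, 1, 4, 9]
perturbed = perturb_sequence(seq, num_items=10, p=0.3)
loss = alignment_loss([0.2, 0.8, 0.5], [0.1, 0.9, 0.5])
```

In a full training loop, this alignment term would be added to the usual next-item prediction loss, so the model is optimized for both accuracy and robustness.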

Open Access Status

This publication is not available as open access

First Page

1299

Last Page

1307

Funding Number

DP210101426

Funding Sponsor

Australian Research Council
