BASR
Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction
Abstract
We investigate whether model extraction can be used to "steal" the weights of sequential recommender systems, and the potential threats posed to victims of such attacks. This type of risk has attracted attention in image and text classification, but to our knowledge not in recommender systems. We argue that sequential recommender systems are subject to unique vulnerabilities due to the specific autoregressive regimes used to train them. Unlike many existing recommender attackers, which assume the dataset used to train the victim model is exposed to attackers, we consider a data-free setting, where training data are not accessible. Under this setting, we propose an API-based model extraction method via limited-budget synthetic data generation and knowledge distillation. We investigate state-of-the-art models for sequential recommendation and show their vulnerability under model extraction and downstream attacks. We perform attacks in two stages. (1) Model extraction: given different types of synthetic data and their labels retrieved from a black-box recommender, we extract the black-box model to a white-box model via distillation. (2) Downstream attacks: we attack the black-box model with adversarial samples generated by the white-box recommender. Experiments show the effectiveness of our data-free model extraction and downstream attacks on sequential recommenders in both profile pollution and data poisoning settings.

Model extraction attacks aim to build a local copy of a machine learning model given only access to a query API. Our framework has two stages: (1) Model extraction: we generate informative synthetic data to train our white-box recommender, which rapidly closes the gap between the victim recommender and ours via knowledge distillation. (2) Downstream attacks: we propose gradient-based adversarial sample generation algorithms, which allow us to find effective adversarial sequences in the discrete item space using the white-box recommender and achieve successful profile pollution or data poisoning attacks against the victim recommender.
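As a concrete illustration of stage (1), the sketch below distills the victim's top-k responses on synthetic, autoregressively generated sequences into a small white-box student. It is a minimal PyTorch sketch under assumed names (query_victim_topk, StudentRecommender, NUM_ITEMS, TOP_K, and the GRU student architecture are all illustrative placeholders), not the exact training procedure of the paper.

# Minimal sketch of the model-extraction stage, assuming the victim exposes a
# top-k query API and the white-box student is a small autoregressive
# recommender. All names and hyperparameters here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ITEMS, SEQ_LEN, TOP_K = 1000, 20, 10

def query_victim_topk(seq):
    """Stand-in for the black-box API: returns top-k item ids for a sequence.
    Faked with a random scorer here purely for demonstration."""
    scores = torch.randn(NUM_ITEMS)
    return torch.topk(scores, TOP_K).indices

class StudentRecommender(nn.Module):
    """A tiny GRU-based sequential recommender used as the white-box student."""
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_items)

    def forward(self, seq):                      # seq: (batch, seq_len)
        h, _ = self.gru(self.emb(seq))
        return self.out(h[:, -1])                # next-item logits

student = StudentRecommender(NUM_ITEMS)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):                          # limited query budget
    # Synthetic sequence: start from a random seed item and extend it
    # autoregressively by sampling from the student's own predictions.
    seq = torch.randint(0, NUM_ITEMS, (1, 1))
    while seq.size(1) < SEQ_LEN:
        with torch.no_grad():
            probs = F.softmax(student(seq), dim=-1)
        nxt = torch.multinomial(probs, 1)
        seq = torch.cat([seq, nxt], dim=1)

    # Query the victim for top-k labels and distill them into the student
    # by treating each returned item as a positive next-item label.
    topk = query_victim_topk(seq[0])             # (TOP_K,)
    logits = student(seq)                        # (1, NUM_ITEMS)
    loss = F.cross_entropy(logits.expand(TOP_K, -1), topk)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()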
Profile Pollution Attack
We formally define profile pollution attacks as the problem of finding the optimal injection items (i.e., items appended after the original sequence 𝒙) that maximize the target item's exposure, which can be characterized with common ranking measures such as Recall or NDCG.
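One way to write this objective, with notation introduced here for illustration (t is the target item, 𝒙̂ the injected items, and b the injection budget):

\[
\max_{\hat{\boldsymbol{x}}}\; \mathrm{Exposure}\bigl(t \mid [\boldsymbol{x};\hat{\boldsymbol{x}}]\bigr)
\quad \text{s.t.} \quad |\hat{\boldsymbol{x}}| \le b,
\]

where [𝒙; 𝒙̂] denotes the original interaction sequence followed by the injected items, and Exposure can be instantiated as Recall@k or NDCG@k of the target item in the victim's recommendations for the polluted sequence.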

Data Poisoning Attack
Similarly, data poisoning attacks can be viewed as finding biased injection profiles Z such that, after the recommender is retrained on the poisoned data, it propagates the bias and is more likely to recommend the target item.
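This can be phrased as a bi-level objective; the formulation below uses notation we introduce for illustration (D is the clean training data, Z the injected profiles, b the injection budget, and θ* the retrained parameters):

\[
\max_{Z}\; \mathbb{E}_{\boldsymbol{x}}\bigl[\mathrm{Exposure}(t \mid \boldsymbol{x};\,\theta^{*})\bigr]
\quad \text{s.t.} \quad \theta^{*} = \arg\min_{\theta}\; \mathcal{L}_{\mathrm{train}}(D \cup Z;\,\theta),
\quad |Z| \le b,
\]

i.e., the attacker injects at most b fake profiles into the training data so that, after retraining, the target item t is ranked higher for ordinary user sequences 𝒙.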
