Title
Practical Data Poisoning Attack against Next-Item Recommendation
Authors
Abstract
Online recommendation systems make use of a variety of information sources to provide users with items they are potentially interested in. However, due to the openness of online platforms, recommendation systems are vulnerable to data poisoning attacks. Existing attack approaches are either based on simple heuristic rules or designed against specific recommendation approaches. The former often suffer from unsatisfactory performance, while the latter require strong knowledge of the target system. In this paper, we focus on a general next-item recommendation setting and propose a practical poisoning attack approach named LOKI against black-box recommendation systems. The proposed LOKI utilizes a reinforcement learning algorithm to train an attack agent, which can be used to generate user behavior samples for data poisoning. In real-world recommendation systems, the cost of retraining recommendation models is high, and the interaction frequency between users and a recommendation system is restricted. Given these real-world restrictions, we propose to let the agent interact with a recommender simulator instead of the target recommendation system, and to leverage the transferability of the generated adversarial samples to poison the target system. We also propose to use the influence function to efficiently estimate the influence of injected samples on the recommendation results, without retraining the models within the simulator. Extensive experiments on two datasets against four representative recommendation models show that the proposed LOKI achieves better attack performance than existing methods.
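The abstract does not give LOKI's training details, so the following is only a minimal sketch of the general idea it describes: training an attack agent with policy-gradient reinforcement learning against a simulator rather than the live system. A stateless softmax policy samples fake-user item sequences, and a toy `simulator_reward` function (a hypothetical stand-in for the simulator's estimate of how much an injected sequence promotes the attacker's target item) drives a REINFORCE update. All names here (`simulator_reward`, `TARGET`, the policy form) are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS = 50   # catalog size (toy)
SEQ_LEN = 5    # length of each injected fake-user sequence
TARGET = 7     # item the attacker wants promoted (hypothetical)

def simulator_reward(seq):
    # Toy stand-in for the recommender simulator: rewards sequences whose
    # items are "close" to the target. A real simulator would instead score
    # how much the injected sequence shifts the target item's rank.
    sim = np.exp(-np.abs(np.arange(N_ITEMS) - TARGET) / 5.0)
    return float(np.mean(sim[seq]))

theta = np.zeros(N_ITEMS)  # logits of a stateless softmax policy

def sample_sequence(theta):
    p = np.exp(theta - theta.max())
    p /= p.sum()
    return rng.choice(N_ITEMS, size=SEQ_LEN, p=p), p

lr, baseline = 0.5, 0.0
for episode in range(2000):
    seq, p = sample_sequence(theta)
    r = simulator_reward(seq)
    baseline = 0.9 * baseline + 0.1 * r  # moving-average reward baseline
    # REINFORCE: for a softmax policy, grad log pi(a) = one_hot(a) - p.
    grad = np.zeros(N_ITEMS)
    for a in seq:
        g = -p.copy()
        g[a] += 1.0
        grad += g
    theta += lr * (r - baseline) * grad

poison_seq, _ = sample_sequence(theta)
print("injected fake-user sequence:", poison_seq)
```

Because the agent never touches the target system during training, this setup relies on the transferability claim in the abstract: sequences that poison the simulator are assumed to also poison the black-box target.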
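The influence-function step can likewise be illustrated with the classic upweighting approximation I(z, z_test) ≈ -grad L(z_test)^T H^{-1} grad L(z), which estimates how adding an injected sample z would shift the loss on a target interaction without retraining. The abstract only states that an influence function is used inside the simulator, so this logistic-regression toy is an assumption about the general mechanism, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a logistic-regression "recommender" trained on (X, y).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Fit theta by plain gradient descent on the L2-regularized log loss.
lam, theta = 1e-2, np.zeros(d)
for _ in range(500):
    g = X.T @ (sigmoid(X @ theta) - y) / n + lam * theta
    theta -= 0.5 * g

def grad_loss(x, y_):
    # Per-example gradient of the regularized log loss at theta.
    return (sigmoid(x @ theta) - y_) * x + lam * theta

# Hessian of the regularized training loss at theta.
p = sigmoid(X @ theta)
H = (X * (p * (1 - p))[:, None]).T @ X / n + lam * np.eye(d)

z_x, z_y = rng.normal(size=d), 1.0  # candidate poisoning sample (hypothetical)
t_x, t_y = X[0], y[0]               # target interaction the attacker wants to shift

# Influence of injecting z on the target loss, with no retraining:
# I(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z).
influence = -grad_loss(t_x, t_y) @ np.linalg.solve(H, grad_loss(z_x, z_y))
print("estimated change in target loss from injecting z:", influence)
```

In this sketch the Hessian is tiny and can be solved directly; for a real recommendation model one would approximate the Hessian-vector products instead, which is presumably what makes the estimate cheap enough to use as a reward signal inside the simulator.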