Paper title
FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification
Paper authors
Paper abstract
Graph Neural Networks (GNNs) are vulnerable to data poisoning attacks, which generate a poisoned graph as the input to the GNN models. We present FocusedCleaner as a poisoned-graph sanitizer to effectively identify the poison injected by attackers. Specifically, FocusedCleaner provides a sanitation framework consisting of two modules: bi-level structural learning and victim node detection. In particular, the structural learning module reverses the attack process to steadily sanitize the graph, while the detection module provides "the focus" -- a narrowed and more accurate search region -- to structural learning. These two modules operate in iterations and reinforce each other to sanitize a poisoned graph step by step. As an important application, we show that the adversarial robustness of GNNs trained over the sanitized graph for the node classification task is significantly improved. Extensive experiments demonstrate that FocusedCleaner outperforms state-of-the-art baselines on both poisoned-graph sanitation and robustness improvement.
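The alternating two-module scheme described above can be sketched as a simple loop. This is a minimal illustrative sketch, not the authors' implementation: all function names are hypothetical, victim detection is approximated by a crude neighborhood label-agreement heuristic, and the "structural learning" step is replaced by a stand-in that deletes one suspicious cross-label edge per iteration within the detected focus region.

```python
# Hypothetical sketch of FocusedCleaner's alternating loop.
# Names and heuristics are illustrative assumptions, not the paper's method.

def detect_victims(adj, labels):
    """Victim-node detection (stand-in): flag nodes whose neighborhood
    label agreement is low, a crude proxy for being attacked."""
    n = len(adj)
    victims = set()
    for v in range(n):
        neighbors = [u for u in range(n) if adj[v][u]]
        if not neighbors:
            continue
        agree = sum(labels[u] == labels[v] for u in neighbors) / len(neighbors)
        if agree < 0.5:  # low homophily => likely victim
            victims.add(v)
    return victims

def sanitize_step(adj, labels, victims):
    """Structural-learning stand-in restricted to 'the focus': delete one
    cross-label edge incident to a victim node, i.e. reverse one
    presumed attack step. Returns False when nothing was removed."""
    n = len(adj)
    for v in victims:
        for u in range(n):
            if adj[v][u] and labels[u] != labels[v]:
                adj[v][u] = adj[u][v] = 0  # remove the suspicious edge
                return True
    return False

def focused_cleaner(adj, labels, budget):
    """Alternate detection and sanitation for at most `budget` steps."""
    for _ in range(budget):
        victims = detect_victims(adj, labels)
        if not sanitize_step(adj, labels, victims):
            break  # no victims left in focus; graph considered clean
    return adj

# Toy usage: two same-label pairs (0-1, 2-3) plus injected
# cross-label edges 0-2 and 0-3 acting as the "poison".
adj = [[0, 1, 1, 1],
       [1, 0, 0, 0],
       [1, 0, 0, 1],
       [1, 0, 1, 0]]
labels = [0, 0, 1, 1]
focused_cleaner(adj, labels, budget=5)
```

In the full method, the heuristic detector would be replaced by a learned victim-node classifier and the greedy edge deletion by bi-level optimization, but the sketch shows the key structure: detection narrows the search region, sanitation updates the graph, and the two repeat until convergence.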