Paper Title

Iterative Graph Self-Distillation

Paper Authors

Hanlin Zhang, Shuai Lin, Weiyang Liu, Pan Zhou, Jian Tang, Xiaodan Liang, Eric P. Xing

Paper Abstract

Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs. To address this, we propose a method called Iterative Graph Self-Distillation (IGSD), which learns graph-level representations in an unsupervised manner through instance discrimination, using a self-supervised contrastive learning approach. IGSD involves a teacher-student distillation process that uses graph diffusion augmentations and constructs the teacher model as an exponential moving average of the student model. The intuition behind IGSD is to predict the teacher network's representations of graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and self-supervised contrastive losses. Finally, we show that fine-tuning IGSD-trained models with self-training can further improve graph representation power. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, which validates the superiority of IGSD.
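
To make the abstract's three mechanisms concrete, below is a minimal NumPy sketch of (1) a graph diffusion augmentation, (2) the exponential-moving-average (EMA) teacher update, and (3) a predict-the-teacher consistency loss. The function names, the personalized-PageRank (PPR) diffusion variant, and the BYOL-style L2 loss are illustrative assumptions for exposition, not the paper's exact formulation.

```python
# Minimal sketch of IGSD-style ingredients (assumed shapes/names, not the
# authors' implementation).
import numpy as np

def ppr_diffusion(adj: np.ndarray, alpha: float = 0.15) -> np.ndarray:
    """PPR graph diffusion S = alpha * (I - (1 - alpha) * T)^-1,
    where T is the symmetrically normalized adjacency with self-loops."""
    n = adj.shape[0]
    a = adj + np.eye(n)                              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    t = d_inv_sqrt @ a @ d_inv_sqrt                  # symmetric normalization
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * t)

def ema_update(teacher: dict, student: dict, momentum: float = 0.99) -> None:
    """Update teacher parameters in place as an EMA of the student's."""
    for name, w in student.items():
        teacher[name] = momentum * teacher[name] + (1.0 - momentum) * w

def consistency_loss(student_z: np.ndarray, teacher_z: np.ndarray) -> float:
    """L2 distance between l2-normalized student and teacher embeddings
    of the two augmented views (a BYOL-style objective; assumption)."""
    s = student_z / np.linalg.norm(student_z, axis=-1, keepdims=True)
    t = teacher_z / np.linalg.norm(teacher_z, axis=-1, keepdims=True)
    return float(np.mean(np.sum((s - t) ** 2, axis=-1)))

# Toy usage: diffuse a 4-node path graph and run one EMA step.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
diffused_view = ppr_diffusion(adj)                   # dense augmented view
student = {"w": np.random.randn(8, 8)}
teacher = {k: v.copy() for k, v in student.items()}
ema_update(teacher, student)
```

In the semi-supervised extension described in the abstract, a supervised contrastive term would be added alongside this self-supervised objective; the sketch omits it for brevity.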
