Paper Title

Augmenting Knowledge Transfer across Graphs

Paper Authors

Yuzhen Mao, Jianhui Sun, Dawei Zhou

Paper Abstract

Given a resource-rich source graph and a resource-scarce target graph, how can we effectively transfer knowledge across graphs and ensure a good generalization performance? In many high-impact domains (e.g., brain networks and molecular graphs), collecting and annotating data is prohibitively expensive and time-consuming, which makes domain adaptation an attractive option to alleviate the label scarcity issue. In light of this, the state-of-the-art methods focus on deriving domain-invariant graph representations that minimize the domain discrepancy. However, it has recently been shown that a small domain discrepancy loss may not always guarantee a good generalization performance, especially in the presence of disparate graph structures and label distribution shifts. In this paper, we present TRANSNET, a generic learning framework for augmenting knowledge transfer across graphs. In particular, we introduce a novel notion named trinity signal that can naturally formulate various graph signals at different granularity (e.g., node attributes, edges, and subgraphs). With that, we further propose a domain unification module together with a trinity-signal mixup scheme to jointly minimize the domain discrepancy and augment the knowledge transfer across graphs. Finally, comprehensive empirical results show that TRANSNET outperforms all existing approaches on seven benchmark datasets by a significant margin.
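To make the mixup idea in the abstract more concrete, below is a minimal, hypothetical sketch of a standard mixup-style interpolation applied to node-level signals from two graphs after they have been mapped into a shared representation space. It is not TRANSNET's trinity-signal mixup or domain unification module; the function name `mixup_node_signals`, the Beta(alpha, alpha) coefficient sampling, and the assumption that both graphs contribute equally sized node batches are all illustrative choices.

```python
# Hypothetical sketch: mixup-style interpolation of node signals from a
# source graph and a target graph. This only illustrates the general idea
# of interpolating signals and labels across domains; it is NOT the paper's
# trinity-signal mixup.
import numpy as np


def mixup_node_signals(x_src, y_src, x_tgt, y_tgt, alpha=0.2, rng=None):
    """Interpolate source/target node features and one-hot labels.

    x_src, x_tgt: (n, d) node representations in a unified space
                  (assumed to have the same batch size n for simplicity).
    y_src, y_tgt: (n, c) one-hot label matrices.
    alpha: Beta-distribution parameter, as in standard mixup (assumed value).
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient in (0, 1)
    idx = rng.permutation(len(x_tgt))       # random pairing of source/target nodes
    x_mix = lam * x_src + (1.0 - lam) * x_tgt[idx]
    y_mix = lam * y_src + (1.0 - lam) * y_tgt[idx]
    return x_mix, y_mix


# Toy usage: 4 source nodes and 4 target nodes, 3 classes, 8-dim features.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_s, x_t = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
    y_s = np.eye(3)[[0, 1, 2, 0]]
    y_t = np.eye(3)[[2, 2, 1, 0]]
    x_m, y_m = mixup_node_signals(x_s, y_s, x_t, y_t, rng=rng)
    print(x_m.shape, y_m.shape)             # (4, 8) (4, 3)
```

The mixed pairs act as synthetic cross-domain training examples, which is the general augmentation principle the abstract refers to when it describes jointly minimizing domain discrepancy and augmenting knowledge transfer.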
