Paper Title


Empowering Graph Representation Learning with Test-Time Graph Transformation

Paper Authors

Wei Jin, Tong Zhao, Jiayuan Ding, Yozen Liu, Jiliang Tang, Neil Shah

Paper Abstract


As powerful tools for representation learning on graphs, graph neural networks (GNNs) have facilitated various applications from drug discovery to recommender systems. Nevertheless, the effectiveness of GNNs is immensely challenged by issues related to data quality, such as distribution shift, abnormal features and adversarial attacks. Recent efforts have been made on tackling these issues from a modeling perspective which requires additional cost of changing model architectures or re-training model parameters. In this work, we provide a data-centric view to tackle these issues and propose a graph transformation framework named GTrans which adapts and refines graph data at test time to achieve better performance. We provide theoretical analysis on the design of the framework and discuss why adapting graph data works better than adapting the model. Extensive experiments have demonstrated the effectiveness of GTrans on three distinct scenarios for eight benchmark datasets where suboptimal data is presented. Remarkably, GTrans performs the best in most cases with improvements up to 2.8%, 8.2% and 3.8% over the best baselines on three experimental settings. Code is released at https://github.com/ChandlerBang/GTrans.
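The core idea described in the abstract, adapting the graph data rather than the model at test time, can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the graph, the one-layer GNN, and the choice of prediction entropy as the self-supervised surrogate loss are all assumptions made here for brevity (GTrans itself optimizes a surrogate objective over learned feature and structure modifications; see the linked repository for the actual method). The model weights `W` stay frozen while a feature perturbation `delta` is optimized by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (shapes and values are illustrative only):
# a 4-node graph with 3 features and 2 classes, and a "trained" 1-layer
# GNN f(X) = softmax(S X W), where S is the row-normalized adjacency.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = (A + np.eye(4)) / (A + np.eye(4)).sum(axis=1, keepdims=True)
W = rng.normal(size=(3, 2))   # frozen model parameters
X = rng.normal(size=(4, 3))   # (possibly suboptimal) test-time features

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def surrogate(delta):
    """Self-supervised surrogate loss on the perturbed features X + delta.
    Prediction entropy is used here as a stand-in for the paper's surrogate."""
    p = softmax(S @ (X + delta) @ W)
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

# Test-time graph transformation: descend on the data perturbation `delta`
# while W is untouched (numerical gradient, purely for brevity).
delta, lr, eps = np.zeros_like(X), 0.5, 1e-5
for _ in range(50):
    grad = np.zeros_like(delta)
    base = surrogate(delta)
    for idx in np.ndindex(delta.shape):
        d = delta.copy()
        d[idx] += eps
        grad[idx] = (surrogate(d) - base) / eps
    delta -= lr * grad

print(surrogate(np.zeros_like(X)), "->", surrogate(delta))
```

The contrast with model-centric adaptation is visible in the last loop: only `delta` (the data) receives updates, so the same frozen model can be reused across differently corrupted test graphs.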
