Paper Title

Scalable Graph Neural Networks via Bidirectional Propagation

Authors

Ming Chen, Zhewei Wei, Bolin Ding, Yaliang Li, Ye Yuan, Xiaoyong Du, Ji-Rong Wen

Abstract

Graph Neural Networks (GNNs) are an emerging approach for learning on non-Euclidean data. Recently, there has been increased interest in designing GNNs that scale to large graphs. Most existing methods use "graph sampling" or "layer-wise sampling" techniques to reduce training time. However, these methods still suffer from performance degradation and scalability problems when applied to graphs with billions of edges. This paper presents GBP, a scalable GNN that utilizes a localized bidirectional propagation process starting from both the feature vectors and the training/testing nodes. Theoretical analysis shows that GBP is the first method to achieve sub-linear time complexity for both the precomputation and the training phases. An extensive empirical study demonstrates that GBP achieves state-of-the-art performance with significantly less training/testing time. Most notably, GBP can deliver superior performance on a graph with over 60 million nodes and 1.8 billion edges in less than half an hour on a single machine. The code for GBP can be found at https://github.com/chennnM/GBP .
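The propagation that GBP accelerates is a generalized PageRank-style smoothing of the node features, P = Σ_l w_l T^l X, where T is a normalized adjacency matrix; GBP's contribution is approximating this bidirectionally in sub-linear time. The following is a minimal dense sketch of the exact quantity being approximated, not of GBP's bidirectional algorithm itself; the toy graph, symmetric normalization, and PPR-style decay weights are illustrative assumptions.

```python
import numpy as np

# Toy undirected graph: 4-node path graph (illustrative, not from the paper)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.arange(12, dtype=float).reshape(4, 3)   # node feature matrix

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
T = D_inv_sqrt @ A @ D_inv_sqrt                # symmetrically normalized adjacency

# Generalized PageRank smoothing: P = sum_l w_l * T^l X.
# GBP approximates P with a bidirectional scheme (reverse push from the
# features plus random walks from the nodes); the alpha-decay weights
# below are one common choice (personalized-PageRank style).
alpha, L = 0.15, 10
weights = [alpha * (1 - alpha) ** l for l in range(L + 1)]

P = np.zeros_like(X)
Z = X.copy()
for w in weights:
    P += w * Z      # accumulate the weighted l-hop smoothed features
    Z = T @ Z       # advance one propagation hop

print(P.shape)      # smoothed features, same shape as X
```

Computed exactly, this propagation costs time linear in the number of edges per hop; GBP's bidirectional approximation avoids touching the full graph, which is what makes the precomputation sub-linear.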
