Paper Title

Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs

Paper Authors

Chenxiao Yang, Qitian Wu, Jiahua Wang, Junchi Yan

Paper Abstract

Graph neural networks (GNNs), as the de-facto model class for representation learning on graphs, are built upon the multi-layer perceptrons (MLP) architecture with additional message passing layers to allow features to flow across nodes. While conventional wisdom commonly attributes the success of GNNs to their advanced expressivity, we conjecture that this is not the main cause of GNNs' superiority in node-level prediction tasks. This paper pinpoints the major source of GNNs' performance gain to their intrinsic generalization capability, by introducing an intermediate model class dubbed as P(ropagational)MLP, which is identical to standard MLP in training, but then adopts GNN's architecture in testing. Intriguingly, we observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training. This finding sheds new insights into understanding the learning behavior of GNNs, and can be used as an analytic tool for dissecting various GNN-related research problems. As an initial step to analyze the inherent generalizability of GNNs, we show the essential difference between MLP and PMLP at infinite-width limit lies in the NTK feature map in the post-training stage. Moreover, by examining their extrapolation behavior, we find that though many GNNs and their PMLP counterparts cannot extrapolate non-linear functions for extremely out-of-distribution samples, they have greater potential to generalize to testing samples near the training data range as natural advantages of GNN architectures.
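
To make the PMLP idea from the abstract concrete, the following is a minimal sketch in PyTorch, not the authors' released code: a two-layer MLP is trained with the graph ignored, and only at test time is GNN-style message passing inserted between the same trained layers. The class name `PMLP`, the two-layer backbone, the toy node-classification data, and the row-normalized mean aggregation are all illustrative assumptions of this sketch rather than details taken from the paper.

```python
# Minimal sketch of PMLP: train as a plain MLP, test with message passing.
# Assumptions (not from the paper's code): 2-layer backbone, mean aggregation
# over neighbors via a dense row-normalized adjacency matrix, toy random data.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PMLP(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def propagate(self, h, adj_norm):
        # GNN-style message passing: average each node's neighbor features.
        return adj_norm @ h

    def forward(self, x, adj_norm=None, use_mp=False):
        # use_mp=False -> plain MLP (training); the graph is ignored.
        # use_mp=True  -> message passing inserted between the trained layers
        #                 (testing), i.e. the model adopts a GNN architecture.
        h = self.lin1(x)
        if use_mp:
            h = self.propagate(h, adj_norm)
        h = F.relu(h)
        h = self.lin2(h)
        if use_mp:
            h = self.propagate(h, adj_norm)
        return h


# Toy usage: 6 nodes, 4 features, 2 classes (illustrative data only).
x = torch.randn(6, 4)
y = torch.randint(0, 2, (6,))
adj = torch.eye(6)
adj[0, 1] = adj[1, 0] = adj[2, 3] = adj[3, 2] = 1.0   # a few undirected edges
adj_norm = adj / adj.sum(dim=1, keepdim=True)          # row-normalized (mean aggregation)

model = PMLP(in_dim=4, hid_dim=16, out_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Training: exactly like a standard MLP, no message passing.
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(x, use_mp=False), y)
    loss.backward()
    opt.step()

# Testing: switch message passing on, reusing the MLP's trained weights.
logits = model(x, adj_norm, use_mp=True)
```

The design choice worth noting is that training and testing share the exact same weights; only the test-time forward pass changes, which is what lets the paper attribute the resulting performance gap to the GNN architecture's generalization behavior rather than to what is learned during training.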
