Paper Title

Stochastic Graph Neural Networks

Paper Authors

Zhan Gao, Elvin Isufi, Alejandro Ribeiro

Paper Abstract

Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning among others. Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks. In these situations, the GNN fails to address its distributed task if the topological randomness is not considered accordingly. To overcome this issue, we put forth the stochastic graph neural network (SGNN) model: a GNN where the distributed graph convolution module accounts for the random network changes. Since stochasticity brings in a new learning paradigm, we conduct a statistical analysis on the SGNN output variance to identify conditions the learned filters should satisfy for achieving robust transference to perturbed scenarios, ultimately revealing the explicit impact of random link losses. We further develop a stochastic gradient descent (SGD) based learning process for the SGNN and derive conditions on the learning rate under which this learning process converges to a stationary point. Numerical results corroborate our theoretical findings and compare the benefits of SGNN robust transference with a conventional GNN that ignores graph perturbations during learning.
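
For intuition, the following is a minimal sketch of the kind of stochastic graph convolution module the abstract describes: an order-K graph filter in which every shift of the signal passes through a fresh random realization of the nominal graph, with links failing independently. This is an illustrative PyTorch sketch under an i.i.d. link-loss assumption; the names StochasticGraphFilter, n_taps, and p_keep are ours for illustration, not from the paper, whose perturbation model and analysis are more general.

import torch
import torch.nn as nn

class StochasticGraphFilter(nn.Module):
    """Order-K graph filter y = sum_k h_k S_k ... S_1 x, where each S_i is a
    random realization of the nominal shift operator S with links dropped
    independently (a simplified link-loss model; illustrative only)."""

    def __init__(self, n_taps: int, p_keep: float):
        super().__init__()
        # Filter taps h_0, ..., h_{K} are the parameters learned by SGD.
        self.h = nn.Parameter(torch.randn(n_taps) / n_taps)
        self.p_keep = p_keep  # per-link survival probability (assumption)

    def _sample_shift(self, S: torch.Tensor) -> torch.Tensor:
        # Bernoulli mask on the links of S: each link fails i.i.d. with
        # probability 1 - p_keep; the realization is kept symmetric.
        mask = (torch.rand_like(S) < self.p_keep).float()
        mask = torch.triu(mask, 1)
        return S * (mask + mask.T)

    def forward(self, S: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        y = self.h[0] * x  # k = 0 term: no shift applied
        z = x
        for k in range(1, len(self.h)):
            z = self._sample_shift(S) @ z  # fresh random graph per shift
            y = y + self.h[k] * z
        return y

# Usage: order-3 filter on an 8-node nominal graph with 90% link reliability.
S = (torch.rand(8, 8) < 0.3).float()
S = torch.triu(S, 1)
S = S + S.T
x = torch.randn(8, 1)
y = StochasticGraphFilter(n_taps=4, p_keep=0.9)(S, x)

Stacking such filters with a pointwise nonlinearity yields an SGNN layer, and training with stochastic gradient descent then sees a fresh link-loss realization at every forward pass, which is what lets the learned filters account for the randomness; the paper's variance bounds and SGD learning-rate conditions are derived analytically and are not reproduced here.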
