Paper Title
Node Representation Learning in Graph via Node-to-Neighbourhood Mutual Information Maximization
Paper Authors
Paper Abstract
The key towards learning informative node representations in graphs lies in how to gain contextual information from the neighbourhood. In this work, we present a simple-yet-effective self-supervised node representation learning strategy via directly maximizing the mutual information between the hidden representations of nodes and their neighbourhood, which can be theoretically justified by its link to graph smoothing. Following InfoNCE, our framework is optimized via a surrogate contrastive loss, where positive selection underpins the quality and efficiency of representation learning. To this end, we propose a topology-aware positive sampling strategy, which samples positives from the neighbourhood by considering the structural dependencies between nodes and thus enables positive selection upfront. In the extreme case when only one positive is sampled, we fully avoid expensive neighbourhood aggregation. Our methods achieve promising performance on various node classification datasets. It is also worth mentioning that, by applying our loss function to MLP-based node encoders, our methods can be orders of magnitude faster than existing solutions. Our code and supplementary materials are available at https://github.com/dongwei156/n2n.
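As a rough illustration of the contrastive objective described in the abstract, the following minimal PyTorch sketch implements an InfoNCE loss between each node's embedding and a summary of its neighbourhood. The function name n2n_infonce, the neighbourhood aggregation via a row-normalized adjacency matrix, and the temperature default are illustrative assumptions, not the authors' implementation (see the linked repository for that).

import torch
import torch.nn.functional as F

def n2n_infonce(h, adj, tau=0.5):
    # h:   (N, d) node embeddings from the encoder (e.g. an MLP); illustrative sketch
    # adj: (N, N) row-normalized adjacency matrix (assumed aggregation scheme)
    h = F.normalize(h, dim=1)                            # project embeddings onto the unit sphere
    s = F.normalize(adj @ h, dim=1)                      # neighbourhood summary for each node
    logits = (h @ s.t()) / tau                           # cosine similarities scaled by temperature
    labels = torch.arange(h.size(0), device=h.device)    # node i's own neighbourhood is its positive
    return F.cross_entropy(logits, labels)               # InfoNCE as an N-way classification loss

In the single-positive extreme the abstract mentions, the adj @ h aggregation would be replaced by gathering one sampled neighbour's embedding per node, avoiding neighbourhood aggregation entirely.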