Paper Title


Knowledge Graph Contrastive Learning Based on Relation-Symmetrical Structure

Authors

Ke Liang, Yue Liu, Sihang Zhou, Wenxuan Tu, Yi Wen, Xihong Yang, Xiangjun Dong, Xinwang Liu

Abstract


Knowledge graph embedding (KGE) aims to learn powerful representations that benefit various artificial intelligence applications. Meanwhile, contrastive learning has been widely leveraged in graph learning as an effective mechanism for enhancing the discriminative capacity of learned representations. However, the complex structure of KGs makes it hard to construct appropriate contrastive pairs, and only a few attempts have integrated contrastive learning strategies with KGE. Most of them rely on language models (e.g., BERT) for contrastive pair construction instead of fully mining the information underlying the graph structure, which hinders expressive ability. Surprisingly, we find that entities within a relation-symmetrical structure are usually similar and correlated. To this end, we propose KGE-SymCL, a knowledge graph contrastive learning framework based on relation-symmetrical structure, which mines symmetrical structural information in KGs to enhance the discriminative ability of KGE models. Concretely, a plug-and-play approach is proposed that takes entities in relation-symmetrical positions as positive pairs. Besides, a self-supervised alignment loss is designed to pull positive pairs together. Experimental results on link prediction and entity classification datasets demonstrate that KGE-SymCL can be easily applied to various KGE models for performance improvements. Moreover, extensive experiments show that our model can outperform state-of-the-art baselines.
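To make the core mechanism concrete, the sketch below shows an InfoNCE-style alignment loss that pulls each entity embedding toward the embedding of the entity at its relation-symmetrical position, with other in-batch entities serving as negatives. This is an illustrative reconstruction under common contrastive-learning conventions, not the paper's exact loss; the function name and the temperature value are assumptions.

```python
import numpy as np

def alignment_loss(anchor, positive, temperature=0.5):
    """InfoNCE-style alignment loss (illustrative sketch).

    anchor:   (batch, dim) embeddings of entities.
    positive: (batch, dim) embeddings of the entities at the
              relation-symmetrical positions (row i pairs with row i).
    Other rows in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positive / np.linalg.norm(positive, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by temperature: (batch, batch).
    logits = a @ p.T / temperature
    # Log-softmax over each row; diagonal entries are the positive pairs.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Minimizing this pulls positive pairs together, pushes negatives apart.
    return -np.mean(np.diag(log_prob))

# Toy batch: 4 entity embeddings and slightly perturbed "symmetrical" partners.
rng = np.random.default_rng(0)
anchor = rng.normal(size=(4, 8))
positive = anchor + 0.1 * rng.normal(size=(4, 8))  # similar, correlated
loss = alignment_loss(anchor, positive)
```

Because the loss is plug-and-play, it can be added to any KGE model's objective without changing the scoring function, which is what lets the framework improve diverse base models.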
