Paper Title
ConstGCN: Constrained Transmission-based Graph Convolutional Networks for Document-level Relation Extraction
Paper Authors
Paper Abstract
Document-level relation extraction with graph neural networks faces a fundamental graph-construction gap between training and inference: the gold graph structure is only available during training, which leads most methods to adopt heuristic or syntactic rules to construct a prior graph as a pseudo proxy. In this paper, we propose $\textbf{ConstGCN}$, a novel graph convolutional network that performs knowledge-based information propagation between entities along all specific relation spaces, without any prior graph construction. Specifically, it updates each entity representation by aggregating information from all other entities along each relation space, thus modeling relation-aware spatial information. To control the information flow passing through the indeterminate relation spaces, we propose to constrain the propagation using transmitting scores learned via Noise Contrastive Estimation over fact triples. Experimental results show that our method outperforms the previous state-of-the-art (SOTA) approaches on the DocRED dataset.
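The abstract describes updating each entity representation by aggregating messages from all other entities in every relation space, with learned transmitting scores gating how much flows along each candidate edge. The following is a minimal NumPy sketch of that idea, not the paper's actual implementation: the per-relation projections `W`, the sigmoid-normalized score tensor `T`, and the residual connection are all illustrative assumptions (in the paper, the transmitting scores come from Noise Contrastive Estimation over fact triples, not random logits).

```python
import numpy as np

rng = np.random.default_rng(0)

n_entities, n_relations, dim = 4, 3, 8

# Hypothetical inputs: entity representations and one projection per relation space.
H = rng.normal(size=(n_entities, dim))
W = rng.normal(size=(n_relations, dim, dim)) / np.sqrt(dim)

# Transmitting scores T[r, i, j]: confidence that relation r links entity i to j.
# In the paper these are learned with NCE over fact triples; random logits
# squashed through a sigmoid serve as a stand-in here.
logits = rng.normal(size=(n_relations, n_entities, n_entities))
T = 1.0 / (1.0 + np.exp(-logits))

def constrained_propagation(H, W, T):
    """One layer of relation-aware propagation gated by transmitting scores."""
    out = np.zeros_like(H)
    for r in range(W.shape[0]):
        msgs = H @ W[r]        # project senders into relation space r
        out += T[r] @ msgs     # aggregate from all entities, weighted by T[r, i, j]
    return out + H             # residual connection (an assumption of this sketch)

H_new = constrained_propagation(H, W, T)
print(H_new.shape)  # (4, 8)
```

Because the scores gate every edge rather than a pre-built adjacency matrix, no prior graph construction is needed: setting `T` to zero everywhere blocks all propagation and returns the inputs unchanged.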