Paper Title
Improving Long-Tail Relation Extraction with Collaborating Relation-Augmented Attention
Paper Authors
Paper Abstract
The wrong labeling problem and long-tail relations are two main challenges caused by distant supervision in relation extraction. Recent works alleviate wrong labeling through selective attention under multi-instance learning, but cannot handle long-tail relations well even when relation hierarchies are introduced to share knowledge. In this work, we propose a novel neural network, Collaborating Relation-augmented Attention (CoRA), to handle both wrong labeling and long-tail relations. In particular, we first propose a relation-augmented attention network as the base model. It operates on a sentence bag with sentence-to-relation attention to minimize the effect of wrong labeling. Then, building on the proposed base model, we introduce collaborating relation features shared among relations in the hierarchies to promote the relation-augmenting process and balance the training data for long-tail relations. Besides the main training objective of predicting the relation of a sentence bag, an auxiliary objective is utilized to guide the relation-augmenting process toward a more accurate bag-level representation. In experiments on the popular benchmark dataset NYT, the proposed CoRA improves the prior state-of-the-art performance by a large margin in terms of Precision@N, AUC, and Hits@K. Further analyses verify its superior capability in handling long-tail relations in contrast to the competitors.
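The sentence-to-relation attention described above can be sketched in a minimal form: each sentence embedding in a bag is scored against a relation embedding, the scores are softmax-normalized so likely wrongly-labeled sentences receive low weight, and the weighted sum serves as the bag-level representation. This is an illustrative sketch only; the function name, embedding dimensions, and toy values below are assumptions, not the paper's actual implementation.

```python
import math

def bag_representation(sentence_embs, relation_emb):
    """Hypothetical sketch of sentence-to-relation selective attention.

    Scores each sentence embedding against the relation embedding,
    softmaxes the scores to down-weight likely wrongly-labeled
    sentences, and returns the weighted sum as the bag representation.
    """
    # dot product of each sentence embedding with the relation embedding
    scores = [sum(s * r for s, r in zip(sent, relation_emb))
              for sent in sentence_embs]
    # numerically stable softmax over the bag's scores
    m = max(scores)
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    weights = [e / total for e in exps]  # attention weights, sum to 1
    # weighted sum of sentence embeddings -> bag-level vector
    dim = len(sentence_embs[0])
    return [sum(w * sent[d] for w, sent in zip(weights, sentence_embs))
            for d in range(dim)]

# toy bag: three 4-d sentence embeddings and one relation embedding
bag = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [1.0, 1.0, 0.0, 0.0]]
rel = [1.0, 1.0, 0.0, 0.0]
vec = bag_representation(bag, rel)
```

In this toy example the third sentence aligns best with the relation embedding, so it receives the largest attention weight and dominates the bag vector, illustrating how attention suppresses sentences that poorly match the labeled relation.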