Paper Title
Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning
Paper Authors
Paper Abstract
Pre-trained language models have contributed significantly to relation extraction by demonstrating remarkable few-shot learning abilities. However, prompt tuning methods for relation extraction may still fail to generalize to rare or hard patterns. Note that the previous parametric learning paradigm can be viewed as memorization, regarding training data as a book and inference as a closed-book test. Those long-tailed or hard patterns can hardly be memorized in parameters given only few-shot instances. To this end, we regard RE as an open-book examination and propose a new semiparametric paradigm of retrieval-enhanced prompt tuning for relation extraction. We construct an open-book datastore for retrieval, regarding prompt-based instance representations and corresponding relation labels as memorized key-value pairs. During inference, the model can infer relations by linearly interpolating the base output of the PLM with the non-parametric nearest-neighbor distribution over the datastore. In this way, our model not only infers relations through knowledge stored in the weights during training but also assists decision-making by retrieving and querying examples in the open-book datastore. Extensive experiments on benchmark datasets show that our method can achieve state-of-the-art results in both standard supervised and few-shot settings. Code is available at https://github.com/zjunlp/PromptKG/tree/main/research/RetrievalRE.
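The retrieve-then-interpolate inference step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `knn_interpolate`, the parameter names (`lam` for the interpolation weight, `temperature` for the kNN softmax), and the use of Euclidean distance over dense instance representations are all assumptions for the sake of the example.

```python
import numpy as np

def knn_interpolate(base_probs, query, datastore_keys, datastore_labels,
                    num_labels, k=16, temperature=1.0, lam=0.5):
    """Blend a PLM's relation distribution with a non-parametric kNN
    distribution over a datastore of (representation, label) pairs.

    All names and defaults here are illustrative, not from the paper.
    """
    # Distances from the query representation to every memorized key
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    # Indices of the k nearest neighbors
    nn = np.argsort(dists)[:k]
    # Softmax over negative distances -> neighbor weights
    weights = np.exp(-dists[nn] / temperature)
    weights /= weights.sum()
    # Aggregate neighbor weights by relation label
    knn_probs = np.zeros(num_labels)
    for w, label in zip(weights, datastore_labels[nn]):
        knn_probs[label] += w
    # Linear interpolation of parametric and non-parametric distributions
    return lam * knn_probs + (1 - lam) * base_probs
```

A query whose nearest neighbors share a relation label is pulled toward that label even when the PLM's own distribution is uncertain, which is how the datastore assists with rare or hard patterns.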