Paper Title

Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding

Paper Authors

Jianing Wang, Wenkang Huang, Qiuhui Shi, Hongbin Wang, Minghui Qiu, Xiang Li, Ming Gao

Paper Abstract

Knowledge-enhanced Pre-trained Language Model (PLM) has recently received significant attention, which aims to incorporate factual knowledge into PLMs. However, most existing methods modify the internal structures of fixed types of PLMs by stacking complicated modules, and introduce redundant and irrelevant factual knowledge from knowledge bases (KBs). In this paper, to address these problems, we introduce a seminal knowledge prompting paradigm and further propose a knowledge-prompting-based PLM framework KP-PLM. This framework can be flexibly combined with existing mainstream PLMs. Specifically, we first construct a knowledge sub-graph from KBs for each context. Then we design multiple continuous prompt rules and transform the knowledge sub-graph into natural language prompts. To further leverage the factual knowledge from these prompts, we propose two novel knowledge-aware self-supervised tasks, including prompt relevance inspection and masked prompt modeling. Extensive experiments on multiple natural language understanding (NLU) tasks show the superiority of KP-PLM over other state-of-the-art methods in both full-resource and low-resource settings.
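
The core step the abstract describes is turning a retrieved knowledge sub-graph into a natural-language prompt that is attached to the input context. The following is a minimal illustrative sketch of that idea, not the authors' released code: the relation templates, the triples_to_prompt/build_prompted_input helper names, and the "[Knowledge]" separator are hypothetical placeholders, and KP-PLM's actual continuous prompt rules may differ.

```python
# Illustrative sketch of knowledge prompting: verbalize (head, relation, tail)
# triples from a KB sub-graph into plain-text sentences and append them to the
# original context before feeding the result to a PLM.

def triples_to_prompt(triples):
    """Verbalize (head, relation, tail) triples into plain-text sentences."""
    # Hypothetical hand-written templates; the paper's prompt rules may differ.
    templates = {
        "founded_by": "{h} was founded by {t}.",
        "located_in": "{h} is located in {t}.",
        "occupation": "{h} works as a {t}.",
    }
    sentences = []
    for head, relation, tail in triples:
        # Fall back to a generic "head relation tail." sentence for unseen relations.
        template = templates.get(
            relation, "{h} " + relation.replace("_", " ") + " {t}."
        )
        sentences.append(template.format(h=head, t=tail))
    return " ".join(sentences)


def build_prompted_input(context, triples):
    """Concatenate the original context with its verbalized knowledge prompt."""
    return context + " [Knowledge] " + triples_to_prompt(triples)


if __name__ == "__main__":
    context = "Steve Jobs unveiled the first iPhone in 2007."
    # Toy sub-graph standing in for triples retrieved from a KB for this context.
    sub_graph = [
        ("Apple", "founded_by", "Steve Jobs"),
        ("Apple", "located_in", "Cupertino"),
    ]
    print(build_prompted_input(context, sub_graph))
    # -> "Steve Jobs unveiled the first iPhone in 2007. [Knowledge] Apple was
    #     founded by Steve Jobs. Apple is located in Cupertino."
```

The prompted input produced this way can then be used by the knowledge-aware self-supervised tasks mentioned in the abstract, e.g. masking tokens inside the appended prompt (masked prompt modeling) or judging whether the prompt matches the context (prompt relevance inspection).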
