Paper Title

Pair-Level Supervised Contrastive Learning for Natural Language Inference

Authors

Shu'ang Li, Xuming Hu, Li Lin, Lijie Wen

Abstract

Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer the relationship between a sentence pair (premise and hypothesis). Many recent works have used contrastive learning, incorporating the relationship of the sentence pairs from NLI datasets to learn sentence representations. However, these methods focus only on comparisons between sentence-level representations. In this paper, we propose a Pair-level Supervised Contrastive Learning approach (PairSCL). We adopt a cross-attention module to learn the joint representations of the sentence pairs. A contrastive learning objective is designed to distinguish the varied classes of sentence pairs by pulling those in one class together and pushing apart the pairs in other classes. We evaluate PairSCL on two public NLI datasets, where the accuracy of PairSCL outperforms other methods by 2.1% on average. Furthermore, our method outperforms the previous state-of-the-art method on seven transfer tasks of text classification.
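The objective described in the abstract (pulling same-class pair representations together and pushing apart those from other classes) follows the general supervised contrastive formulation. Below is a minimal, self-contained sketch of such a loss, assuming cosine similarity over the joint pair representations and a temperature hyperparameter; the function names and toy vectors are illustrative assumptions, not the paper's actual implementation:

```python
import math

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    """L2-normalize a vector so dot products become cosine similarities."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over joint pair representations.

    For each anchor i, every other example with the same label is a
    positive; all remaining examples act as negatives in the softmax
    denominator. Lower loss means same-class representations are closer
    (and other-class representations farther) in cosine space.
    """
    z = [normalize(v) for v in embeddings]
    n = len(z)
    total, count = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no positive are skipped
        denom = sum(math.exp(dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        loss_i = -sum(
            math.log(math.exp(dot(z[i], z[p]) / tau) / denom) for p in positives
        ) / len(positives)
        total += loss_i
        count += 1
    return total / count
```

As a sanity check, a batch whose same-class representations are nearly parallel yields a much smaller loss than one where positives are orthogonal and negatives coincide, which is the behavior the training objective rewards.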
