Paper Title


To Answer or Not to Answer? Improving Machine Reading Comprehension Model with Span-based Contrastive Learning

Authors

Yunjie Ji, Liangyu Chen, Chenxiao Dou, Baochang Ma, Xiangang Li

Abstract


Machine Reading Comprehension with Unanswerable Questions is a difficult NLP task, challenged by questions that cannot be answered from the given passages. It is observed that subtle literal changes often make an answerable question unanswerable; however, most MRC models fail to recognize such changes. To address this problem, in this paper we propose a span-based method of Contrastive Learning (spanCL), which explicitly contrasts answerable questions with their answerable and unanswerable counterparts at the answer span level. With spanCL, MRC models are forced to perceive crucial semantic changes from slight literal differences. Experiments on the SQuAD 2.0 dataset show that spanCL can improve baselines significantly, yielding 0.86-2.14 absolute EM improvements. Additional experiments also show that spanCL is an effective way to utilize generated questions.
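The abstract only names the idea (contrasting answer-span representations under answerable vs. unanswerable question variants), not the exact objective. A minimal sketch of what such a span-level contrastive loss could look like, assuming an InfoNCE-style formulation over cosine similarities; the function name, temperature value, and the use of plain NumPy vectors in place of model-produced span embeddings are all hypothetical, not taken from the paper:

```python
import numpy as np

def span_contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """Hypothetical InfoNCE-style loss over span representations.

    anchor:    span embedding for the original answerable question, shape (d,)
    positive:  span embedding under an answerable counterpart question, shape (d,)
    negatives: span embeddings under unanswerable counterparts, shape (n, d)
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos_sim = cos(anchor, positive) / temperature
    neg_sims = np.array([cos(anchor, n) for n in negatives]) / temperature

    # Cross-entropy with the positive as the target class: pull the anchor
    # span toward its answerable counterpart, push it from unanswerable ones.
    logits = np.concatenate([[pos_sim], neg_sims])
    m = logits.max()
    log_denom = np.log(np.exp(logits - m).sum()) + m  # stable log-sum-exp
    return float(log_denom - pos_sim)
```

The loss shrinks as the anchor span aligns with its answerable counterpart and separates from the unanswerable ones, which is the behavior the abstract describes: forcing the model to register a large semantic change behind a small literal edit.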
