Paper Title

CMV-BERT: Contrastive multi-vocab pretraining of BERT

Paper Authors

Wei Zhu, Daniel Cheung

Paper Abstract

In this work, we present CMV-BERT, which improves the pretraining of a language model via two ingredients: (a) contrastive learning, which is well studied in the area of computer vision; (b) multiple vocabularies, one of which is fine-grained and the other coarse-grained. The two vocabularies provide different views of an original sentence, and both are shown to be beneficial. Results on downstream tasks demonstrate that the proposed CMV-BERT is effective in improving pretrained language models.
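The abstract does not spell out the contrastive objective, so the following is only a minimal sketch under assumptions: an InfoNCE-style loss (the standard choice in the computer-vision work the abstract alludes to), where the sentence embeddings produced from the fine-grained and coarse-grained tokenizations of the same sentence form a positive pair and other sentences in the batch serve as negatives. The function name info_nce_loss and the temperature value are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_fine: torch.Tensor, z_coarse: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE loss between two views of the same sentences.

    z_fine:   (batch, dim) embeddings from the fine-grained tokenization
    z_coarse: (batch, dim) embeddings from the coarse-grained tokenization
    Row i of each tensor must come from the same original sentence.
    """
    z_fine = F.normalize(z_fine, dim=-1)
    z_coarse = F.normalize(z_coarse, dim=-1)
    # (batch, batch) cosine-similarity matrix; matched views lie on the diagonal
    logits = z_fine @ z_coarse.t() / temperature
    labels = torch.arange(z_fine.size(0), device=z_fine.device)
    # symmetric cross-entropy: each view must identify its paired view
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

Under this reading, the two tokenizations play the role that image augmentations play in vision-style contrastive learning: two views of one input pulled together in embedding space while other inputs are pushed apart.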
