Paper Title

ERNIE-ViL 2.0: Multi-view Contrastive Learning for Image-Text Pre-training

Paper Authors

Bin Shan, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Paper Abstract

Recent Vision-Language Pre-trained (VLP) models based on dual encoders have attracted extensive attention from academia and industry due to their superior performance on various cross-modal tasks and their high computational efficiency. They attempt to learn cross-modal representations using contrastive learning on image-text pairs; however, the inter-modal correlations they build rely on only a single view of each modality. In reality, an image or a text contains various potential views, just as humans can capture a real-world scene via diverse descriptions or photos. In this paper, we propose ERNIE-ViL 2.0, a Multi-View Contrastive learning framework that builds intra-modal and inter-modal correlations between diverse views simultaneously, aiming to learn a more robust cross-modal representation. Specifically, we construct multiple views within each modality to learn intra-modal correlations and enhance the single-modal representations. Besides the inherent visual/textual views, we construct sequences of object tags as a special textual view to narrow the cross-modal semantic gap on noisy image-text pairs. Pre-trained on 29M publicly available image-text pairs, ERNIE-ViL 2.0 achieves competitive results on English cross-modal retrieval. Additionally, to generalize our method to Chinese cross-modal tasks, we train ERNIE-ViL 2.0 by scaling up the pre-training dataset to 1.5B Chinese image-text pairs, resulting in significant improvements over previous SOTA results on Chinese cross-modal retrieval. We release our pre-trained models at https://github.com/PaddlePaddle/ERNIE.
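To make the multi-view objective described in the abstract concrete, below is a minimal sketch of how intra-modal and inter-modal contrastive terms can be combined over two views per modality (e.g., two image augmentations, and a caption plus an object-tag sequence). This is an illustrative PyTorch-style approximation, not the actual ERNIE-ViL 2.0 implementation; the function names, the symmetric InfoNCE formulation, and the unweighted sum of terms are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of L2-normalized embeddings (N, D)."""
    logits = a @ b.t() / temperature                      # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)    # matched pairs lie on the diagonal
    # Contrast in both directions (a->b and b->a) and average.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def multi_view_contrastive_loss(img_v1, img_v2, txt_v1, txt_v2, temperature=0.07):
    """Combine intra-modal (view-to-view within a modality) and inter-modal
    (image-to-text) contrastive terms for a batch of paired (N, D) embeddings.
    txt_v2 could, for instance, encode an object-tag sequence as a special textual view."""
    views = {k: F.normalize(v, dim=-1) for k, v in
             dict(iv1=img_v1, iv2=img_v2, tv1=txt_v1, tv2=txt_v2).items()}
    # Intra-modal terms: image view 1 vs image view 2, text view 1 vs text view 2.
    intra = (info_nce(views["iv1"], views["iv2"], temperature) +
             info_nce(views["tv1"], views["tv2"], temperature))
    # Inter-modal terms: every image view against every text view.
    inter = sum(info_nce(views[i], views[t], temperature)
                for i in ("iv1", "iv2") for t in ("tv1", "tv2"))
    return intra + inter
```

In this sketch, the dual-encoder structure is preserved: image and text embeddings are produced independently and interact only through dot-product similarities, which is what gives dual-encoder VLP models their retrieval efficiency.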
