Paper Title
REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory
Paper Authors
Paper Abstract
In this paper, we propose an end-to-end Retrieval-Augmented Visual-Language Model (REVEAL) that learns to encode world knowledge into a large-scale memory, and to retrieve from it to answer knowledge-intensive queries. REVEAL consists of four key components: the memory, the encoder, the retriever, and the generator. The large-scale memory encodes diverse sources of multimodal world knowledge (e.g., image-text pairs, question-answering pairs, and knowledge-graph triplets) via a unified encoder. The retriever finds the most relevant knowledge entries in the memory, and the generator fuses the retrieved knowledge with the input query to produce the output. A key novelty in our approach is that the memory, encoder, retriever, and generator are all pre-trained end-to-end on a massive amount of data. Furthermore, our approach can use a diverse set of multimodal knowledge sources, which is shown to result in significant gains. We show that REVEAL achieves state-of-the-art results on visual question answering and image captioning.
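To make the retrieve-then-generate flow described in the abstract concrete, here is a minimal sketch of the pipeline. It is not the authors' implementation: the unified encoder, the memory entries, and the fusion step are stand-in numpy operations, and the names `encode`, `retrieve_top_k`, and `fuse_and_generate` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64            # embedding dimension (illustrative)
MEMORY_SIZE = 1000  # number of knowledge entries in the memory

def encode(item: np.ndarray) -> np.ndarray:
    """Stand-in for the unified encoder: in REVEAL this maps image-text
    pairs, QA pairs, and knowledge-graph triplets into one embedding space."""
    return item / (np.linalg.norm(item) + 1e-8)  # L2-normalize

# Pre-encoded large-scale memory of multimodal knowledge entries.
memory = rng.normal(size=(MEMORY_SIZE, DIM))
memory /= np.linalg.norm(memory, axis=1, keepdims=True)

def retrieve_top_k(query_emb: np.ndarray, k: int = 5):
    """Retriever: score every memory entry against the query by
    inner product and return the k most relevant entries."""
    scores = memory @ query_emb
    top = np.argsort(scores)[-k:][::-1]  # indices of the k highest scores
    return memory[top], scores[top]

def fuse_and_generate(query_emb, retrieved, scores):
    """Generator stand-in: fuse retrieved knowledge with the query via an
    attention-style weighted sum. The real model decodes text from the
    fused representation instead of returning a vector."""
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over scores
    return query_emb + weights @ retrieved

# A knowledge-intensive query (e.g., an image-question pair embedding).
query = encode(rng.normal(size=DIM))
entries, scores = retrieve_top_k(query, k=5)
output_repr = fuse_and_generate(query, entries, scores)
print(output_repr.shape)  # (64,) -- would feed the decoder in the full model
```

Note that in the paper all four components are pre-trained end-to-end, so the retrieval scores also receive gradients from the generation loss rather than being fixed as in this sketch.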