Paper Title

Accurate Word Representations with Universal Visual Guidance

Paper Authors

Zhuosheng Zhang, Haojie Yu, Hai Zhao, Rui Wang, Masao Utiyama

Paper Abstract

Word representation is a fundamental component in neural language understanding models. Recently, pre-trained language models (PrLMs) have offered a new, performant way to produce contextualized word representations by leveraging sequence-level context in modeling. Although PrLMs generally give more accurate contextualized word representations than non-contextualized models do, they are still confined to a sequence of text contexts and receive no diverse hints from other modalities. This paper thus proposes a visual representation method that explicitly enhances conventional word embeddings with multiple-aspect senses drawn from visual guidance. In detail, we build a small-scale word-image dictionary from a multimodal seed dataset, in which each word corresponds to diverse related images. The texts and their paired images are encoded in parallel, followed by an attention layer that integrates the multimodal representations. We show that the method substantially improves disambiguation accuracy. Experiments on 12 natural language understanding and machine translation tasks further verify the effectiveness and generalization capability of the proposed approach.
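
The abstract's description of parallel text/image encoding followed by an attention layer could be sketched roughly as below. This is a minimal illustration, not the authors' released implementation: the module name VisualWordFusion, the gated residual fusion, the number of images per word, and all dimensions are assumptions, and the image features are assumed to be precomputed (e.g., pooled CNN vectors) for the images retrieved from the word-image dictionary.

```python
# Minimal sketch: each word attends over a small set of image features retrieved
# from a word-image dictionary, and the attended visual vector is fused with the
# textual word embedding. All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn


class VisualWordFusion(nn.Module):
    def __init__(self, text_dim=768, img_dim=2048, hidden_dim=768):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)   # project image features into the text space
        self.query = nn.Linear(text_dim, hidden_dim)     # word embedding acts as the attention query
        self.gate = nn.Linear(text_dim + hidden_dim, 1)  # assumed gate controlling the visual contribution

    def forward(self, word_emb, img_feats):
        # word_emb:  (batch, seq_len, text_dim)   contextualized word embeddings from a PrLM
        # img_feats: (batch, seq_len, k, img_dim) features of k dictionary images per word
        img_h = self.img_proj(img_feats)                         # (B, L, k, H)
        q = self.query(word_emb).unsqueeze(2)                    # (B, L, 1, H)
        scores = (q * img_h).sum(-1) / img_h.size(-1) ** 0.5     # (B, L, k) scaled dot-product scores
        attn = torch.softmax(scores, dim=-1).unsqueeze(-1)       # (B, L, k, 1) attention over the k images
        visual = (attn * img_h).sum(2)                           # (B, L, H) attended visual vector
        g = torch.sigmoid(self.gate(torch.cat([word_emb, visual], dim=-1)))
        return word_emb + g * visual                             # visually enhanced word representation


# Toy usage with random tensors standing in for PrLM outputs and image features.
fusion = VisualWordFusion()
words = torch.randn(2, 10, 768)        # 2 sentences, 10 tokens each
images = torch.randn(2, 10, 5, 2048)   # 5 retrieved images per word
print(fusion(words, images).shape)     # torch.Size([2, 10, 768])
```

The gated residual keeps the textual representation intact when the retrieved images are uninformative; whether the paper uses gating, concatenation, or another fusion scheme is not specified in the abstract.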
