Paper Title
Local Slot Attention for Vision-and-Language Navigation
Paper Authors
Paper Abstract
Vision-and-language navigation (VLN), a frontier study aiming to pave the way for general-purpose robots, has been a hot topic in the computer vision and natural language processing communities. The VLN task requires an agent to navigate to a goal location in unfamiliar environments by following natural language instructions. Recently, transformer-based models have achieved significant improvements on the VLN task, since the attention mechanism in the transformer architecture can better integrate inter- and intra-modal information of vision and language. However, two problems remain in current transformer-based models: 1) the models process each view independently, without taking the integrity of objects into account; 2) during the self-attention operation in the visual modality, spatially distant views can be intertwined with each other without explicit restriction, and this mixing may introduce extra noise instead of useful information. To address these issues, we propose 1) a slot-attention-based module that aggregates information from segmentations of the same object, and 2) a local attention mask mechanism that limits the visual attention span. The proposed modules can be easily plugged into any VLN architecture, and we adopt Recurrent VLN-BERT as our base model. Experiments on the R2R dataset show that our model achieves state-of-the-art results.
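The local attention mask described above can be illustrated with a minimal sketch. The abstract does not specify the exact locality criterion, so the heading-angle threshold (`max_angle`) and the toy per-view features below are hypothetical; the sketch only shows the general idea of forbidding attention between spatially distant panoramic views.

```python
import numpy as np

def local_attention_mask(headings, max_angle=np.pi / 2):
    """Boolean (n, n) mask: True where two views are close enough
    (relative heading within max_angle, a hypothetical threshold)
    to attend to each other."""
    # Pairwise absolute heading difference, wrapped to [0, pi].
    diff = np.abs(headings[:, None] - headings[None, :])
    diff = np.minimum(diff, 2 * np.pi - diff)
    return diff <= max_angle

def masked_self_attention(q, k, v, mask):
    """Scaled dot-product self-attention; disallowed pairs get -inf
    before the softmax, so their attention weight is exactly zero."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Four panoramic views spaced 90 degrees apart.
headings = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
mask = local_attention_mask(headings)
feats = np.eye(4)  # toy per-view features
out = masked_self_attention(feats, feats, feats, mask)
```

With this threshold, view 0 can attend to its neighbours (views 1 and 3) but not to the opposite view 2, so the corresponding attention weight is zero.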