Paper Title
Visual Answer Localization with Cross-modal Mutual Knowledge Transfer
Authors
Abstract
The goal of visual answer localization (VAL) in videos is to obtain a relevant and concise temporal clip from a video as the answer to a given natural language question. Early methods model the interaction between video and text and predict the visual answer with a visual predictor. Later work shows that a textual predictor operating on subtitles is more precise for VAL. However, these existing methods still suffer from cross-modal knowledge deviation toward either the visual frames or the textual subtitles. In this paper, we propose a cross-modal mutual knowledge transfer span localization (MutualSL) method to reduce this knowledge deviation. MutualSL has both a visual predictor and a textual predictor, and we expect the prediction results of the two to be consistent, so as to promote semantic knowledge understanding across modalities. On this basis, we design a one-way dynamic loss function to dynamically adjust the proportion of knowledge transfer. We conduct extensive experiments on three public datasets for evaluation. The experimental results show that our method outperforms competitive state-of-the-art (SOTA) methods, demonstrating its effectiveness.
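To make the dual-predictor consistency idea concrete, below is a minimal PyTorch sketch, not the authors' implementation. It assumes span predictors that emit start/end logits over candidate positions, uses the supervised loss gap to pick the transfer direction (the stronger predictor acts as a detached teacher), and scales the KL transfer term with a hypothetical dynamic weight; the exact loss form and schedule in MutualSL may differ.

```python
# Sketch of a one-way dynamic knowledge-transfer loss between a visual and a
# textual span predictor (assumed interface; names are illustrative only).
import torch
import torch.nn.functional as F

def one_way_transfer_loss(visual_logits, textual_logits, labels, alpha_max=1.0):
    """Supervised cross-entropy for each predictor plus a one-way KL transfer term.

    visual_logits, textual_logits: (batch, num_positions) start-or-end logits.
    labels: (batch,) gold start-or-end indices.
    The transfer direction is chosen dynamically: the predictor with the lower
    supervised loss serves as the (detached) teacher, and the transfer weight
    grows with the loss gap (a hypothetical schedule, capped at alpha_max).
    """
    ce_v = F.cross_entropy(visual_logits, labels)
    ce_t = F.cross_entropy(textual_logits, labels)

    if ce_t < ce_v:
        # textual predictor teaches the visual one
        student, teacher = visual_logits, textual_logits.detach()
    else:
        # visual predictor teaches the textual one
        student, teacher = textual_logits, visual_logits.detach()

    # dynamic weight: larger loss gap -> stronger transfer
    alpha = torch.clamp((ce_v - ce_t).abs().detach(), max=alpha_max)

    kl = F.kl_div(F.log_softmax(student, dim=-1),
                  F.softmax(teacher, dim=-1),
                  reduction="batchmean")
    return ce_v + ce_t + alpha * kl


# Toy usage with random logits over 100 candidate positions.
if __name__ == "__main__":
    v = torch.randn(4, 100)
    t = torch.randn(4, 100)
    y = torch.randint(0, 100, (4,))
    print(one_way_transfer_loss(v, t, y).item())
```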