Paper Title

Hierarchical Delta-Attention Method for Multimodal Fusion

Paper Author

Panchal, Kunjal

Paper Abstract


In vision and linguistics, the main input modalities are facial expressions, speech patterns, and the words uttered. The issue with analyzing any single mode of expression (visual, verbal, or vocal) is that a lot of contextual information can be lost. This requires researchers to inspect multiple modalities to gain a thorough understanding of the cross-modal dependencies and the temporal context of the situation when analyzing an expression. This work attempts to preserve the long-range dependencies within and across different modalities, which would be bottlenecked by the use of recurrent networks, and adds the concept of delta-attention to focus on local differences per modality, capturing the idiosyncrasies of different people. We explore a cross-attention fusion technique to obtain a global view of the emotion expressed through these delta-self-attended modalities, fusing all the local nuances and the global context together. The addition of attention is new to the multimodal fusion field, and the stage at which the attention mechanism should be applied is still under scrutiny; this work achieves competitive overall and per-class classification accuracy, close to the current state of the art, with almost half the number of parameters.
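The delta-self-attention and cross-attention fusion mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual implementation: it assumes single-head, unparameterized scaled dot-product attention, frames the "delta" as differences between consecutive time steps, and uses made-up modality shapes purely for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention (single head, no learned projections)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def delta_self_attention(x):
    # "delta": local differences between consecutive time steps,
    # used here as queries/keys so attention focuses on local change
    delta = np.diff(x, axis=0, prepend=x[:1])
    return attention(delta, delta, x)

def cross_attention_fuse(a, b):
    # queries from one modality attend over the other modality
    return attention(a, b, b)

rng = np.random.default_rng(0)
visual = rng.standard_normal((10, 16))  # 10 time steps, 16-dim features (hypothetical)
audio = rng.standard_normal((10, 16))

v_att = delta_self_attention(visual)
a_att = delta_self_attention(audio)
fused = cross_attention_fuse(v_att, a_att)
print(fused.shape)  # (10, 16)
```

In a real model the queries, keys, and values would pass through learned projections and the fusion would cover all modality pairs (visual, verbal, vocal); this sketch only shows where the delta and cross-attention steps sit relative to each other.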
