Paper Title
Multi-scale Cooperative Multimodal Transformers for Multimodal Sentiment Analysis in Videos
Paper Authors
Paper Abstract
Multimodal sentiment analysis in videos is a key task in many real-world applications, and it usually requires integrating multimodal streams including visual, verbal, and acoustic behaviors. To improve the robustness of multimodal fusion, some existing methods let different modalities communicate with each other and model the crossmodal interactions via transformers. However, these methods use only single-scale representations during the interaction and fail to exploit multi-scale representations, which contain different levels of semantic information. As a result, the representations learned by the transformers can be biased, especially for unaligned multimodal data. In this paper, we propose a multi-scale cooperative multimodal transformer (MCMulT) architecture for multimodal sentiment analysis. Overall, the "multi-scale" mechanism exploits the different levels of semantic information in each modality for fine-grained crossmodal interactions. Meanwhile, each modality learns its feature hierarchies by integrating crossmodal interactions with multi-level features of its source modality. In this way, each pair of modalities progressively builds its feature hierarchies in a cooperative manner. Empirical results show that our MCMulT model not only outperforms existing approaches on unaligned multimodal sequences but also performs strongly on aligned multimodal sequences.
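The crossmodal interaction the abstract refers to is built on crossmodal attention: one (target) modality forms the queries, while another (source) modality supplies the keys and values, so the two sequences need not be time-aligned or of equal length. The following is a minimal single-scale numpy sketch of that core mechanism only, not the full multi-scale MCMulT architecture; the projection weights, feature dimensions, and sequence lengths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def crossmodal_attention(target, source, d):
    """Let `target` attend to `source`: queries come from the target
    modality; keys and values come from the source modality."""
    # random projections stand in for learned weights (illustrative only)
    Wq = rng.standard_normal((target.shape[-1], d))
    Wk = rng.standard_normal((source.shape[-1], d))
    Wv = rng.standard_normal((source.shape[-1], d))
    Q, K, V = target @ Wq, source @ Wk, source @ Wv
    scores = softmax(Q @ K.T / np.sqrt(d))  # (len_target, len_source)
    return scores @ V                       # (len_target, d)

# Unaligned sequences: 10 text steps vs. 37 acoustic frames
text = rng.standard_normal((10, 300))   # e.g. word embeddings
audio = rng.standard_normal((37, 74))   # e.g. acoustic features
fused = crossmodal_attention(text, audio, d=64)
print(fused.shape)  # (10, 64): one audio-informed vector per text step
```

Because the attention matrix has shape (len_target, len_source), the output always follows the target sequence's length, which is why crossmodal transformers handle unaligned multimodal data without explicit word-level alignment.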