Paper Title

Gated Mechanism for Attention Based Multimodal Sentiment Analysis

Authors

Ayush Kumar, Jithendra Vepa

Abstract

Multimodal sentiment analysis has recently gained popularity because of its relevance to social media posts, customer service calls and video blogs. In this paper, we address three aspects of multimodal sentiment analysis; 1. Cross modal interaction learning, i.e. how multiple modalities contribute to the sentiment, 2. Learning long-term dependencies in multimodal interactions and 3. Fusion of unimodal and cross modal cues. Out of these three, we find that learning cross modal interactions is beneficial for this problem. We perform experiments on two benchmark datasets, CMU Multimodal Opinion level Sentiment Intensity (CMU-MOSI) and CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) corpus. Our approach on both these tasks yields accuracies of 83.9% and 81.1% respectively, which is 1.6% and 1.34% absolute improvement over current state-of-the-art.
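The abstract names three components: cross-modal interaction learning, long-term dependency modeling, and gated fusion of unimodal and cross-modal cues. Below is a minimal sketch of how such a gated cross-modal attention block could be wired up, assuming PyTorch. All names here (GatedCrossModalAttention, d_model, the GRU head, etc.) are hypothetical illustrations of the general technique, not the authors' exact architecture.

import torch
import torch.nn as nn


class GatedCrossModalAttention(nn.Module):
    """One modality (query) attends over another (context); a sigmoid gate
    decides how much cross-modal signal to mix into the unimodal cue.
    Illustrative sketch only; not the paper's reference implementation."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Gate conditioned on both the unimodal and the attended representation.
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, query: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # Cross-modal interaction: the query modality attends over the context modality.
        attended, _ = self.attn(query, context, context)
        # Gated fusion of unimodal and cross-modal cues.
        g = torch.sigmoid(self.gate(torch.cat([query, attended], dim=-1)))
        return g * attended + (1.0 - g) * query


if __name__ == "__main__":
    # Toy usage: fuse text and audio utterance sequences, then capture
    # long-term dependencies with a recurrent layer before classifying.
    batch, seq_len, d_model = 8, 20, 128
    text = torch.randn(batch, seq_len, d_model)
    audio = torch.randn(batch, seq_len, d_model)

    block = GatedCrossModalAttention(d_model)
    fused = block(text, audio)                 # (8, 20, 128)
    rnn = nn.GRU(d_model, d_model, batch_first=True)
    _, h = rnn(fused)                          # long-range context over the fused sequence
    logits = nn.Linear(d_model, 2)(h[-1])      # binary sentiment head
    print(logits.shape)                        # torch.Size([8, 2])

The gate lets the model fall back to the unimodal representation when the cross-modal signal is noisy, which is one plausible reading of why the paper finds cross-modal interaction learning beneficial only when fused carefully with unimodal cues.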
