Paper Title

Co-GAT: A Co-Interactive Graph Attention Network for Joint Dialog Act Recognition and Sentiment Classification

Paper Authors

Libo Qin, Zhouyang Li, Wanxiang Che, Minheng Ni, Ting Liu

Abstract

In a dialog system, dialog act recognition and sentiment classification are two correlative tasks for capturing speakers' intentions, where dialog act and sentiment indicate the explicit and the implicit intentions, respectively. The dialog context information (contextual information) and the mutual interaction information are two key factors that contribute to the two related tasks. Unfortunately, none of the existing approaches consider the two important sources of information simultaneously. In this paper, we propose a Co-Interactive Graph Attention Network (Co-GAT) to jointly perform the two tasks. The core module is the proposed co-interactive graph interaction layer, in which cross-utterance connections and cross-task connections are constructed and iteratively updated with each other, allowing the two types of information to be considered simultaneously. Experimental results on two public datasets show that our model successfully captures the two sources of information and achieves state-of-the-art performance. In addition, we find that the contributions from the contextual and mutual interaction information do not fully overlap with contextualized word representations (BERT, RoBERTa, XLNet).
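The abstract's core idea can be illustrated with a toy sketch: build a graph whose nodes are task-specific utterance representations (one dialog-act node and one sentiment node per utterance), connect nodes of the same task across utterances (cross-utterance edges) and the two task nodes of the same utterance (cross-task edges), then iteratively update all nodes with masked graph attention. This is a minimal illustration of the general mechanism, not the paper's actual architecture; the function names, single-head attention scoring, and random toy features are all assumptions for demonstration.

```python
import numpy as np

def graph_attention(H, adj, W):
    """One masked graph-attention step (illustrative single-head variant).

    H:   (n, d) node features
    adj: (n, n) binary adjacency; 1 means the row node attends to the column node
    W:   (d, d) learned projection (random here for the sketch)
    """
    scores = (H @ W) @ H.T                          # (n, n) pairwise scores
    scores = np.where(adj > 0, scores, -1e9)        # mask out non-neighbors
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ H                              # aggregate neighbor features

# Toy dialog with 3 utterances; each utterance contributes two task nodes
# (a dialog-act node and a sentiment node), so 6 nodes total.
rng = np.random.default_rng(0)
d = 4
H = rng.normal(size=(6, d))          # rows 0-2: DA nodes, rows 3-5: sentiment nodes
W = rng.normal(size=(d, d)) * 0.1

adj = np.zeros((6, 6))
adj[:3, :3] = 1                      # cross-utterance edges among DA nodes
adj[3:, 3:] = 1                      # cross-utterance edges among sentiment nodes
for i in range(3):
    adj[i, 3 + i] = adj[3 + i, i] = 1  # cross-task edges within each utterance
np.fill_diagonal(adj, 1)             # self-loops

# Iteratively update the joint graph so contextual and mutual
# interaction information propagate through both edge types.
for _ in range(2):
    H = H + graph_attention(H, adj, W)   # residual update

print(H.shape)   # (6, 4): updated DA and sentiment node representations
```

The point of the sketch is the adjacency construction: because both edge types live in one graph, a single attention update mixes contextual information (same-task neighbors) with mutual interaction information (the paired task node), which is the simultaneity the abstract emphasizes.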
