Paper Title
Jointly Optimizing State Operation Prediction and Value Generation for Dialogue State Tracking
Paper Authors
Paper Abstract
We investigate the problem of multi-domain Dialogue State Tracking (DST) with an open vocabulary. Existing approaches exploit a BERT encoder and a copy-based RNN decoder, where the encoder predicts state operations and the decoder generates new slot values. However, in such a stacked encoder-decoder structure, the operation prediction objective only affects the BERT encoder, while the value generation objective mainly affects the RNN decoder. In this paper, we propose a purely Transformer-based framework in which a single BERT works as both the encoder and the decoder. In this way, the operation prediction objective and the value generation objective can jointly optimize this BERT for DST. At the decoding step, we re-use the hidden states of the encoder in the self-attention mechanism of the corresponding decoder layers to construct a flat encoder-decoder architecture for effective parameter updating. Experimental results show that our approach substantially outperforms the existing state-of-the-art framework, and it also achieves highly competitive performance compared with the best ontology-based approaches.
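The key mechanism described above is reusing the encoder's hidden states inside the decoder's self-attention so that both objectives update the same BERT parameters. The following is a minimal numpy sketch of that idea, not the paper's actual implementation: the function name `flat_decoder_self_attention` and the single-head, projection-free attention are simplifying assumptions made here for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def flat_decoder_self_attention(dec_states, enc_states):
    """Hypothetical single-head sketch of a 'flat' decoder layer:
    the decoder's queries attend over the concatenation of the
    reused encoder hidden states and its own states, instead of
    using a separate cross-attention over a stacked encoder."""
    d_k = dec_states.shape[-1]
    # Keys/values come from encoder states + decoder states jointly,
    # so the decoder layer directly reuses the encoder's hidden states.
    kv = np.concatenate([enc_states, dec_states], axis=0)
    scores = dec_states @ kv.T / np.sqrt(d_k)   # (T_dec, T_enc + T_dec)
    attn = softmax(scores, axis=-1)
    return attn @ kv                            # (T_dec, d_model)

# Toy example: 4 encoder tokens, 2 decoder tokens, hidden size 8.
rng = np.random.default_rng(0)
enc = rng.standard_normal((4, 8))
dec = rng.standard_normal((2, 8))
out = flat_decoder_self_attention(dec, enc)
print(out.shape)  # (2, 8)
```

Because the attended states are the encoder's own hidden states rather than a separate cross-attention memory, gradients from the value generation loss flow back into the shared encoder, which is the intuition behind jointly optimizing one BERT for both objectives.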