Paper Title
Interpretable NLG for Task-oriented Dialogue Systems with Heterogeneous Rendering Machines
Paper Authors
Paper Abstract
End-to-end neural networks have achieved promising performance in natural language generation (NLG). However, they are treated as black boxes and lack interpretability. To address this problem, we propose a novel framework, heterogeneous rendering machines (HRM), which interprets how neural generators render an input dialogue act (DA) into an utterance. HRM consists of a renderer set and a mode switcher. The renderer set contains multiple decoders that vary in both structure and functionality. At each generation step, the mode switcher selects an appropriate decoder from the renderer set to generate an item (a word or a phrase). To verify the effectiveness of our method, we have conducted extensive experiments on five benchmark datasets. In terms of automatic metrics (e.g., BLEU), our model is competitive with the current state-of-the-art method. Qualitative analysis shows that our model can interpret the rendering process of neural generators well. Human evaluation also confirms the interpretability of our proposed approach.
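The abstract's core mechanism (a renderer set of heterogeneous decoders plus a mode switcher that picks one decoder per generation step) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: all names (`copy_renderer`, `phrase_renderer`, `mode_switcher`, `render_dialogue_act`) are hypothetical, and a hand-written switching rule stands in for the learned switcher.

```python
from typing import Callable, Dict, List, Tuple

# Renderer set: each renderer turns a (slot, value) pair into a text item.
# In the paper these are decoders differing in structure and functionality;
# here they are toy functions for illustration only.
def copy_renderer(slot: str, value: str) -> str:
    # Copies the slot value verbatim (e.g., for names or numbers).
    return value

def phrase_renderer(slot: str, value: str) -> str:
    # Verbalizes the slot as a short phrase.
    return f"with {slot} {value}"

RENDERERS: Dict[str, Callable[[str, str], str]] = {
    "copy": copy_renderer,
    "phrase": phrase_renderer,
}

def mode_switcher(slot: str) -> str:
    # Hand-written stand-in for the learned switcher: copy
    # proper-noun-like slots, verbalize the rest.
    return "copy" if slot in {"name", "address"} else "phrase"

def render_dialogue_act(da: List[Tuple[str, str]]) -> str:
    # At each generation step the switcher selects one decoder from the
    # renderer set, so every emitted item is attributable to a renderer,
    # which is the source of the interpretability claimed in the abstract.
    items = []
    for slot, value in da:
        renderer = RENDERERS[mode_switcher(slot)]
        items.append(renderer(slot, value))
    return " ".join(items)

da = [("name", "Loch Fyne"), ("food", "seafood"), ("price", "moderate")]
print(render_dialogue_act(da))
# → Loch Fyne with food seafood with price moderate
```

The design point the sketch captures is that the generation trace doubles as an explanation: logging which renderer fired at each step yields a step-by-step account of how the dialogue act was rendered.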