Paper Title


Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots

Authors

Wenting Zhao, Ye Liu, Yao Wan, Philip S. Yu

Abstract


Few-shot table-to-text generation is the task of composing fluent and faithful sentences that convey table content from limited training data. Although fine-tuning powerful pre-trained language models has yielded impressively fluent sentences, the faithfulness of the generated content still needs improvement. To this end, this paper proposes Attend, Memorize and Generate (AMG), a novel approach inspired by the human text-generation process. In particular, AMG (1) attends over multiple granularities of context, using a novel strategy that combines table-slot-level and traditional token-level attention to exploit both table structure and natural linguistic information; (2) dynamically memorizes the table slot allocation states; and (3) generates faithful sentences conditioned on both the context and the memorized allocation states. Comprehensive experiments with human evaluation on three domains of the Wiki dataset (i.e., humans, songs, and books) show that our model generates higher-quality text than several state-of-the-art baselines in terms of both fluency and faithfulness.
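The multi-granularity attention described above can be pictured as mixing two distributions over the context: ordinary token-level attention, and a slot-level variant in which all tokens belonging to the same table slot share one weight. The sketch below is purely illustrative; the function name, shapes, and the interpolation `gate` are assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of multi-granularity attention: mix token-level
# attention with slot-level attention (tokens in one slot share a weight).
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def multi_granularity_attention(scores, slot_ids, gate=0.5):
    """scores: raw attention scores, one per context token.
    slot_ids: table-slot index for each token.
    gate: interpolation between token-level and slot-level attention."""
    token_attn = softmax(scores)  # traditional token-by-token attention

    # Slot level: pool token weights per slot, then spread the pooled
    # mass evenly over the tokens of that slot.
    slot_mass, slot_size = {}, {}
    for w, sid in zip(token_attn, slot_ids):
        slot_mass[sid] = slot_mass.get(sid, 0.0) + w
        slot_size[sid] = slot_size.get(sid, 0) + 1
    slot_attn = [slot_mass[sid] / slot_size[sid] for sid in slot_ids]

    # Interpolate the two granularities; the result is still a distribution.
    return [gate * t + (1 - gate) * s for t, s in zip(token_attn, slot_attn)]

weights = multi_granularity_attention([2.0, 1.0, 0.5, 0.1], [0, 0, 1, 1])
assert abs(sum(weights) - 1.0) < 1e-9
```

Because both components are valid probability distributions, any convex combination of them is as well, which keeps the mixed weights usable wherever standard attention weights are expected.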
