Paper Title
Acrostic Poem Generation
Paper Authors
Paper Abstract
We propose a new task in the area of computational creativity: acrostic poem generation in English. Acrostic poems are poems that contain a hidden message; typically, the first letter of each line spells out a word or short phrase. We define the task as a generation task with multiple constraints: given an input word, 1) the initial letters of each line should spell out the provided word, 2) the poem's semantics should also relate to it, and 3) the poem should conform to a rhyming scheme. We further provide a baseline model for the task, which consists of a conditional neural language model in combination with a neural rhyming model. Since no dedicated datasets for acrostic poem generation exist, we create training data for our task by first training a separate topic prediction model on a small set of topic-annotated poems and then predicting topics for additional poems. Our experiments show that the acrostic poems generated by our baseline are received well by humans and do not lose much quality due to the additional constraints. Last, we confirm that poems generated by our model are indeed closely related to the provided prompts, and that pretraining on Wikipedia can boost performance.
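To make the first constraint concrete, the sketch below illustrates how the acrostic condition can be enforced during decoding: one line is generated per letter of the input word, and the first word of each line is restricted to candidates beginning with that letter. This is a toy illustration only, not the paper's implementation: the vocabulary is made up, `lm_score` is a random stand-in for the conditional neural language model, and the rhyming model and topic conditioning are omitted.

```python
import random

# Toy vocabulary; a real system would use the LM's full vocabulary.
VOCAB = ["autumn", "breeze", "carries", "dreams", "echoes",
         "whispers", "over", "silent", "rivers", "toward", "night"]

def lm_score(word, context):
    """Stand-in for the conditional neural LM's next-word score."""
    return random.random()

def generate_line(first_letter, context, length=5):
    # Hard acrostic constraint (constraint 1 in the abstract):
    # the line's first word must start with the given letter.
    candidates = [w for w in VOCAB if w.startswith(first_letter)]
    if not candidates:          # toy vocab may lack some letters; a real
        candidates = VOCAB      # model would mask the LM's output layer
    line = [max(candidates, key=lambda w: lm_score(w, context))]
    for _ in range(length - 1):
        line.append(max(VOCAB, key=lambda w: lm_score(w, context + line)))
    return " ".join(line)

def generate_acrostic(prompt_word):
    # One line per letter, conditioning each line on what came before.
    context, lines = [], []
    for letter in prompt_word.lower():
        line = generate_line(letter, context)
        lines.append(line)
        context += line.split()
    return "\n".join(lines)

print(generate_acrostic("dawn"))
```

With a trained language model in place of `lm_score`, the same filtering step yields lines whose initial letters spell out the prompt word while the remaining words are chosen freely by the model.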