Paper Title
On the probability-quality paradox in language generation
Paper Authors
Paper Abstract
When generating natural language from neural probabilistic models, high probability does not always coincide with high quality: It has often been observed that mode-seeking decoding methods, i.e., those that produce high-probability text under the model, lead to unnatural language. On the other hand, the lower-probability text generated by stochastic methods is perceived as more human-like. In this note, we offer an explanation for this phenomenon by analyzing language generation through an information-theoretic lens. Specifically, we posit that human-like language should contain an amount of information (quantified as negative log-probability) that is close to the entropy of the distribution over natural strings. Further, we posit that language with substantially more (or less) information is undesirable. We provide preliminary empirical evidence in favor of this hypothesis; quality ratings of both human and machine-generated text -- covering multiple tasks and common decoding strategies -- suggest high-quality text has an information content significantly closer to the entropy than we would expect by chance.
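The abstract's central quantity can be illustrated with a toy distribution: the information content (surprisal) of a string is its negative log-probability, and the entropy is the expected surprisal. The sketch below uses a made-up three-string distribution purely for illustration (the names `p`, `information`, and `entropy` are assumptions, not from the paper); it shows how the single highest-probability string, which mode-seeking decoding would select, has information content below the entropy.

```python
import math

# Toy stand-in for a model's distribution over natural-language strings
# (purely illustrative; not the paper's data).
p = {"a": 0.5, "b": 0.3, "c": 0.2}

def information(y):
    """Information content (surprisal) of string y: -log2 p(y), in bits."""
    return -math.log2(p[y])

# Entropy: the expected information content under the distribution.
entropy = sum(q * -math.log2(q) for q in p.values())

# The hypothesis: human-like strings have surprisal close to the entropy.
# The mode "a" falls below it; the rarest string "c" lies above it.
print(information("a"))  # 1.0 bit, below the entropy (~1.49 bits)
print(information("c"))  # ~2.32 bits, above the entropy
```

Under this view, always picking the mode systematically under-shoots the entropy, which is one way to read the paper's explanation of why mode-seeking decoding yields unnaturally low-information text.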