Paper Title

Recitation-Augmented Language Models

Paper Authors

Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, Denny Zhou

Paper Abstract

We propose a new paradigm to help Large Language Models (LLMs) generate more accurate factual knowledge without retrieving from an external corpus, called RECITation-augmented gEneration (RECITE). Different from retrieval-augmented language models that retrieve relevant documents before generating the outputs, given an input, RECITE first recites one or several relevant passages from LLMs' own memory via sampling, and then produces the final answers. We show that RECITE is a powerful paradigm for knowledge-intensive NLP tasks. Specifically, we show that by utilizing recitation as the intermediate step, a recite-and-answer scheme can achieve new state-of-the-art performance in various closed-book question answering (CBQA) tasks. In experiments, we verify the effectiveness of RECITE on four pre-trained models (PaLM, UL2, OPT, and Codex) and three CBQA tasks (Natural Questions, TriviaQA, and HotpotQA). Our code is available at https://github.com/Edward-Sun/RECITE.
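
To make the two-step scheme concrete, here is a minimal Python sketch of the recite-and-answer flow the abstract describes. The `generate` callable, the prompt wording, and the `num_recitations` parameter are illustrative assumptions, not the paper's exact implementation; see the linked repository for the authors' code.

```python
# Minimal sketch of the recite-and-answer scheme from the abstract.
# `generate` stands in for any LLM completion call (hypothetical interface);
# prompts and sampling setup here are assumptions, not the paper's exact ones.
from typing import Callable, List


def recite_and_answer(
    question: str,
    generate: Callable[[str], str],
    num_recitations: int = 3,
) -> str:
    # Step 1: recite -- sample several relevant passages from the model's
    # own memory (a real LLM call would use temperature > 0 so the samples differ).
    recite_prompt = (
        "Recite a passage that is relevant to the question.\n"
        f"Question: {question}\nPassage:"
    )
    passages: List[str] = [generate(recite_prompt) for _ in range(num_recitations)]

    # Step 2: answer -- condition the final answer on the recited passages.
    context = "\n".join(passages)
    answer_prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    return generate(answer_prompt)


if __name__ == "__main__":
    # Stub generator for demonstration; replace with a real LLM API call.
    stub = lambda prompt: "(model output)"
    print(recite_and_answer("Who wrote Hamlet?", stub))
```

The key design point is that, unlike retrieval augmentation, no external corpus is consulted: the "context" in step 2 is itself generated by the model in step 1.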
