Paper Title

When Do Decompositions Help for Machine Reading?

Paper Authors

Kangda Wei, Dawn Lawrie, Benjamin Van Durme, Yunmo Chen, Orion Weller

Paper Abstract

Answering complex questions often requires multi-step reasoning in order to obtain the final answer. Most research into decompositions of complex questions involves open-domain systems, which have shown success in using these decompositions for improved retrieval. In the machine reading setting, however, work to understand when decompositions are helpful is understudied. We conduct experiments on decompositions in machine reading to unify recent work in this space, using a range of models and datasets. We find that decompositions can be helpful in the few-shot case, giving several points of improvement in exact match scores. However, we also show that when models are given access to datasets with around a few hundred or more examples, decompositions are not helpful (and can actually be detrimental). Thus, our analysis implies that models can learn decompositions implicitly even with limited data.
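The abstract refers to decomposing a complex question into sub-questions for a machine reading model and scoring predictions with exact match. Below is a minimal, hypothetical sketch of such a pipeline, not the paper's code: the read function is a toy placeholder for a trained reading-comprehension model, sub-answers are threaded through "#1"-style slots (as in QDMR-style decompositions), and exact_match illustrates the standard normalization-based metric. All function names and the reader heuristic are assumptions for illustration only.

```python
# Hypothetical sketch of a decomposition-based machine reading pipeline.
# Not from the paper; the reader below is a toy stand-in for a trained QA model.
import re
import string


def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (EM-style normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold: str) -> int:
    """Return 1 if the normalized prediction equals the normalized gold answer, else 0."""
    return int(normalize(prediction) == normalize(gold))


def read(question: str, passage: str) -> str:
    """Placeholder single-hop reader: picks the passage sentence with most word overlap."""
    sentences = passage.split(". ")
    q_words = set(normalize(question).split())
    return max(sentences, key=lambda s: len(q_words & set(normalize(s).split())))


def answer_with_decomposition(sub_questions: list[str], passage: str) -> str:
    """Answer sub-questions in order, substituting earlier answers into '#1'-style slots."""
    answers: list[str] = []
    for sq in sub_questions:
        for i, prev in enumerate(answers, start=1):
            sq = sq.replace(f"#{i}", prev)  # feed earlier sub-answers forward
        answers.append(read(sq, passage))
    return answers[-1]  # the last sub-answer serves as the answer to the complex question
```

In this setup, the comparison the paper studies amounts to running the reader once on the original complex question versus running answer_with_decomposition over its sub-questions, then comparing exact match scores of the two strategies as the amount of training data grows.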
