Paper Title
Just ClozE! A Novel Framework for Evaluating the Factual Consistency Faster in Abstractive Summarization
Paper Authors
Paper Abstract
The issue of factual consistency in abstractive summarization has received extensive attention in recent years, and the evaluation of factual consistency between a summary and its source document has become an important and urgent task. Most current evaluation metrics are adopted from the question answering (QA) or natural language inference (NLI) tasks. However, QA-based metrics are extremely time-consuming to apply in practice, while NLI-based metrics lack interpretability. In this paper, we propose a cloze-based evaluation framework called ClozE and show the great potential of cloze-based metrics. It inherits the strong interpretability of QA while maintaining the speed of NLI-level reasoning. We demonstrate that ClozE reduces evaluation time by nearly 96% relative to QA-based metrics while retaining their interpretability and performance, through experiments on six human-annotated datasets and a meta-evaluation benchmark, GO FIGURE (Gabriel et al., 2021). Finally, we discuss three important facets of ClozE in practice, which further show the better overall performance of ClozE compared to other metrics.
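To make the general idea concrete, below is a minimal, hypothetical sketch of a cloze-style factual consistency check. It is not the authors' ClozE implementation; it only illustrates the recipe the abstract alludes to: mask factual spans in the summary, let a masked language model fill them while conditioning on the source document, and score consistency as the fraction of spans recovered correctly. The use of spaCy for span extraction, roberta-base as the fill-mask backbone, and exact-match scoring are all assumptions made for illustration.

```python
# Hypothetical sketch of a cloze-style factual consistency metric.
# NOT the authors' ClozE implementation; an illustration under assumptions:
#   - factual spans = named entities found by spaCy (assumed choice)
#   - backbone = roberta-base via the transformers fill-mask pipeline (assumed)
#   - scoring = case-insensitive exact match on the filled span (assumed)

import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")                     # entity extractor
filler = pipeline("fill-mask", model="roberta-base")   # masked LM filler


def cloze_consistency(document: str, summary: str) -> float:
    """Return the fraction of masked summary entities the model recovers
    correctly when given the source document as context."""
    entities = [ent.text for ent in nlp(summary).ents]
    if not entities:
        return 1.0  # nothing factual to check; treat as consistent

    correct = 0
    for ent in entities:
        # Mask one entity at a time and prepend the document, so the model
        # must rely on the source text to fill in the blank.
        masked_summary = summary.replace(ent, filler.tokenizer.mask_token, 1)
        prompt = document + " " + masked_summary
        # Note: long documents would need truncation or chunking to fit the
        # model's context window; omitted here for brevity.
        prediction = filler(prompt, top_k=1)[0]["token_str"].strip()
        if prediction.lower() == ent.lower():
            correct += 1
    return correct / len(entities)
```

Even in this simplified form, the design choice behind the speed claim is visible: each masked span costs one encoder forward pass, rather than the question generation plus question answering passes required by QA-based metrics, while the masked spans themselves remain inspectable, which is where the interpretability comes from. Multi-token spans and span selection would need more careful handling in a real implementation.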