Paper Title

On the Paradox of Learning to Reason from Data

Authors

Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck

Abstract

Logical reasoning is needed in a wide range of NLP tasks. Can a BERT model be trained end-to-end to solve logical reasoning problems presented in natural language? We attempt to answer this question in a confined problem space where there exists a set of parameters that perfectly simulates logical reasoning. We make observations that seem to contradict each other: BERT attains near-perfect accuracy on in-distribution test examples while failing to generalize to other data distributions over the exact same problem space. Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has in fact learned statistical features that inherently exist in logical reasoning problems. We also show that it is infeasible to jointly remove statistical features from data, illustrating the difficulty of learning to reason in general. Our result naturally extends to other neural models and unveils the fundamental difference between learning to reason and learning to achieve high performance on NLP benchmarks using statistical features.
