Paper Title

Going Beyond Approximation: Encoding Constraints for Explainable Multi-hop Inference via Differentiable Combinatorial Solvers

Paper Authors

Mokanarangan Thayaparan, Marco Valentino, André Freitas

Paper Abstract

Integer Linear Programming (ILP) provides a viable mechanism to encode explicit and controllable assumptions about explainable multi-hop inference with natural language. However, an ILP formulation is non-differentiable and cannot be integrated into broader deep learning architectures. Recently, Thayaparan et al. (2021a) proposed a novel methodology to integrate ILP with Transformers to achieve end-to-end differentiability for complex multi-hop inference. While this hybrid framework has been demonstrated to deliver better answer and explanation selection than Transformer-based models and existing ILP solvers, the neuro-symbolic integration still relies on a convex relaxation of the ILP formulation, which can produce sub-optimal solutions. To address these limitations, we propose Diff-Comb Explainer, a novel neuro-symbolic architecture based on Differentiable Blackbox Combinatorial Solvers (DBCS) (Pogančić et al., 2019). Unlike existing differentiable solvers, the presented model does not require the transformation and relaxation of the explicit semantic constraints, allowing for direct and more efficient integration of ILP formulations. Diff-Comb Explainer demonstrates improved accuracy and explainability over non-differentiable solvers, Transformers, and existing differentiable constraint-based multi-hop inference frameworks.
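For context on the mechanism the abstract refers to, the following is a minimal PyTorch sketch of the blackbox-differentiation scheme of Pogančić et al. that Diff-Comb Explainer builds on. The names `BlackboxILP`, `solve`, and `lam` are illustrative assumptions, not the authors' implementation: `solve` stands in for any exact ILP oracle that takes a cost vector and returns the optimal integer assignment (minimisation form), and `lam` is the interpolation hyperparameter λ from the DBCS paper.

```python
import torch

class BlackboxILP(torch.autograd.Function):
    """Sketch of blackbox differentiation through an ILP solver
    (Pogančić et al.). `solve` is a hypothetical oracle returning
    y(w) = argmin_y <w, y> over the feasible set defined by the
    ILP constraints; it is never relaxed or transformed."""

    @staticmethod
    def forward(ctx, w, solve, lam):
        # Forward pass: a single exact ILP solve on the predicted costs.
        y = solve(w.detach())
        ctx.save_for_backward(w, y)
        ctx.solve = solve
        ctx.lam = lam
        return y

    @staticmethod
    def backward(ctx, grad_y):
        w, y = ctx.saved_tensors
        # Backward pass: perturb the costs with the incoming gradient
        # and re-solve. The difference of the two solutions gives the
        # gradient of a piecewise-affine interpolation of the solver
        # mapping, so no convex relaxation of the ILP is required.
        w_prime = w + ctx.lam * grad_y
        y_lam = ctx.solve(w_prime.detach())
        grad_w = -(y - y_lam) / ctx.lam
        return grad_w, None, None
```

In a setup like the one the abstract describes, `w` would be the relevance scores produced by a Transformer encoder, and λ trades off the informativeness of the gradient against its faithfulness to the original discrete mapping.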
