Paper Title

A Unified View of SDP-based Neural Network Verification through Completely Positive Programming

Paper Authors

Robin Brown, Edward Schmerling, Navid Azizan, Marco Pavone

Paper Abstract

Verifying that input-output relationships of a neural network conform to prescribed operational specifications is a key enabler towards deploying these networks in safety-critical applications. Semidefinite programming (SDP)-based approaches to Rectified Linear Unit (ReLU) network verification transcribe this problem into an optimization problem, where the accuracy of any such formulation reflects the level of fidelity in how the neural network computation is represented, as well as the relaxations of intractable constraints. While the literature contains much progress on improving the tightness of SDP formulations while maintaining tractability, comparatively little work has been devoted to the other extreme, i.e., how to most accurately capture the original verification problem before SDP relaxation. In this work, we develop an exact, convex formulation of verification as a completely positive program (CPP), and provide analysis showing that our formulation is minimal -- the removal of any constraint fundamentally misrepresents the neural network computation. We leverage our formulation to provide a unifying view of existing approaches, and give insight into the source of large relaxation gaps observed in some cases.
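To make the optimization setup in the abstract concrete, the following is a minimal sketch in standard notation (not taken from the paper): the exact quadratic description of a single ReLU activation, its matrix lifting, and the two cones, positive semidefinite versus completely positive, that separate an SDP relaxation from an exact convex formulation.

% A single ReLU activation admits an exact quadratic description:
\[
  y = \max(x, 0)
  \quad\Longleftrightarrow\quad
  y \ge x, \qquad y \ge 0, \qquad y\,(y - x) = 0 .
\]
% Lifting v = (1, x, y)^T and P = v v^T makes the bilinear constraint
% linear in P, since y^2 - x y = P_{33} - P_{23}:
\[
  P = v v^{\top}, \qquad P_{11} = 1, \qquad P_{33} - P_{23} = 0 .
\]
% An SDP relaxation keeps P \succeq 0 and drops the rank-one condition;
% a completely positive program instead constrains P to the cone
\[
  \mathcal{C}^{*} = \operatorname{conv}\left\{\, z z^{\top} : z \ge 0 \,\right\},
\]
% which is convex and, for suitably bounded and shifted (nonnegative)
% variables, exact, at the price of a cone whose membership problem is
% NP-hard.

The gap between these two cones is one way to read the abstract's claim: the SDP relaxations in the literature operate after the rank-one condition is dropped, whereas the CPP formulation stays exact and convex, locating all intractability in the cone itself.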
