Paper Title

Input Validation for Neural Networks via Runtime Local Robustness Verification

Paper Authors

Jiangchao Liu, Liqian Chen, Antoine Miné, Ji Wang

Paper Abstract

Local robustness verification can verify that a neural network is robust with respect to any perturbation of a specific input within a certain distance. We call this distance the robustness radius. We observe that the robustness radii of correctly classified inputs are much larger than those of misclassified inputs, which include adversarial examples, especially those produced by strong adversarial attacks. Another observation is that the robustness radii of correctly classified inputs often follow a normal distribution. Based on these two observations, we propose to validate inputs for neural networks via runtime local robustness verification. Experiments show that our approach can protect neural networks from adversarial examples and improve their accuracy.
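A minimal sketch of the validation idea described in the abstract, assuming a hypothetical `measure_radius` oracle (any off-the-shelf local robustness verifier) and a normal-distribution-based cutoff; the helper names and the factor `k` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_threshold(radii, k=2.0):
    """Fit a normal distribution to the robustness radii of correctly
    classified validation inputs and derive a rejection threshold.
    Radii below mu - k*sigma are treated as suspiciously small."""
    mu, sigma = float(np.mean(radii)), float(np.std(radii))
    return mu - k * sigma

def validate_input(radius, threshold):
    """Accept an input only if its robustness radius reaches the
    threshold; small radii correlate with misclassified or
    adversarial inputs."""
    return radius >= threshold

# Usage sketch (measure_radius is a stand-in for a local robustness verifier):
# radii = [measure_radius(x) for x in correctly_classified_validation_set]
# t = fit_threshold(radii)
# if not validate_input(measure_radius(runtime_input), t):
#     # likely adversarial or misclassified; reject rather than classify
#     reject(runtime_input)
```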
