Title

Continuous Safety Verification of Neural Networks

Authors

Chih-Hong Cheng and Rongjie Yan

Abstract

Deploying deep neural networks (DNNs) as core functions in autonomous driving creates unique verification and validation challenges. In particular, the continuous engineering paradigm of gradually perfecting DNN-based perception can invalidate previously established safety verification results. This can occur either due to newly encountered examples (i.e., input domain enlargement) inside the Operational Design Domain or due to subsequent parameter fine-tuning of the DNN. This paper considers approaches for transferring results established in a previous DNN safety verification problem to the modified problem setting. By considering the reuse of state abstractions, network abstractions, and Lipschitz constants, we develop several sufficient conditions that require formally analyzing only a small part of the DNN in the new problem. The overall concept is evaluated on a $1/10$-scale vehicle equipped with a DNN controller that determines visual waypoints from perceived images.
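The paper's exact sufficient conditions are not reproduced here, but the general idea of reusing Lipschitz constants across verification runs can be illustrated with a minimal sketch. Assuming a feed-forward ReLU network verified for $L_2$ robustness, the product of per-layer spectral norms gives a global Lipschitz upper bound; if fine-tuning touches only the final layer, the cached norms of the unchanged layers can be reused, so only the modified layer needs re-analysis. All names and numeric values below are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def layer_spectral_norms(weights):
    """Spectral norm (largest singular value) of each weight matrix."""
    return [np.linalg.norm(W, 2) for W in weights]

def lipschitz_bound(spectral_norms):
    """Product of per-layer spectral norms: a global L2 Lipschitz upper
    bound for a feed-forward ReLU network (ReLU is 1-Lipschitz)."""
    return float(np.prod(spectral_norms))

def certified_robust(logits, L, eps):
    """Sufficient condition: if the top-1 classification margin exceeds
    2 * L * eps, no L2 input perturbation of norm <= eps can change
    which logit is largest, so the prediction is certifiably stable."""
    top2 = np.sort(logits)[-2:]
    margin = top2[1] - top2[0]
    return margin > 2.0 * L * eps

# Hypothetical scenario: only the last layer was fine-tuned, so the
# spectral norms of all earlier layers are reused from the previous
# verification run instead of being recomputed.
old_norms = [2.1, 1.8, 3.0]           # cached from the original analysis
W_last_new = np.random.randn(10, 64)  # fine-tuned final layer (placeholder)
new_norms = old_norms[:-1] + [np.linalg.norm(W_last_new, 2)]
L_new = lipschitz_bound(new_norms)

logits = np.random.randn(10)          # network output at a test input (placeholder)
print(certified_robust(logits, L_new, eps=0.01))
```

The cached-norm reuse mirrors the abstract's point that only a small part of the network needs fresh formal analysis after an update; the $2 L \varepsilon$ margin test is one standard Lipschitz-based robustness certificate, chosen here for simplicity rather than tightness.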
