Paper Title
Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training
Paper Authors
Abstract
The intrinsic complexity of deep neural networks (DNNs) makes it challenging to verify not only the networks themselves but also the hosting DNN-controlled systems. Reachability analysis of these systems faces the same challenge. Existing approaches rely on over-approximating DNNs using simpler polynomial models. However, they suffer from low efficiency and large overestimation, and are restricted to specific types of DNNs. This paper presents a novel abstraction-based approach to bypass the crux of over-approximating DNNs in reachability analysis. Specifically, we extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training. The inserted abstraction layer ensures that the values represented by an interval are indistinguishable to the network for both training and decision-making. Leveraging this, we devise the first black-box reachability analysis approach for DNN-controlled systems, where trained DNNs are only queried as black-box oracles for the actions on abstract states. Our approach is sound, tight, efficient, and agnostic to any DNN type and size. The experimental results on a wide range of benchmarks show that the DNNs trained by using our approach exhibit comparable performance, while the reachability analysis of the corresponding systems becomes more amenable, with significant tightness and efficiency improvements over the state-of-the-art white-box approaches.
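To make the core idea concrete, the sketch below shows one plausible way an abstraction layer of the kind the abstract describes could work: each real-valued input is snapped to the fixed-width interval containing it, and the network only ever sees the interval's endpoints, so all concrete values inside the same interval are indistinguishable to it. The function name, interval grid, and endpoint encoding here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def abstraction_layer(x, lower, step):
    """Hypothetical abstraction layer: map each component of a real-valued
    state x to the interval of width `step` (on a grid starting at `lower`)
    that contains it, returning the interval endpoints as the network input."""
    x = np.asarray(x, dtype=float)
    idx = np.floor((x - lower) / step)   # index of the containing interval
    lo = lower + idx * step              # interval lower bounds
    hi = lo + step                       # interval upper bounds
    # The downstream DNN consumes (lo, hi), so every concrete value in
    # [lo, hi) produces exactly the same abstract input.
    return np.concatenate([lo, hi])

# Two different concrete states that fall into the same intervals
# yield identical abstract inputs, hence identical network actions:
a = abstraction_layer([0.23, -1.7], lower=-2.0, step=0.5)
b = abstraction_layer([0.40, -1.6], lower=-2.0, step=0.5)
assert np.array_equal(a, b)
```

This indistinguishability is what enables the black-box reachability analysis described above: since the trained network's action is constant on each abstract state, the analysis can query the network once per interval instead of over-approximating its behavior on infinitely many concrete states.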