Paper Title

Guaranteed Conformance of Neurosymbolic Models to Natural Constraints

Paper Authors

Kaustubh Sridhar, Souradeep Dutta, James Weimer, Insup Lee

Paper Abstract

Deep neural networks have emerged as the workhorse for a large section of robotics and control applications, especially as models for dynamical systems. Such data-driven models are in turn used for designing and verifying autonomous systems. They are particularly useful in modeling medical systems where data can be leveraged to individualize treatment. In safety-critical applications, it is important that the data-driven model is conformant to established knowledge from the natural sciences. Such knowledge is often available or can often be distilled into a (possibly black-box) model. For instance, an F1 racing car should conform to Newton's laws (which are encoded within a unicycle model). In this light, we consider the following problem: given a model $M$ and a state transition dataset, we wish to best approximate the system model while being a bounded distance away from $M$. We propose a method to guarantee this conformance. Our first step is to distill the dataset into a few representative samples called memories, using the idea of a growing neural gas. Next, using these memories, we partition the state space into disjoint subsets and compute bounds that should be respected by the neural network in each subset. This serves as a symbolic wrapper for guaranteed conformance. We argue theoretically that this only leads to a bounded increase in approximation error, which can be controlled by increasing the number of memories. We experimentally show that on three case studies (Car Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models conform to specified models (each encoding various constraints) with order-of-magnitude improvements compared to the augmented Lagrangian and vanilla training methods. Our code can be found at: https://github.com/kaustubhsridhar/Constrained_Models
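The first step above, compressing the transition dataset into a small set of representative "memories," uses a growing neural gas (GNG). As a reference point, here is a minimal NumPy sketch of a standard Fritzke-style GNG; the function name, hyperparameter values, and simplifications (e.g., no pruning of isolated units) are our own illustrative choices, not the authors' implementation, which is available in the linked repository.

```python
import numpy as np

def growing_neural_gas(X, n_memories=50, n_iters=20000, eps_b=0.05,
                       eps_n=0.005, max_age=50, insert_every=100,
                       alpha=0.5, decay=0.995, seed=0):
    """Distill dataset X (n_samples x dim) into <= n_memories prototypes.

    Compact Fritzke-style growing neural gas: prototypes ("memories") track
    the data distribution, and new prototypes are inserted where the
    accumulated quantization error is largest.
    """
    rng = np.random.default_rng(seed)
    nodes = [X[rng.integers(len(X))].copy() for _ in range(2)]  # two seed units
    errors = [0.0, 0.0]
    edges = {}  # (i, j) with i < j  ->  edge age

    for t in range(1, n_iters + 1):
        x = X[rng.integers(len(X))]
        d = [np.sum((w - x) ** 2) for w in nodes]
        s1, s2 = np.argsort(d)[:2]                # nearest, second nearest
        errors[s1] += d[s1]                       # accumulate local error
        nodes[s1] += eps_b * (x - nodes[s1])      # move winner toward sample
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1                # age edges at the winner
                nbr = j if i == s1 else i
                nodes[nbr] += eps_n * (x - nodes[nbr])  # drag neighbors
        edges[(min(s1, s2), max(s1, s2))] = 0     # (re)connect winner pair
        edges = {e: a for e, a in edges.items() if a <= max_age}  # prune old

        if t % insert_every == 0 and len(nodes) < n_memories:
            q = int(np.argmax(errors))            # node with largest error
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda k: errors[k])  # worst neighbor
                nodes.append(0.5 * (nodes[q] + nodes[f]))  # split the edge
                errors[q] *= alpha; errors[f] *= alpha
                errors.append(errors[q])
                r = len(nodes) - 1
                edges.pop((min(q, f), max(q, f)), None)
                edges[(min(q, r), max(q, r))] = 0
                edges[(min(f, r), max(f, r))] = 0
        errors = [e * decay for e in errors]      # global error decay
    return np.array(nodes)
```

Running, say, `growing_neural_gas(states, n_memories=100)` on the recorded states yields prototype points; it is these prototypes that later define the partition of the state space.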
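The memories then induce a nearest-neighbor (Voronoi) partition of the state space, and within each cell the network's prediction is forced to stay inside bounds derived from the reference model $M$. The PyTorch sketch below shows one plausible form of such a symbolic wrapper; the clamp-based projection, the single slack term `tol`, and the helper `cell_bounds` are hedged assumptions for illustration, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class ConformantWrapper(nn.Module):
    """Constrain a learned dynamics model to stay near a reference model M.

    The state space is partitioned into Voronoi cells of the memories; inside
    each cell, the network output is clamped to [lo, hi] bounds precomputed
    from M over that cell, so |f(x) - M(x)| stays bounded by construction.
    """
    def __init__(self, net, memories, lo, hi):
        super().__init__()
        self.net = net                               # unconstrained dynamics model
        self.register_buffer("memories", memories)   # (K, state_dim)
        self.register_buffer("lo", lo)               # (K, out_dim) lower bounds
        self.register_buffer("hi", hi)               # (K, out_dim) upper bounds

    def forward(self, x):
        # Assign each state to its nearest memory (its Voronoi cell index).
        cell = torch.cdist(x, self.memories).argmin(dim=1)
        raw = self.net(x)
        # Project the raw prediction into the cell's admissible band.
        return torch.clamp(raw, self.lo[cell], self.hi[cell])

def cell_bounds(M, memories, tol):
    """Per-cell bounds: the reference model at each memory, widened by tol.

    (The paper's bounds also account for M's variation over the whole cell;
    here that is folded into the single slack term tol for brevity.)
    """
    with torch.no_grad():
        center = M(memories)                         # (K, out_dim)
    return center - tol, center + tol
```

Because the output is clamped into a band around $M$ in every cell, conformance holds by construction at inference time, and refining the partition with more memories tightens the extra approximation error, consistent with the theoretical claim in the abstract.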
