Paper Title
Unravelling the Performance of Physics-informed Graph Neural Networks for Dynamical Systems
Paper Authors
Paper Abstract
Recently, graph neural networks have been gaining significant attention for simulating dynamical systems, owing to their inductive nature, which enables zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing body of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely, Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulations, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training systems, thus providing a promising route to simulate large-scale realistic systems.
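To make the Hamiltonian inductive bias mentioned in the abstract concrete, the sketch below (an illustration, not the paper's code) shows the core idea: rather than predicting accelerations directly, a Hamiltonian model learns a scalar energy H(q, p) and obtains the dynamics from Hamilton's equations, dq/dt = dH/dp and dp/dt = -dH/dq. Here a known Hamiltonian for a unit-mass spring stands in for the learned network, and a symplectic rollout illustrates the energy conservation that the paper uses as an evaluation metric.

```python
def hamiltonian(q, p, k=1.0):
    """Total energy of a 1D harmonic oscillator: kinetic + potential.
    In a Hamiltonian neural network, this scalar would be the network's output."""
    return 0.5 * p * p + 0.5 * k * q * q

def grad_H(q, p, k=1.0, eps=1e-6):
    """Central-difference dH/dq and dH/dp (a learned model would use autodiff)."""
    dHdq = (hamiltonian(q + eps, p, k) - hamiltonian(q - eps, p, k)) / (2 * eps)
    dHdp = (hamiltonian(q, p + eps, k) - hamiltonian(q, p - eps, k)) / (2 * eps)
    return dHdq, dHdp

def rollout(q, p, dt=0.01, steps=1000):
    """Symplectic Euler integration of Hamilton's equations."""
    traj = [(q, p)]
    for _ in range(steps):
        dHdq, _ = grad_H(q, p)
        p = p - dt * dHdq          # dp/dt = -dH/dq
        _, dHdp = grad_H(q, p)
        q = q + dt * dHdp          # dq/dt =  dH/dp
        traj.append((q, p))
    return traj

traj = rollout(q=1.0, p=0.0)
e_start = hamiltonian(*traj[0])
e_end = hamiltonian(*traj[-1])
print(abs(e_end - e_start))  # energy drift stays small over the rollout
```

Because the vector field is derived from a single scalar H, energy conservation is built into the model structure rather than learned from data; the paper's comparison of conserved quantities across architectures probes exactly this kind of bias.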