Paper Title
Learning Subgrid-scale Models with Neural Ordinary Differential Equations
Paper Authors
Paper Abstract
We propose a new approach to learning the subgrid-scale model when simulating partial differential equations (PDEs) solved by the method of lines and their representation in chaotic ordinary differential equations, based on neural ordinary differential equations (NODEs). Solving systems with fine temporal and spatial grid scales is an ongoing computational challenge, and closure models are generally difficult to tune. Machine learning approaches have increased the accuracy and efficiency of computational fluid dynamics solvers. In this approach, neural networks are used to learn the coarse- to fine-grid map, which can be viewed as a subgrid-scale parameterization. We propose a strategy that uses the NODE and partial knowledge to learn the source dynamics at a continuous level. Our method inherits the advantages of NODEs and can be used to parameterize subgrid scales, approximate coupling operators, and improve the efficiency of low-order solvers. Numerical results with the two-scale Lorenz 96 ODE, the convection-diffusion PDE, and the viscous Burgers' PDE are used to illustrate this approach.
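To make the idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a NODE-based subgrid-scale closure: a known coarse-grid right-hand side (here the slow part of the two-scale Lorenz 96 system) is augmented with a neural-network term, and the combined dynamics are trained by differentiating through the ODE solve against reference trajectories. The class name `Closure`, the network size, the forcing value, and the placeholder training data are illustrative assumptions; the differentiable solver is taken from the third-party `torchdiffeq` package.

```python
# Sketch of a NODE subgrid-scale closure for the slow Lorenz 96 variables.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # differentiable ODE solver

K, F = 8, 10.0  # number of slow variables and forcing (illustrative values)

def coarse_rhs(x):
    # Slow Lorenz 96 dynamics with the fine-scale coupling term omitted:
    # dx_k/dt = x_{k-1} (x_{k+1} - x_{k-2}) - x_k + F.
    return (torch.roll(x, 1, -1) * (torch.roll(x, -1, -1) - torch.roll(x, 2, -1))
            - x + F)

class Closure(nn.Module):
    # Known coarse dynamics plus a learned subgrid-scale correction.
    def __init__(self, k):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(k, 64), nn.Tanh(), nn.Linear(64, k))

    def forward(self, t, x):
        return coarse_rhs(x) + self.net(x)

model = Closure(K)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# x_ref: reference trajectory of the slow variables from a fully resolved
# two-scale simulation, shape (len(t_grid), K); a zero placeholder stands in
# for real training data here.
t_grid = torch.linspace(0.0, 1.0, 41)
x_ref = torch.zeros(len(t_grid), K)

for _ in range(100):
    opt.zero_grad()
    x_pred = odeint(model, x_ref[0], t_grid, method="rk4")
    loss = torch.mean((x_pred - x_ref) ** 2)  # trajectory-matching loss
    loss.backward()  # gradients flow through the ODE solve
    opt.step()
```

Because the closure is defined at the level of the continuous right-hand side rather than a fixed-step update, the trained term can be reused with different solvers and time steps, which is the advantage of the NODE formulation highlighted in the abstract.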