Paper Title

Neural Parametric Fokker-Planck Equations

Authors

Shu Liu, Wuchen Li, Hongyuan Zha, Haomin Zhou

Abstract

In this paper, we develop and analyze numerical methods for high dimensional Fokker-Planck equations by leveraging generative models from deep learning. Our starting point is a formulation of the Fokker-Planck equation as a system of ordinary differential equations (ODEs) on finite-dimensional parameter space with the parameters inherited from generative models such as normalizing flows. We call such ODEs neural parametric Fokker-Planck equations. The fact that the Fokker-Planck equation can be viewed as the $L^2$-Wasserstein gradient flow of Kullback-Leibler (KL) divergence allows us to derive the ODEs as the constrained $L^2$-Wasserstein gradient flow of KL divergence on the set of probability densities generated by neural networks. For numerical computation, we design a variational semi-implicit scheme for the time discretization of the proposed ODE. Such an algorithm is sampling-based, which can readily handle the Fokker-Planck equations in higher dimensional spaces. Moreover, we also establish bounds for the asymptotic convergence analysis of the neural parametric Fokker-Planck equation as well as the error analysis for both the continuous and discrete versions. Several numerical examples are provided to illustrate the performance of the proposed algorithms and analysis.
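To make the parametric gradient-flow idea concrete, here is a minimal numerical sketch, not the paper's actual algorithm: for a one-dimensional affine map $T_\theta(z) = \theta_0 + e^{\theta_1} z$ with a standard Gaussian base and a Gaussian target density, the constrained $L^2$-Wasserstein gradient flow of the KL divergence reduces to an explicit ODE on two parameters, which forward Euler can integrate. The affine map, Gaussian target, and all variable names are illustrative assumptions; the paper's method uses normalizing flows and a sampling-based variational semi-implicit scheme.

```python
import math

# Sketch (illustrative, not the paper's scheme): parametric Wasserstein
# gradient flow of KL divergence for a 1-D affine "flow"
#   T_theta(z) = theta0 + exp(theta1) * z,  z ~ N(0, 1),
# whose pushforward density is N(m, s^2) with m = theta0, s = exp(theta1).
# Target: N(mu, sigma^2), e.g. the stationary density of an OU-type
# Fokker-Planck equation.

mu, sigma = 1.5, 2.0          # target mean / std (assumed values)
theta0, theta1 = 0.0, 0.0     # initial parameters: m = 0, s = 1
h, n_steps = 0.2, 1000        # forward-Euler step size and step count

for _ in range(n_steps):
    m, s = theta0, math.exp(theta1)
    # Euclidean gradient of KL(N(m, s^2) || N(mu, sigma^2)) in theta:
    g0 = (m - mu) / sigma**2          # d KL / d theta0
    g1 = s**2 / sigma**2 - 1.0        # d KL / d theta1
    # The Wasserstein metric tensor G(theta) = E_z[(d_theta T)^T d_theta T]
    # is diag(1, s^2) here, so the parametric ODE
    #   theta' = -G(theta)^{-1} grad_theta KL
    # discretized by forward Euler becomes:
    theta0 -= h * g0
    theta1 -= h * g1 / s**2

print(theta0, math.exp(theta1))  # both parameters approach (mu, sigma)
```

In this toy setting the flow converges to the target's mean and standard deviation; the metric tensor $G(\theta)$ plays the role of the pullback of the $L^2$-Wasserstein metric onto the parameter space described in the abstract.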
