Paper Title
Convergence analysis of unsupervised Legendre-Galerkin neural networks for linear second-order elliptic PDEs
Authors
Abstract
In this paper, we perform a convergence analysis of unsupervised Legendre--Galerkin neural networks (ULGNet), a deep-learning-based numerical method for solving partial differential equations (PDEs). Unlike existing deep-learning-based numerical methods for PDEs, ULGNet expresses the solution as a spectral expansion with respect to the Legendre basis and predicts the coefficients with deep neural networks by solving a variational residual minimization problem. Since the corresponding loss function is equivalent to the residual induced by the linear algebraic system depending on the choice of basis functions, we prove that the minimizer of the discrete loss function converges to the weak solution of the PDEs. Numerical evidence is also provided to support the theoretical results. Key technical tools include a variant of the universal approximation theorem for bounded neural networks, the analysis of the stiffness and mass matrices, and the uniform law of large numbers in terms of Rademacher complexity.
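To illustrate the representation the abstract describes, the following is a minimal, hypothetical sketch of evaluating a truncated Legendre spectral expansion u_N(x) = Σ_k c_k L_k(x) on [-1, 1]. In the ULGNet setting the coefficients c_k would be predicted by a deep neural network; here a fixed toy coefficient vector stands in for the network output. All names are illustrative and not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre


def legendre_expansion(coeffs, x):
    """Evaluate u_N(x) = sum_k coeffs[k] * L_k(x), where L_k is the
    k-th Legendre polynomial on [-1, 1]."""
    return legendre.legval(x, coeffs)


# Toy stand-in for network-predicted coefficients (illustrative only).
coeffs = np.array([0.0, 0.5, 0.25])

# Sample points in [-1, 1] at which to evaluate the expansion.
x = np.linspace(-1.0, 1.0, 5)
u = legendre_expansion(coeffs, x)
print(u.shape)  # one value of u_N per sample point
```

Since every Legendre polynomial satisfies L_k(1) = 1, the expansion at x = 1 is simply the sum of the coefficients, which gives a quick sanity check on the evaluation.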