Paper Title
Separable PINN: Mitigating the Curse of Dimensionality in Physics-Informed Neural Networks
Paper Authors
Paper Abstract
Physics-informed neural networks (PINNs) have emerged as new data-driven PDE solvers for both forward and inverse problems. While promising, the expensive computational cost of obtaining solutions often restricts their broader applicability. We demonstrate that the computation in automatic differentiation (AD) can be significantly reduced by leveraging forward-mode AD when training PINNs. However, a naive application of forward-mode AD to conventional PINNs results in higher computation, negating its practical benefit. We therefore propose a network architecture, called separable PINN (SPINN), that facilitates forward-mode AD for more efficient computation. SPINN operates on a per-axis basis instead of the point-wise processing of conventional PINNs, decreasing the number of network forward passes. Moreover, while the computation and memory costs of standard PINNs grow exponentially with grid resolution, those of our model are remarkably less susceptible, mitigating the curse of dimensionality. We demonstrate the effectiveness of our model on various PDE systems, significantly reducing training run-time while achieving comparable accuracy. Project page: https://jwcho5576.github.io/spinn/
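The per-axis design pairs naturally with forward-mode AD. Below is a minimal JAX sketch of the idea for a 2D problem, not the authors' reference implementation: each axis gets its own small MLP mapping a 1-D coordinate to r features, the solution on an N x N grid is their rank-r outer-product combination, and axis-wise derivatives come from forward-mode AD via jax.jvp. The function names (feat_net, du_dx_grid), layer sizes, and rank r are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

r = 16  # rank of the separated representation (assumed hyperparameter)

def init_mlp(key, sizes):
    """Initialize a small MLP as a list of (W, b) pairs."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) * jnp.sqrt(2.0 / din),
                       jnp.zeros(dout)))
    return params

def feat_net(params, t):
    """Map a scalar coordinate t to r features with a tanh MLP."""
    h = jnp.array([t])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b  # shape (r,)

kx, ky = jax.random.split(jax.random.PRNGKey(0))
params_x = init_mlp(kx, [1, 32, 32, r])  # network for the x-axis
params_y = init_mlp(ky, [1, 32, 32, r])  # network for the y-axis

def solution_grid(params_x, params_y, xs, ys):
    """u(x_i, y_j) = sum_k f_k(x_i) g_k(y_j): one 1-D pass per axis point."""
    fx = jax.vmap(lambda x: feat_net(params_x, x))(xs)  # (Nx, r)
    gy = jax.vmap(lambda y: feat_net(params_y, y))(ys)  # (Ny, r)
    return fx @ gy.T  # (Nx, Ny) grid of solution values

def dfeat_dt(params, t):
    """Forward-mode AD: features and their derivative w.r.t. the 1-D input."""
    return jax.jvp(lambda s: feat_net(params, s), (t,), (jnp.ones_like(t),))

def du_dx_grid(params_x, params_y, xs, ys):
    """du/dx on the whole grid: differentiate only the x-axis network."""
    _, dfx = jax.vmap(lambda x: dfeat_dt(params_x, x))(xs)  # (Nx, r)
    gy = jax.vmap(lambda y: feat_net(params_y, y))(ys)      # (Ny, r)
    return dfx @ gy.T

xs = jnp.linspace(0.0, 1.0, 64)
ys = jnp.linspace(0.0, 1.0, 64)
u = solution_grid(params_x, params_y, xs, ys)   # 64 x 64 values from 2 x 64 1-D passes
ux = du_dx_grid(params_x, params_y, xs, ys)
print(u.shape, ux.shape)  # (64, 64) (64, 64)
```

Under these assumptions, a point-wise PINN would need 4,096 forward passes for the same 64 x 64 grid, while the separable form needs only two batches of 64 one-dimensional passes, which illustrates why the cost scales so mildly with grid resolution.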