Title

Stable rank-adaptive Dynamically Orthogonal Runge-Kutta schemes

Authors

Aaron Charous, Pierre F. J. Lermusiaux

Abstract

We develop two new sets of stable, rank-adaptive Dynamically Orthogonal Runge-Kutta (DORK) schemes that capture the high-order curvature of the nonlinear low-rank manifold. The DORK schemes asymptotically approximate the truncated singular value decomposition at a greatly reduced cost while preserving mode continuity using newly derived retractions. We show that arbitrarily high-order optimal perturbative retractions can be obtained, and we prove that these new retractions are stable. In addition, we demonstrate that repeatedly applying retractions yields a gradient-descent algorithm on the low-rank manifold that converges superlinearly when approximating a low-rank matrix. When approximating a higher-rank matrix, iterations converge linearly to the best low-rank approximation. We then develop a rank-adaptive retraction that is robust to overapproximation. Building off of these retractions, we derive two rank-adaptive integration schemes that dynamically update the subspace upon which the system dynamics are projected within each time step: the stable, optimal Dynamically Orthogonal Runge-Kutta (so-DORK) and gradient-descent Dynamically Orthogonal Runge-Kutta (gd-DORK) schemes. These integration schemes are numerically evaluated and compared on an ill-conditioned matrix differential equation, an advection-diffusion partial differential equation, and a nonlinear, stochastic reaction-diffusion partial differential equation. Results show a reduced error accumulation rate with the new stable, optimal and gradient-descent integrators. In addition, we find that rank adaptation allows for highly accurate solutions while preserving computational efficiency.
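The abstract states that repeatedly applying retractions yields an iteration on the low-rank manifold that converges to the best low-rank approximation, i.e. the truncated SVD. The sketch below is not the paper's DORK retraction; it is a generic alternating subspace iteration with QR re-orthonormalization (a simple retraction back to orthonormal bases), shown only to illustrate the kind of fixed point such iterations target. The matrix sizes, spectrum, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 40, 5

# Build a test matrix with a known, geometrically decaying spectrum so the
# iteration's linear convergence rate (sigma_{r+1}/sigma_r) is well separated.
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 * 0.5 ** np.arange(n)
A = (U0 * s) @ V0.T

# Alternating subspace iteration: each half-step solves for the best
# row/column subspace given the other, then retracts onto orthonormal
# bases via QR.  (Illustrative stand-in, not the paper's retraction.)
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
for _ in range(100):
    V, _ = np.linalg.qr(A.T @ U)   # best rank-r row-space basis given U
    U, _ = np.linalg.qr(A @ V)     # best rank-r column-space basis given V

A_r = U @ (U.T @ A @ V) @ V.T      # rank-r approximation from the iteration

# Compare with the truncated SVD, which by the Eckart-Young theorem is
# the optimal rank-r approximation in the Frobenius norm.
Us, sv, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = (Us[:, :r] * sv[:r]) @ Vt[:r]
err = np.linalg.norm(A_r - A_svd)
```

With a clear spectral gap the iteration reaches the truncated-SVD fixed point to machine precision; the paper's contribution is achieving such convergence with high-order retractions at greatly reduced cost while preserving mode continuity.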
