Paper Title

On the Discrepancy Principle for Stochastic Gradient Descent

Paper Authors

Tim Jahn, Bangti Jin

Paper Abstract

Stochastic gradient descent (SGD) is a promising numerical method for solving large-scale inverse problems. However, its theoretical properties remain largely underexplored through the lens of classical regularization theory. In this note, we study the classical discrepancy principle, one of the most popular a posteriori choice rules, as the stopping criterion for SGD, and prove the finite-iteration termination property and the convergence of the iterate in probability as the noise level tends to zero. The theoretical results are complemented with extensive numerical experiments.
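To make the setting concrete, here is a minimal Python sketch (not the authors' exact scheme) of row-wise SGD for a linear least-squares problem, stopped by the discrepancy principle: iterate until the first k with ||A x_k - y_delta|| <= tau * delta, where delta is the noise level and tau > 1 a fudge parameter. The function name, the step-size schedule eta0 / sqrt(k), and the default parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

def sgd_with_discrepancy_stop(A, y_delta, delta, tau=1.2, eta0=1.0,
                              max_iter=100_000, seed=0):
    # Minimize (1/2)||A x - y_delta||^2 by sampling one row per step; stop at
    # the first iterate satisfying ||A x_k - y_delta|| <= tau * delta.
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x = np.zeros(m)
    for k in range(1, max_iter + 1):
        i = rng.integers(n)                      # draw one data row uniformly at random
        r_i = A[i] @ x - y_delta[i]              # residual of the sampled equation
        x -= (eta0 / np.sqrt(k)) * r_i * A[i]    # stochastic gradient step, decaying step size
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            return x, k                          # discrepancy principle met: stop early
    return x, max_iter                           # budget exhausted without meeting the criterion

Note that checking the full residual ||A x_k - y_delta|| at every step costs a matrix-vector product with A, which would defeat the purpose of SGD on truly large-scale problems; in a practical variant one would evaluate the stopping criterion only periodically (e.g., once per epoch) or estimate the residual stochastically.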
