Paper title
Convergence of a class of nonmonotone descent methods for KL optimization problems
Paper authors
Paper abstract
This paper is concerned with a class of nonmonotone descent methods for minimizing a proper lower semicontinuous KL function $Φ$, which generate sequences satisfying a nonmonotone decrease condition and a relative error tolerance. Under suitable assumptions, we prove that the whole sequence converges to a limiting critical point of $Φ$ and, when $Φ$ is a KL function of exponent $θ\in[0,1)$, the convergence admits a linear rate if $θ\in[0,1/2]$ and a sublinear rate associated with $θ$ if $θ\in(1/2,1)$. The required assumptions are shown to be necessary and sufficient if $Φ$ is also weakly convex on a neighborhood of the stationary point set. Our results resolve the convergence problem for the iterate sequences generated by a class of nonmonotone line search algorithms for nonconvex and nonsmooth problems, and also extend the convergence results of monotone descent methods for KL optimization problems. As applications, we establish the convergence of the iterate sequences for the nonmonotone line search proximal gradient method with extrapolation and for the nonmonotone line search proximal alternating minimization method with extrapolation. Numerical experiments are conducted for zero-norm and column $\ell_{2,0}$-norm regularized problems to validate their efficiency.
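
For orientation, a standard (not necessarily the paper's exact) formulation of the three ingredients reads as follows. With constants $a, b > 0$ and a memory length $m \ge 0$ (all illustrative), a max-type nonmonotone decrease condition and a relative error tolerance on the iterates $\{x^k\}$ are

$$Φ(x^{k+1}) + a\,\|x^{k+1} - x^k\|^2 \le \max_{\max(k-m,\,0) \le j \le k} Φ(x^j), \qquad \mathrm{dist}\big(0, \partial Φ(x^{k+1})\big) \le b\,\|x^{k+1} - x^k\|,$$

and $Φ$ has the KL property of exponent $θ \in [0,1)$ at $\bar{x}$ if there exist $c, ε, η > 0$ such that

$$\mathrm{dist}\big(0, \partial Φ(x)\big) \ge c\,\big(Φ(x) - Φ(\bar{x})\big)^{θ} \quad \text{whenever } \|x - \bar{x}\| \le ε \ \text{ and } \ Φ(\bar{x}) < Φ(x) < Φ(\bar{x}) + η.$$

Under conditions of this type, the classical KL argument yields R-linear convergence of $\|x^k - x^*\|$ for $θ \in [0,1/2]$ and a sublinear rate $O\big(k^{-(1-θ)/(2θ-1)}\big)$ for $θ \in (1/2,1)$.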
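
To make the scheme concrete, the following is a minimal Python sketch (not the paper's exact algorithm) of a nonmonotone line search proximal gradient method with extrapolation, applied to the zero-norm regularized least squares model $\min_x \frac{1}{2}\|Ax - b\|^2 + λ\|x\|_0$. The function and parameter names (nonmonotone_pg_extrapolation, beta, m, sigma, t0) are hypothetical; the hard-thresholding prox of the zero norm and the max-type nonmonotone rule are standard ingredients.

import numpy as np

def prox_l0(v, lam, t):
    # Closed-form prox of w -> t*lam*||w||_0: hard-thresholding at sqrt(2*t*lam).
    w = v.copy()
    w[np.abs(v) <= np.sqrt(2.0 * t * lam)] = 0.0
    return w

def nonmonotone_pg_extrapolation(A, b, lam, x0, beta=0.6, m=5, sigma=1e-4,
                                 t0=1.0, shrink=0.5, max_iter=500, tol=1e-8):
    # Illustrative sketch: nonmonotone line search proximal gradient method with
    # extrapolation for min_x 0.5*||A x - b||^2 + lam*||x||_0.
    F = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2 + lam * np.count_nonzero(z)
    x_prev = x = np.asarray(x0, dtype=float)
    hist = [F(x)]                                  # recent objective values (max-rule memory)
    for _ in range(max_iter):
        ref = max(hist[-m:])                       # nonmonotone reference value
        x_new = x
        # Try the extrapolated point first, then fall back to a plain prox-gradient step.
        for y in (x + beta * (x - x_prev), x):
            g = A.T @ (A @ y - b)                  # gradient of the smooth part at y
            t, ok = t0, False
            while t > 1e-12:                       # backtracking line search
                cand = prox_l0(y - t * g, lam, t)
                if F(cand) <= ref - 0.5 * sigma / t * np.linalg.norm(cand - y) ** 2:
                    x_new, ok = cand, True
                    break
                t *= shrink
            if ok:
                break
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x_prev, x = x, x_new
        hist.append(F(x))
    return x

A small synthetic run, for example, is rng = np.random.default_rng(0); A = rng.standard_normal((60, 120)); x_true = np.zeros(120); x_true[:6] = 3.0; b = A @ x_true; x_hat = nonmonotone_pg_extrapolation(A, b, lam=0.5, x0=np.zeros(120)). The fallback to a step from $x^k$ itself guarantees the backtracking loop terminates, since the max-rule reference value always dominates $Φ(x^k)$, while accepting steps from the extrapolated point allows the objective to increase temporarily.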