Paper Title
New Proximal Newton-Type Methods for Convex Optimization
Paper Authors
Abstract
In this paper, we propose new proximal Newton-type methods for convex optimization problems in composite form. Applications include model predictive control (MPC) and embedded MPC. Our new methods are computationally attractive because they do not require evaluating the Hessian at every iteration while still maintaining a fast convergence rate. More specifically, we prove that global convergence is guaranteed and that superlinear convergence is achieved in the vicinity of an optimal solution. We also develop several practical variants by incorporating quasi-Newton and inexact subproblem-solving schemes, and we provide theoretical guarantees for these variants under certain conditions. Experimental results on real-world datasets demonstrate the effectiveness and efficiency of the new methods.
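For context, the sketch below illustrates the generic proximal Newton framework that the abstract builds on: minimize F(x) = f(x) + g(x) with f smooth and g convex but possibly nonsmooth, where each outer step solves a regularized quadratic model of f. This is only a minimal illustration under stated assumptions (g is an L1 penalty, the subproblem is solved inexactly by proximal gradient steps, and no line search or quasi-Newton update is shown); it is not the authors' specific methods, and the helper names (grad_f, hess_f, lam) are hypothetical.

```python
# Minimal generic proximal Newton sketch for composite problems
#   minimize  F(x) = f(x) + g(x),  f smooth convex, g(x) = lam * ||x||_1.
# Illustrative only; NOT the paper's specific methods or guarantees.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_newton(grad_f, hess_f, lam, x0, outer_iters=30, inner_iters=50):
    """Generic proximal Newton iteration.

    Each outer step approximately solves the subproblem
        min_y  grad_f(x)^T (y - x) + 0.5 (y - x)^T H (y - x) + lam * ||y||_1
    by proximal gradient steps on the quadratic model (an inexact solve).
    """
    x = x0.copy()
    for _ in range(outer_iters):
        g = grad_f(x)
        H = hess_f(x)                 # in practice, often a quasi-Newton approximation
        L = np.linalg.norm(H, 2)      # spectral norm bounds the model's curvature
        y = x.copy()
        for _ in range(inner_iters):  # inexact subproblem solve
            model_grad = g + H @ (y - x)
            y = soft_threshold(y - model_grad / L, lam / L)
        x = y                         # a line search would normally safeguard this update
    return x

# Tiny usage example: l1-regularized least squares, f(x) = 0.5 * ||A x - b||^2.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
    grad_f = lambda x: A.T @ (A @ x - b)
    hess_f = lambda x: A.T @ A
    print(prox_newton(grad_f, hess_f, lam, np.zeros(5)))
```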