Title
High Probability Convergence for Accelerated Stochastic Mirror Descent
Authors
Abstract
In this work, we describe a generic approach for showing convergence with high probability in stochastic convex optimization. In previous works, convergence was either shown only in expectation, or the bound depended on the diameter of the domain. Instead, we show high probability convergence with bounds that depend on the initial distance to the optimal solution rather than the domain diameter. The algorithms use step sizes analogous to the standard settings and are universal to Lipschitz functions, smooth functions, and their linear combinations.
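To make the setting concrete, here is a minimal sketch of (non-accelerated) stochastic mirror descent with the entropy mirror map, i.e., exponentiated gradient on the probability simplex. This is a generic illustration, not the accelerated method from the paper: the quadratic objective, the Gaussian noise model, and the 1/sqrt(t) step-size schedule are all assumptions chosen for the demo.

```python
import numpy as np

# Stochastic mirror descent with the entropy mirror map on the simplex.
# Objective (assumed for the demo): f(x) = 0.5 * ||x - p||^2, whose
# minimizer over the simplex is p itself since p lies on the simplex.
rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])          # optimum, chosen on the simplex
x = np.ones(3) / 3                     # initial point: uniform distribution

T = 5000
avg = np.zeros(3)
for t in range(1, T + 1):
    grad = x - p                               # exact gradient of f
    g = grad + 0.1 * rng.standard_normal(3)    # stochastic gradient oracle
    eta = 1.0 / np.sqrt(t)                     # standard 1/sqrt(t) step size
    x = x * np.exp(-eta * g)                   # mirror step in the dual space
    x /= x.sum()                               # normalize back onto the simplex
    avg += x
avg /= T                                       # averaged iterate

print(avg)  # should be close to p in this small demo
```

The mirror step replaces the Euclidean projection of SGD with a multiplicative update followed by normalization, which keeps every iterate on the simplex without an explicit projection.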