Paper Title

A Control Theoretic Framework for Adaptive Gradient Optimizers in Machine Learning

Authors

Kushal Chakrabarti and Nikhil Chopra

Abstract

Adaptive gradient methods have become popular for optimizing deep neural networks; recent examples include AdaGrad and Adam. Although Adam usually converges faster, variants of Adam, such as the AdaBelief algorithm, have been proposed to address Adam's poor generalization ability relative to the classical stochastic gradient method. This paper develops a generic framework for adaptive gradient methods that solve non-convex optimization problems. We first model adaptive gradient methods in a state-space framework, which allows us to present simpler convergence proofs for adaptive optimizers such as AdaGrad, Adam, and AdaBelief. We then utilize the transfer function paradigm from classical control theory to propose a new variant of Adam, coined AdamSSM, which adds an appropriate pole-zero pair to the transfer function from the squared gradients to the second-moment estimate. We prove the convergence of the proposed AdamSSM algorithm. Applications to benchmark machine learning tasks, image classification using CNN architectures and language modeling using an LSTM architecture, demonstrate that the AdamSSM algorithm closes the gap between generalization accuracy and fast convergence better than recent adaptive gradient methods.
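To make the filtering interpretation concrete, the sketch below (a minimal illustration based only on the description in the abstract, not the authors' implementation) treats Adam's second-moment recursion v_t = beta2 * v_{t-1} + (1 - beta2) * g_t^2 as a first-order discrete-time filter from the squared gradient to the estimate v_t, with transfer function (1 - beta2) z / (z - beta2), and then cascades one extra first-order section to add a pole-zero pair, in the spirit of what the abstract says AdamSSM does. The extra pole and zero locations (p_extra, z_extra) and the function names are illustrative assumptions, not the values or code used in the paper.

```python
# Minimal sketch of the second-moment estimate viewed as a discrete-time filter.
# Not the authors' code; pole/zero placements below are illustrative placeholders.

import numpy as np


def adam_second_moment(sq_grads, beta2=0.999):
    """Adam's filter: v_t = beta2 * v_{t-1} + (1 - beta2) * u_t,
    i.e. transfer function (1 - beta2) z / (z - beta2) from u = g^2 to v."""
    v, out = 0.0, []
    for u in sq_grads:
        v = beta2 * v + (1.0 - beta2) * u
        out.append(v)
    return np.asarray(out)


def second_moment_with_pole_zero(sq_grads, beta2=0.999, z_extra=0.8, p_extra=0.9):
    """Hypothetical AdamSSM-style filter: Adam's section cascaded with an extra
    first-order section c * (z - z_extra) / (z - p_extra). The gain
    c = (1 - p_extra) / (1 - z_extra) keeps the overall DC gain at 1, so the
    output still tracks the running average of the squared gradients."""
    c = (1.0 - p_extra) / (1.0 - z_extra)
    v1, v1_prev, y = 0.0, 0.0, 0.0
    out = []
    for u in sq_grads:
        v1_prev, v1 = v1, beta2 * v1 + (1.0 - beta2) * u   # Adam's pole at beta2
        y = p_extra * y + c * (v1 - z_extra * v1_prev)      # added pole-zero pair
        out.append(y)
    return np.asarray(out)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g2 = rng.standard_normal(1000) ** 2  # synthetic squared gradients
    print(adam_second_moment(g2)[-1], second_moment_with_pole_zero(g2)[-1])
```

Because the extra section is scaled to unit DC gain, the modified estimate converges to the same steady-state level as Adam's; the added pole-zero pair only reshapes the transient response of the second-moment estimate.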
