Paper Title

Amos: An Adam-style Optimizer with Adaptive Weight Decay towards Model-Oriented Scale

Paper Authors

Ran Tian, Ankur P. Parikh

Paper Abstract

We present Amos, a stochastic gradient-based optimizer designed for training deep neural networks. It can be viewed as an Adam optimizer with theoretically supported, adaptive learning-rate decay and weight decay. A key insight behind Amos is that it leverages model-specific information to determine the initial learning-rate and decaying schedules. When used for pre-training BERT variants and T5, Amos consistently converges faster than the state-of-the-art settings of AdamW, achieving better validation loss within <=70% training steps and time, while requiring <=51% memory for slot variables. Our code is open-sourced at: https://github.com/google-research/jestimator
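
The abstract describes Amos as an Adam-style update whose learning rate and weight decay both decay adaptively, anchored by model-specific information about parameter scale rather than a hand-tuned external schedule. The sketch below is a minimal, hypothetical illustration of that idea in plain NumPy; the names `eta`, `xi`, and `c` are placeholder knobs and the formulas are simplified, not the paper's actual update rules (see https://github.com/google-research/jestimator for the real implementation).

```python
import numpy as np

def amos_like_update(param, grad, v, step, *,
                     eta=1.0, xi=1e-3, c=0.25, eps=1e-18):
    """One illustrative Adam-style step with decaying learning rate and a
    coupled weight-decay term, loosely following the behaviour described in
    the Amos abstract (NOT the exact update rules from the paper).

    param : current parameter array
    grad  : gradient at this step
    v     : running second-moment estimate (as in Adam)
    step  : 1-based step counter
    eta   : model-oriented scale of this parameter (hypothetical knob; in
            Amos this is derived from model-specific information)
    xi    : base learning-rate factor (hypothetical)
    c     : controls how fast the step size decays with the step count
    """
    # Second-moment accumulator, as in Adam/RMSProp.
    beta2 = 0.999
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    # Adam-style preconditioned gradient direction (with bias correction).
    direction = grad / (np.sqrt(v / (1.0 - beta2 ** step)) + eps)
    # Learning rate starts near xi * eta and decays with the step count,
    # instead of following a separately tuned schedule.
    lr = xi * eta / (1.0 + c * xi * step)
    # Weight decay is tied to the same decaying factor, so the
    # regularization strength shrinks together with the learning rate.
    wd = lr / eta
    new_param = param - lr * direction - wd * param
    return new_param, v

# Toy usage: minimize f(x) = 0.5 * ||x||^2 from a random start.
x = np.random.randn(4)
v = np.zeros_like(x)
for t in range(1, 201):
    x, v = amos_like_update(x, x, v, t)  # gradient of f is x itself
print(x)  # should be close to zero
```

The point of the sketch is the coupling: both the step size and the weight decay shrink as training proceeds, and both are anchored to a per-parameter scale, which is the mechanism the abstract credits for faster convergence and the reduced need for tuned schedules.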
