Paper Title

A survey of deep learning optimizers -- first and second order methods

Paper Authors

Kashyap, Rohan

Paper Abstract

Deep Learning optimization involves minimizing a high-dimensional loss function in the weight space which is often perceived as difficult due to its inherent difficulties such as saddle points, local minima, ill-conditioning of the Hessian and limited compute resources. In this paper, we provide a comprehensive review of $14$ standard optimization methods successfully used in deep learning research and a theoretical assessment of the difficulties in numerical optimization from the optimization literature.
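The abstract contrasts first- and second-order methods and points to ill-conditioning of the Hessian as a core difficulty. As a rough illustration (not taken from the paper), the minimal NumPy sketch below compares a plain gradient-descent update with a Newton update on a toy quadratic whose Hessian is ill-conditioned; the matrix A, vector b, and learning rate lr are arbitrary illustrative choices.

```python
import numpy as np

# Toy quadratic loss f(w) = 0.5 * w^T A w - b^T w, with gradient A w - b
# and constant Hessian A. The condition number of A controls how
# "ill-conditioned" the problem is.
A = np.array([[10.0, 0.0],
              [0.0,  1.0]])   # ill-conditioned Hessian (condition number 10)
b = np.array([1.0, 1.0])

def grad(w):
    return A @ w - b

w_gd = np.zeros(2)       # iterate for first-order (gradient descent)
w_newton = np.zeros(2)   # iterate for second-order (Newton's method)
lr = 0.09                # must stay below 2 / lambda_max(A) for gradient descent

for step in range(50):
    # First-order update: step along the negative gradient.
    w_gd = w_gd - lr * grad(w_gd)
    # Second-order update: rescale the gradient by the inverse Hessian,
    # which undoes the ill-conditioning.
    w_newton = w_newton - np.linalg.solve(A, grad(w_newton))

print("gradient descent:", w_gd)      # approaches the minimizer [0.1, 1.0]
print("Newton's method:", w_newton)   # reaches [0.1, 1.0] after one step
```

On this quadratic the Newton step jumps directly to the minimizer A^{-1} b, while gradient descent's progress along the low-curvature direction is slowed in proportion to the condition number of A.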
