Paper title
Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
Paper authors
Paper abstract
We develop fast algorithms and robust software for convex optimization of two-layer neural networks with ReLU activation functions. Our work leverages a convex reformulation of the standard weight-decay penalized training problem as a set of group-$\ell_1$-regularized data-local models, where locality is enforced by polyhedral cone constraints. In the special case of zero-regularization, we show that this problem is exactly equivalent to unconstrained optimization of a convex "gated ReLU" network with non-singular gates. For problems with non-zero regularization, we show that convex gated ReLU models obtain data-dependent approximation bounds for the ReLU training problem. To optimize the convex reformulations, we develop an accelerated proximal gradient method and a practical augmented Lagrangian solver. We show that these approaches are faster than standard training heuristics for the non-convex problem, such as SGD, and outperform commercial interior-point solvers. Experimentally, we verify our theoretical results, explore the group-$\ell_1$ regularization path, and scale convex optimization for neural networks to image classification on MNIST and CIFAR-10.
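For concreteness, the following is a minimal sketch of the shape such a reformulation typically takes, following the convex-reformulation literature this work builds on; the symbols ($X$ for the data matrix, $y$ for the targets, $D_i$ for diagonal 0/1 matrices fixing ReLU activation patterns, $\lambda$ for the regularization strength) are assumed notation, not taken from the abstract. Each polyhedral cone constraint keeps the $i$-th local model consistent with its activation pattern on the training data:

$$\min_{\{v_i, w_i\}} \ \mathcal{L}\Big(\sum_{i=1}^{p} D_i X (v_i - w_i),\ y\Big) + \lambda \sum_{i=1}^{p} \big(\|v_i\|_2 + \|w_i\|_2\big) \quad \text{s.t.} \quad (2D_i - I) X v_i \ge 0,\ (2D_i - I) X w_i \ge 0.$$

Dropping the cone constraints (and the $v_i/w_i$ split) gives the unconstrained gated ReLU model referenced above:

$$\min_{\{u_i\}} \ \mathcal{L}\Big(\sum_{i=1}^{p} D_i X u_i,\ y\Big) + \lambda \sum_{i=1}^{p} \|u_i\|_2.$$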
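As an illustration of the accelerated proximal gradient approach mentioned in the abstract, here is a minimal NumPy sketch (assumed, not the authors' released solver) of a FISTA-style loop for the unconstrained gated ReLU objective with squared loss; the function names and the DX tensor layout are hypothetical.

# Minimal sketch (assumption, not the paper's code) of accelerated proximal
# gradient for:  min_U  0.5 * || sum_i D_i X u_i - y ||_2^2 + lam * sum_i ||u_i||_2
import numpy as np

def group_soft_threshold(U, tau):
    # Proximal operator of tau * sum_i ||u_i||_2 (block soft-thresholding of each row).
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * U

def fista_gated_relu(DX, y, lam, step, iters=500):
    # DX[i] = D_i @ X, shape (p, n, d): the data masked by each fixed gate pattern.
    # step should be <= 1 / L, with L the Lipschitz constant of the smooth part.
    p, n, d = DX.shape
    U = np.zeros((p, d))   # one weight vector per gate pattern
    Z = U.copy()           # extrapolated (momentum) point
    t = 1.0
    for _ in range(iters):
        resid = np.einsum('pnd,pd->n', DX, Z) - y         # residual of the summed model
        grad = np.einsum('pnd,n->pd', DX, resid)          # gradient w.r.t. each u_i
        U_next = group_soft_threshold(Z - step * grad, step * lam)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        Z = U_next + ((t - 1.0) / t_next) * (U_next - U)  # Nesterov extrapolation
        U, t = U_next, t_next
    return U

The cone-constrained reformulation would additionally require enforcing the polyhedral constraints, for example via the augmented Lagrangian approach the abstract mentions; that machinery is omitted from this sketch.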