Paper Title

A stochastic three-block splitting algorithm and its application to quantized deep neural networks

Authors

Fengmiao Bian, Ren Liu, Xiaoqun Zhang

Abstract

Deep neural networks (DNNs) have made great progress in various fields. In particular, quantized neural networks are a promising technique for making DNNs compatible with resource-limited devices by saving memory and computation. In this paper, we mainly consider a non-convex minimization model with three blocks for training quantized DNNs and propose a new stochastic three-block alternating minimization (STAM) algorithm to solve it. We develop a convergence theory for the STAM algorithm and obtain an $\epsilon$-stationary point with the optimal convergence rate $\mathcal{O}(\epsilon^{-4})$. Furthermore, we apply the STAM algorithm to train DNNs with relaxed binary weights. Experiments are carried out on three different network architectures, namely VGG-11, VGG-16, and ResNet-18, each trained on two datasets, CIFAR-10 and CIFAR-100. We compare the STAM algorithm with several classical, efficient algorithms for training quantized neural networks. The test accuracy demonstrates the effectiveness of the STAM algorithm for training DNNs with relaxed binary quantization.
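The abstract does not spell out the three-block formulation, so the minimal sketch below only illustrates the general flavor of stochastic three-block alternating minimization on a toy least-squares problem, with the third block clipped to [-1, 1] as a stand-in for relaxed binary weights. The objective, block structure, step sizes, and projection are illustrative assumptions and do not reproduce the paper's STAM algorithm or its quantization model.

# Illustrative sketch only: generic stochastic three-block alternating
# minimization on a toy least-squares objective. All choices here
# (objective, step size, batch size, projection of the third block onto
# [-1, 1]) are assumptions for exposition, not the paper's STAM method.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))
C = rng.standard_normal((n, d))
b = rng.standard_normal(n)

x = np.zeros(d)  # block 1: unconstrained
y = np.zeros(d)  # block 2: unconstrained
z = np.zeros(d)  # block 3: "relaxed binary", kept inside [-1, 1]

def residual(idx):
    # Mini-batch residual; uses the freshest block values, so the
    # updates below are Gauss-Seidel (alternating) rather than joint.
    return A[idx] @ x + B[idx] @ y + C[idx] @ z - b[idx]

step, batch = 1e-2, 32
for it in range(2000):
    idx = rng.choice(n, size=batch, replace=False)
    # one stochastic gradient step per block, in turn
    x -= step * A[idx].T @ residual(idx) / batch
    y -= step * B[idx].T @ residual(idx) / batch
    z -= step * C[idx].T @ residual(idx) / batch
    # relaxed binary constraint: project the third block onto [-1, 1]
    z = np.clip(z, -1.0, 1.0)

full_res = A @ x + B @ y + C @ z - b
print("final objective:", 0.5 * np.dot(full_res, full_res) / n)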
