Paper Title

Adaptive Low-Rank Factorization to regularize shallow and deep neural networks

Authors

Bejani, Mohammad Mahdi, Ghatee, Mehdi

Abstract

Overfitting is one of the most troublesome problems in deep learning. To address this challenge, many approaches have been proposed to regularize learning models. They add hyper-parameters to the model to improve generalization; however, determining these hyper-parameters is a hard task, and a bad setting can cause the training process to diverge. In addition, most regularization schemes slow down learning. Recently, Tai et al. [1] proposed low-rank tensor decomposition as a constrained filter to remove redundancy from the convolution kernels of CNNs. From a different viewpoint, we use Low-Rank matrix Factorization (LRF) to drop some parameters of the learning model during training. However, like [1], this scheme may decrease the training accuracy when it tries to reduce the number of operations. Instead, we apply this regularization scheme adaptively, only when the complexity of a layer is high. The complexity of any layer can be evaluated by the nonlinear condition numbers of its learning system. The resulting method, entitled "AdaptiveLRF", neither decreases the training speed nor harms the accuracy of the layer. The behavior of AdaptiveLRF is visualized on a noisy dataset. Then, improvements are presented on some small-size and large-scale datasets. The advantage of AdaptiveLRF over well-known dropout regularizers on shallow networks is demonstrated. AdaptiveLRF is also competitive with dropout and adaptive dropout on various deep networks, including MobileNet V2, ResNet V2, DenseNet, and Xception. The best results of AdaptiveLRF on the SVHN and CIFAR-10 datasets are 98% and 94.1% F-measure, and 97.9% and 94% accuracy, respectively. Finally, we describe the usage of an LRF-based loss function to improve the quality of the learning model.
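The abstract does not spell out the factorization step, but the idea can be illustrated with a minimal sketch: estimate a layer's complexity from the singular values of its weight matrix and, only when the condition number exceeds a threshold, replace the weights with a truncated low-rank reconstruction. The function name `adaptive_lrf_step` and the hyper-parameters `cond_threshold` and `rank_ratio` below are illustrative assumptions, not taken from the paper (which uses a nonlinear condition number of the layer's learning system rather than the plain linear one used here).

```python
import numpy as np

def adaptive_lrf_step(W, cond_threshold=1e3, rank_ratio=0.5):
    """Hypothetical sketch of adaptive low-rank factorization of one layer.

    W              : 2-D weight matrix of a dense layer (conv kernels would
                     first be unfolded into a matrix).
    cond_threshold : complexity threshold above which the layer is factorized
                     (illustrative value, not from the paper).
    rank_ratio     : fraction of singular values kept when factorizing.
    """
    # Singular value decomposition of the layer's weights.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)

    # Linear condition number as a stand-in for the layer-complexity measure.
    cond = s[0] / max(s[-1], np.finfo(W.dtype).eps)
    if cond <= cond_threshold:
        return W  # layer is not "complex"; leave its parameters untouched

    # Keep only the leading singular directions: W ≈ U_k diag(s_k) V_k^T.
    k = max(1, int(rank_ratio * len(s)))
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Toy usage: regularize a random 256x128 dense-layer weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
W_reg = adaptive_lrf_step(W, cond_threshold=10.0, rank_ratio=0.25)
print(W.shape, W_reg.shape, np.linalg.matrix_rank(W_reg))
```

In a training loop, such a step would presumably be applied periodically to each layer, so that well-conditioned layers keep their full parameterization while ill-conditioned ones are regularized by the low-rank replacement.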
