Paper Title

Introducing One Sided Margin Loss for Solving Classification Problems in Deep Networks

Paper Authors

Ali Karimi, Zahra Mousavi Kouzehkanan, Reshad Hosseini, Hadi Asheri

Paper Abstract

This paper introduces a new loss function, OSM (One-Sided Margin), to solve maximum-margin classification problems effectively. Unlike the hinge loss, in OSM the margin is explicitly determined by a corresponding hyperparameter, and then the classification problem is solved. In experiments, we observe that using the OSM loss leads to faster training and better accuracy than binary and categorical cross-entropy in several commonly used deep models for classification and optical character recognition problems. OSM consistently shows better classification accuracy than cross-entropy and hinge losses for small to large neural networks, and it also leads to a more efficient training procedure. We achieved state-of-the-art accuracies for small networks on several benchmark datasets: CIFAR10 (98.82%), CIFAR100 (91.56%), Flowers (98.04%), and Stanford Cars (93.91%), with considerable improvements over other loss functions. Moreover, the accuracies are better than those of cross-entropy and hinge loss for large networks as well. Therefore, we strongly believe that OSM is a powerful alternative to hinge and cross-entropy losses for training deep neural networks on classification tasks.
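
To make the abstract's idea concrete, below is a minimal PyTorch sketch of a one-sided margin loss with an explicit margin hyperparameter. The function name osm_loss and the exact formulation (a hinge that is active only on one side of a fixed margin, applied per class score) are illustrative assumptions based on the abstract; consult the paper for the authors' actual definition.

import torch
import torch.nn.functional as F

def osm_loss(logits, targets, margin=1.0):
    # Illustrative one-sided margin loss (assumed formulation, not the
    # paper's verified definition). The margin is set explicitly by the
    # `margin` hyperparameter, as the abstract describes.
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    # Correct class: penalized only when its score falls below +margin.
    pos_term = one_hot * F.relu(margin - logits)
    # Wrong classes: penalized only when their scores rise above -margin.
    neg_term = (1.0 - one_hot) * F.relu(margin + logits)
    return (pos_term + neg_term).sum(dim=1).mean()

# Usage example on random data: 8 samples, 10 classes.
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(osm_loss(logits, targets, margin=1.0))

Unlike a standard hinge loss, which compares the correct-class score against the other scores, each term here is one-sided: it vanishes entirely once a score clears its fixed margin, so well-classified examples contribute exactly zero gradient.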
