Paper Title
Discriminative Feature Learning through Feature Distance Loss
Paper Authors
Paper Abstract
Ensembles of convolutional neural networks have shown remarkable results in learning discriminative semantic features for image classification tasks. However, the models in an ensemble often concentrate on similar regions in images. This work proposes a novel method that forces a set of base models to learn different features for a classification task. These models are combined in an ensemble to make a collective classification. The key finding is that by forcing the models to concentrate on different features, the classification accuracy is increased. To learn different feature concepts, a so-called feature distance loss is applied to the feature maps. Experiments on benchmark convolutional neural networks (VGG16, ResNet, AlexNet), popular datasets (Cifar10, Cifar100, miniImageNet, NEU, BSD, TEX), and different numbers of training samples (3, 5, 10, 20, 50, 100 per class) show the effectiveness of the proposed feature distance loss. The proposed method outperforms classical ensemble versions of the base models. Class activation maps explicitly demonstrate the ability to learn different feature concepts. The code is available at: https://github.com/2Obe/Feature-Distance-Loss.git
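The abstract only describes the method at a high level; the exact loss formulation is given in the paper and the linked repository. As a rough illustration of the general idea, the sketch below penalizes similarity between the pooled feature maps of the base models so that each model is pushed toward different feature concepts. The function name feature_distance_loss, the cosine-similarity formulation, and the assumption that all feature maps share the same channel dimension are illustrative choices, not the authors' definition.

```python
import torch
import torch.nn.functional as F

def feature_distance_loss(feature_maps):
    """Illustrative sketch (not the paper's exact loss).

    feature_maps: list of tensors of shape (batch, channels, h, w),
    one per base model, assumed to have matching channel dimensions
    (e.g. after a projection layer). The term rewards dissimilar
    feature descriptors across models by penalizing pairwise
    cosine similarity of the globally pooled feature maps.
    """
    # Global-average-pool each feature map to a (batch, channels) descriptor.
    pooled = [fm.mean(dim=(2, 3)) for fm in feature_maps]

    loss = feature_maps[0].new_zeros(())
    num_pairs = 0
    for i in range(len(pooled)):
        for j in range(i + 1, len(pooled)):
            # High cosine similarity means the two models focus on
            # similar feature directions; penalize it.
            sim = F.cosine_similarity(pooled[i], pooled[j], dim=1)
            loss = loss + sim.abs().mean()
            num_pairs += 1
    return loss / max(num_pairs, 1)
```

In a training setup of this kind, such a term would typically be added, with a weighting hyperparameter, to the standard classification losses of the individual models so that accuracy and feature diversity are optimized jointly; the actual training procedure used by the authors is documented in the repository linked above.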