Paper Title
Rethinking Cost-sensitive Classification in Deep Learning via Adversarial Data Augmentation
Paper Authors
Paper Abstract
Cost-sensitive classification is critical in applications where misclassification errors vary widely in cost. However, over-parameterization poses fundamental challenges to the cost-sensitive modeling of deep neural networks (DNNs). Because a DNN can fully interpolate its training dataset, evaluation purely on the training set becomes ineffective at distinguishing a cost-sensitive solution from its overall accuracy-maximizing counterpart. This necessitates rethinking cost-sensitive classification in DNNs. To address this challenge, this paper proposes a cost-sensitive adversarial data augmentation (CSADA) framework to make over-parameterized models cost-sensitive. The overarching idea is to generate targeted adversarial examples that push the decision boundary in cost-aware directions. These targeted adversarial samples are generated by maximizing the probability of critical misclassifications and are used to train a model that makes more conservative decisions on costly pairs. Experiments on well-known datasets and a pharmacy medication image (PMI) dataset that is made publicly available show that our method can effectively minimize the overall cost and reduce critical errors, while achieving comparable performance in terms of overall accuracy.
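To make the mechanism described in the abstract concrete, the following is a minimal PyTorch-style sketch of the idea as stated: generate a targeted adversarial example that maximizes the probability of a costly target class, then add a training penalty that keeps such examples classified as their source class. The function names, the PGD-style update, the `costly_pairs` mapping, and all hyperparameters (`steps`, `step_size`, `eps`, `lam`) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def targeted_adversarial_example(model, x, target_labels, steps=10, step_size=0.01, eps=0.03):
    """Push inputs x toward a costly target class by maximizing its probability.

    Hypothetical PGD-style sketch; step counts, step size, and the eps-ball
    projection are illustrative choices, not the paper's settings.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Cross-entropy w.r.t. the *target* class: descending this loss
        # increases the probability of the costly misclassification.
        loss = F.cross_entropy(logits, target_labels)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() - step_size * grad.sign()
        # Keep the perturbation within an eps-ball around the original input.
        x_adv = x.detach() + torch.clamp(x_adv - x.detach(), -eps, eps)
    return x_adv.detach()


def cost_sensitive_augmented_loss(model, x, y, costly_pairs, lam=0.5):
    """Standard cross-entropy plus a penalty on targeted adversarial examples.

    `costly_pairs` maps a source class to a costly target class (an assumed
    representation of "critical misclassifications"); `lam` weights the
    cost-aware augmentation term.
    """
    loss = F.cross_entropy(model(x), y)
    for src, tgt in costly_pairs.items():
        mask = (y == src)
        if mask.any():
            tgt_labels = torch.full((int(mask.sum()),), tgt, dtype=torch.long, device=x.device)
            x_adv = targeted_adversarial_example(model, x[mask], tgt_labels)
            # Train the model to still predict the source class on these
            # examples, making decisions on the costly pair more conservative.
            loss = loss + lam * F.cross_entropy(model(x_adv), y[mask])
    return loss
```

In use, `cost_sensitive_augmented_loss` would simply replace the plain cross-entropy loss inside an ordinary training loop; the rest of the optimization is unchanged.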