Paper Title
Easy Batch Normalization
Paper Authors
Abstract
It has been shown that adversarial examples can improve object recognition. But what about their counterpart, easy examples? Easy examples are samples that a machine learning model classifies correctly with high confidence. In this paper, we take a first step toward exploring the potential benefits of using easy examples in the training procedure of neural networks. We propose to use auxiliary batch normalization for easy examples to improve both standard and robust accuracy.
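The auxiliary batch normalization described above can be pictured as a normalization layer that keeps two separate sets of running statistics: a main branch for ordinary training examples and an auxiliary branch for easy examples, with the affine parameters shared between them. The sketch below is a hypothetical NumPy illustration of this idea (the class name `AuxBatchNorm` and its interface are assumptions, not the authors' implementation):

```python
import numpy as np

class AuxBatchNorm:
    """Minimal sketch of auxiliary batch normalization: separate running
    statistics for the main and auxiliary (easy-example) branches, with
    shared affine parameters. Hypothetical illustration only."""

    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        self.eps = eps
        self.momentum = momentum
        # Affine parameters shared between both branches
        self.gamma = np.ones(num_features)
        self.beta = np.zeros(num_features)
        # Separate running statistics per branch
        self.running = {
            "main": {"mean": np.zeros(num_features), "var": np.ones(num_features)},
            "aux":  {"mean": np.zeros(num_features), "var": np.ones(num_features)},
        }

    def forward(self, x, branch="main", training=True):
        stats = self.running[branch]
        if training:
            # Use batch statistics and update this branch's running stats
            mean, var = x.mean(axis=0), x.var(axis=0)
            stats["mean"] = (1 - self.momentum) * stats["mean"] + self.momentum * mean
            stats["var"] = (1 - self.momentum) * stats["var"] + self.momentum * var
        else:
            # At inference time, normalize with the branch's running stats
            mean, var = stats["mean"], stats["var"]
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

# Usage: route easy examples through the auxiliary branch
rng = np.random.default_rng(0)
bn = AuxBatchNorm(4)
easy_batch = rng.normal(loc=5.0, scale=3.0, size=(8, 4))
out = bn.forward(easy_batch, branch="aux")
```

Routing each batch through its own branch keeps the statistics of easy examples from contaminating those of ordinary examples, while the shared `gamma`/`beta` let both distributions contribute to the same learned representation.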