Paper Title

Learning Loss for Test-Time Augmentation

Authors

Ildoo Kim, Younghoon Kim, Sungwoong Kim

Abstract

Data augmentation has been actively studied for robust neural networks. Most of the recent data augmentation methods focus on augmenting datasets during the training phase. At the testing phase, simple transformations are still widely used for test-time augmentation. This paper proposes a novel instance-level test-time augmentation that efficiently selects suitable transformations for a test input. Our proposed method involves an auxiliary module to predict the loss of each possible transformation given the input. Then, the transformations having lower predicted losses are applied to the input. The network obtains the results by averaging the prediction results of augmented inputs. Experimental results on several image classification benchmarks show that the proposed instance-aware test-time augmentation improves the model's robustness against various corruptions.
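To make the procedure in the abstract concrete, below is a minimal PyTorch sketch of the inference-time step: an auxiliary module scores each candidate transformation by its predicted loss for the given test image, the lowest-loss transformations are applied, and the classifier's softmax outputs are averaged. The `LossPredictor` architecture, the candidate transform pool, and the selection size `k` are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of instance-aware test-time augmentation via loss prediction.
# The LossPredictor architecture, the candidate transform pool, and k below
# are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

# Hypothetical pool of candidate test-time transformations.
CANDIDATE_TRANSFORMS = [
    lambda x: x,                                   # identity
    lambda x: TF.hflip(x),                         # horizontal flip
    lambda x: TF.adjust_brightness(x, 1.2),        # slight brightening
    lambda x: TF.adjust_contrast(x, 1.2),          # slight contrast boost
    lambda x: TF.gaussian_blur(x, kernel_size=3),  # mild blur
]


class LossPredictor(nn.Module):
    """Auxiliary module: given an input image, predict the classifier's
    loss under each candidate transformation (one score per transform)."""

    def __init__(self, num_transforms: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_transforms)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output shape: (batch, num_transforms), one predicted loss per transform.
        return self.head(self.features(x))


@torch.no_grad()
def instance_aware_tta(classifier: nn.Module,
                       loss_predictor: LossPredictor,
                       image: torch.Tensor,
                       k: int = 3) -> torch.Tensor:
    """Pick the k transforms with the lowest predicted loss for this image,
    apply them, and average the classifier's softmax outputs."""
    predicted_losses = loss_predictor(image.unsqueeze(0)).squeeze(0)
    chosen = torch.topk(predicted_losses, k, largest=False).indices  # lowest losses
    probs = [
        torch.softmax(classifier(CANDIDATE_TRANSFORMS[i](image).unsqueeze(0)), dim=1)
        for i in chosen.tolist()
    ]
    return torch.stack(probs).mean(dim=0)  # averaged class probabilities
```

Presumably the auxiliary module is trained by regressing the classifier's observed loss on transformed training inputs; the sketch above only covers the test-time selection and averaging described in the abstract.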
