Paper Title

Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection

Authors

Dennis Ulmer, Giovanni Cinà

Abstract

A crucial requirement for reliable deployment of deep learning models in safety-critical applications is the ability to identify out-of-distribution (OOD) data points: samples that differ from the training data and on which a model might underperform. Previous work has attempted to tackle this problem using uncertainty estimation techniques. However, there is empirical evidence that a large family of these techniques does not detect OOD reliably in classification tasks. This paper gives a theoretical explanation for said experimental findings and illustrates it on synthetic data. We prove that such techniques are not able to reliably identify OOD samples in a classification setting, since their level of confidence is generalized to unseen areas of the feature space. This result stems from the interplay between the representation of ReLU networks as piece-wise affine transformations, the saturating nature of activation functions like softmax, and the most widely-used uncertainty metrics.
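The mechanism described in the abstract can be illustrated with a minimal sketch. The weights below are hand-picked (hypothetical, not from the paper): because a ReLU network is piece-wise affine, the logits grow linearly as an input is scaled along a fixed direction within one linear region, so the maximum softmax probability, a standard confidence metric, saturates toward 1 far from any training data.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hand-picked weights for a tiny one-hidden-layer ReLU classifier
# (hypothetical, chosen only to make the saturation effect explicit).
W1 = np.eye(2)                 # hidden layer: h = ReLU(x)
W2 = np.array([[2.0, 0.0],
               [0.0, 1.0]])    # logits: [2*h1, h2]

def max_confidence(x):
    h = np.maximum(0.0, x @ W1)    # ReLU -> network is piece-wise affine in x
    return softmax(h @ W2).max()   # maximum softmax probability

# Scale a fixed direction far outside any plausible training region:
direction = np.array([1.0, 1.0])
for scale in [1.0, 10.0, 100.0]:
    print(f"scale={scale:>6}: confidence={max_confidence(scale * direction):.4f}")
```

For x = s·(1, 1) the logits are (2s, s), so the confidence equals the sigmoid of s and approaches 1 as s grows: exactly the overconfidence on unseen regions that the paper proves for this class of uncertainty metrics.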
