Title

A Review of Uncertainty Calibration in Pretrained Object Detectors

Authors

Denis Huseljic, Marek Herde, Mehmet Muejde, Bernhard Sick

Abstract

In the field of deep learning based computer vision, the development of deep object detection has led to unique paradigms (e.g., two-stage or set-based) and architectures (e.g., Faster-RCNN or DETR) which enable outstanding performance on challenging benchmark datasets. Despite this, the trained object detectors typically do not reliably assess uncertainty regarding their own knowledge, and the quality of their probabilistic predictions is usually poor. As these are often used to make subsequent decisions, such inaccurate probabilistic predictions must be avoided. In this work, we investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting. We propose a framework to ensure a fair, unbiased, and repeatable evaluation and conduct detailed analyses assessing the calibration under distributional changes (e.g., distributional shift and application to out-of-distribution data). Furthermore, by investigating the influence of different detector paradigms, post-processing steps, and suitable choices of metrics, we deliver novel insights into why poor detector calibration emerges. Based on these insights, we are able to improve the calibration of a detector by simply finetuning its last layer.
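For context on the calibration evaluation the abstract describes, the sketch below shows a standard way to quantify miscalibration: the expected calibration error (ECE), which bins predictions by confidence and measures the gap between mean confidence and empirical accuracy in each bin. This is a generic illustration of the metric, not the paper's exact evaluation protocol; the function name, binning scheme, and toy data are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average, over equal-width confidence bins,
    of |bin accuracy - bin mean confidence|.

    confidences: predicted confidence per detection/classification (0..1)
    correct:     1 if the prediction was correct, else 0
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in bin
            conf = confidences[mask].mean()   # mean predicted confidence
            ece += mask.mean() * abs(acc - conf)
    return ece

# Toy example: the 0.9-confidence bin is over-confident is avoided here,
# but accuracy (1.0) still exceeds confidence (0.9), and the 0.1 bin's
# accuracy (0.0) falls short of its confidence (0.1), so ECE > 0.
print(expected_calibration_error([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0]))
```

A perfectly calibrated model would have accuracy equal to mean confidence in every bin, giving an ECE of zero; the paper's finding that pretrained detectors are poorly calibrated corresponds to large values of this kind of gap.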
