Paper Title

Interval Neural Networks: Uncertainty Scores

Authors

Luis Oala, Cosmas Heiß, Jan Macdonald, Maximilian März, Wojciech Samek, Gitta Kutyniok

Abstract


We propose a fast, non-Bayesian method for producing uncertainty scores in the output of pre-trained deep neural networks (DNNs) using a data-driven interval-propagating network. This interval neural network (INN) has interval-valued parameters and propagates its input using interval arithmetic. The INN produces sensible lower and upper bounds encompassing the ground truth. We provide theoretical justification for the validity of these bounds. Furthermore, its asymmetric uncertainty scores offer additional, directional information beyond what Gaussian-based, symmetric variance estimation can provide. We find that noise in the data is adequately captured by the intervals produced with our method. In numerical experiments on an image reconstruction task, we demonstrate the practical utility of INNs as a proxy for the prediction error, in comparison to two state-of-the-art uncertainty quantification methods. In summary, INNs produce fast, theoretically justified uncertainty scores for DNNs that are easy to interpret, come with added information, and serve as improved error proxies: features that may prove useful in advancing the usability of DNNs, especially in sensitive applications such as health care.
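The core mechanism the abstract describes, propagating an input through a layer with interval-valued parameters using interval arithmetic, can be sketched for a single affine layer followed by a ReLU. This is a minimal illustration of standard interval arithmetic, not the paper's implementation; the function names and shapes are assumptions. For each weight `w ∈ [w̲, w̄]` and input `x ∈ [x̲, x̄]`, the product's bounds are the min and max over the four endpoint products, summed elementwise.

```python
import numpy as np

def interval_linear(x_lo, x_hi, W_lo, W_hi, b_lo, b_hi):
    """Propagate an input interval [x_lo, x_hi] through an affine layer
    with interval-valued weights [W_lo, W_hi] and biases [b_lo, b_hi].

    W_* have shape (out, in); x_* have shape (in,); b_* have shape (out,).
    """
    # All four endpoint products for each (weight, input) pair, via broadcasting.
    c1, c2 = W_lo * x_lo, W_lo * x_hi
    c3, c4 = W_hi * x_lo, W_hi * x_hi
    # Elementwise min/max over the corners, then sum over the input dimension.
    lo = np.minimum(np.minimum(c1, c2), np.minimum(c3, c4)).sum(axis=1) + b_lo
    hi = np.maximum(np.maximum(c1, c2), np.maximum(c3, c4)).sum(axis=1) + b_hi
    return lo, hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it can be applied to each endpoint directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy example: one output neuron, two inputs with interval weights.
x_lo, x_hi = np.array([0.5, 1.0]), np.array([1.0, 2.0])
W_lo, W_hi = np.array([[1.0, -1.0]]), np.array([[2.0, 0.0]])
b_lo, b_hi = np.array([-0.1]), np.array([0.1])
lo, hi = interval_relu(*interval_linear(x_lo, x_hi, W_lo, W_hi, b_lo, b_hi))
```

The resulting `[lo, hi]` is guaranteed to contain the output of every point network whose weights and inputs lie within the given intervals; the gap `hi - lo` is the kind of asymmetric, per-output uncertainty score the abstract refers to.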
