Paper Title

Revisiting Explicit Regularization in Neural Networks for Well-Calibrated Predictive Uncertainty

Paper Authors

Taejong Joo, Uijung Chung

Abstract

From the statistical learning perspective, complexity control via explicit regularization is a necessity for improving the generalization of over-parameterized models. However, the impressive generalization performance of neural networks with only implicit regularization may be at odds with this conventional wisdom. In this work, we revisit the importance of explicit regularization for obtaining well-calibrated predictive uncertainty. Specifically, we introduce a probabilistic measure of calibration performance, which is lower bounded by the log-likelihood. We then explore explicit regularization techniques for improving the log-likelihood on unseen samples, which provides well-calibrated predictive uncertainty. Our findings present a new direction to improve the predictive probability quality of deterministic neural networks, which can be an efficient and scalable alternative to Bayesian neural networks and ensemble methods.
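To make the evaluation the abstract describes concrete, the sketch below computes two standard quantities on held-out predictions: the negative log-likelihood (the quantity the paper proposes to improve via explicit regularization, and which lower-bounds its calibration measure) and an expected calibration error (ECE). This is a minimal illustrative sketch, not the authors' code: the equal-width 15-bin ECE and the toy inputs are assumptions for illustration, and the paper's probabilistic calibration measure is defined differently.

```python
import numpy as np

def nll(probs, labels):
    """Average negative log-likelihood of the true labels.

    probs: (N, C) predicted class probabilities; labels: (N,) integer class ids.
    Lower NLL on unseen samples indicates better predictive uncertainty in the
    sense used by the abstract (the log-likelihood lower-bounds the
    calibration measure).
    """
    eps = 1e-12  # guard against log(0)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def ece(probs, labels, n_bins=15):
    """Expected calibration error with equal-width confidence bins
    (a common convention, assumed here for illustration)."""
    conf = probs.max(axis=1)                # predicted confidence
    pred = probs.argmax(axis=1)             # predicted class
    correct = (pred == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # |accuracy - confidence| gap, weighted by the bin's population share
            gap = abs(correct[mask].mean() - conf[mask].mean())
            total += mask.mean() * gap
    return total

# Toy usage: three samples, three classes (hypothetical values).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.4, 0.3]])
labels = np.array([0, 1, 2])
print(f"NLL = {nll(probs, labels):.3f}, ECE = {ece(probs, labels):.3f}")
```

In this framing, an explicit regularizer (e.g., weight decay on the network parameters, one standard example) would be tuned to reduce the held-out NLL rather than only the error rate, which is the direction the paper advocates for deterministic networks.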
