Paper Title


Provable Robust Classification via Learned Smoothed Densities

Authors

Saeed Saremi, Rupesh Srivastava

Abstract


Smoothing classifiers and probability density functions with Gaussian kernels appear unrelated, but in this work, they are unified for the problem of robust classification. The key building block is approximating the $\textit{energy function}$ of the random variable $Y=X+N(0,σ^2 I_d)$ with a neural network which we use to formulate the problem of robust classification in terms of $\widehat{x}(Y)$, the $\textit{Bayes estimator}$ of $X$ given the noisy measurements $Y$. We introduce $\textit{empirical Bayes smoothed classifiers}$ within the framework of $\textit{randomized smoothing}$ and study it theoretically for the two-class linear classifier, where we show one can improve their robustness above $\textit{the margin}$. We test the theory on MNIST and we show that with a learned smoothed energy function and a linear classifier we can achieve provable $\ell_2$ robust accuracies that are competitive with empirical defenses. This setup can be significantly improved by $\textit{learning}$ empirical Bayes smoothed classifiers with adversarial training and on MNIST we show that we can achieve provable robust accuracies higher than the state-of-the-art empirical defenses in a range of radii. We discuss some fundamental challenges of randomized smoothing based on a geometric interpretation due to concentration of Gaussians in high dimensions, and we finish the paper with a proposal for using walk-jump sampling, itself based on learned smoothed densities, for robust classification.
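The key building block of the abstract, the Bayes estimator $\widehat{x}(Y)$ of $X$ given $Y = X + N(0, \sigma^2 I_d)$, can be written in terms of the energy function $E(y) = -\log p_Y(y)$ via the empirical Bayes (Miyasawa/Tweedie) identity $\widehat{x}(y) = y - \sigma^2 \nabla_y E(y)$. Below is a minimal sketch of this identity on a toy 1-D Gaussian model where the posterior mean is known in closed form; in the paper the energy is a learned neural network, so here an analytic energy with a numerical gradient merely stands in for it. All variable names (`mu`, `tau`, `energy`, `x_hat`) are illustrative choices, not names from the paper.

```python
import numpy as np

# Empirical Bayes (Tweedie/Miyasawa) sketch: for Y = X + N(0, sigma^2),
# the posterior-mean estimator is x_hat(y) = y - sigma^2 * dE/dy,
# where E(y) = -log p_Y(y) is the energy of the smoothed density.
# Toy setup: X ~ N(mu, tau^2), so Y ~ N(mu, tau^2 + sigma^2) in closed form.

mu, tau, sigma = 1.0, 2.0, 0.5

def energy(y):
    # E(y) = -log p_Y(y) up to an additive constant (constants drop
    # out of the gradient, so they are omitted here)
    return 0.5 * (y - mu) ** 2 / (tau ** 2 + sigma ** 2)

def grad_energy(y, eps=1e-5):
    # central finite difference; stands in for autograd on a learned energy
    return (energy(y + eps) - energy(y - eps)) / (2 * eps)

def x_hat(y):
    # Tweedie/Miyasawa estimator built from the energy gradient
    return y - sigma ** 2 * grad_energy(y)

y = 3.0
# closed-form posterior mean of the Gaussian-Gaussian model, for comparison
posterior_mean = (tau ** 2 * y + sigma ** 2 * mu) / (tau ** 2 + sigma ** 2)
print(x_hat(y), posterior_mean)  # the two estimates should agree
```

The point of the toy model is that the estimator never touches the prior on $X$ directly: only the (learned) smoothed density of $Y$ enters, which is what makes the construction compatible with randomized smoothing.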
