Paper Title


Sparsity-Inducing Categorical Prior Improves Robustness of the Information Bottleneck

Authors

Anirban Samaddar, Sandeep Madireddy, Prasanna Balaprakash, Tapabrata Maiti, Gustavo de los Campos, Ian Fischer

Abstract


The information bottleneck framework provides a systematic approach to learning representations that compress nuisance information in the input and extract semantically meaningful information about predictions. However, the choice of a prior distribution that fixes the dimensionality across all the data can restrict the flexibility of this approach for learning robust representations. We present a novel sparsity-inducing spike-and-slab categorical prior that uses sparsity as a mechanism to provide the flexibility that allows each data point to learn its own dimension distribution. In addition, it provides a mechanism for learning a joint distribution of the latent variable and the sparsity and hence can account for the complete uncertainty in the latent space. Through a series of experiments using in-distribution and out-of-distribution learning scenarios on the MNIST, CIFAR-10, and ImageNet data, we show that the proposed approach improves accuracy and robustness compared to traditional fixed-dimensional priors, as well as other sparsity-inducing mechanisms for latent variable models proposed in the literature.
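To make the core idea of the prior concrete, the following is a minimal illustrative sketch (not the paper's actual variational implementation): a categorical variable picks how many latent dimensions are active for a given data point, the active dimensions draw from a Gaussian "slab", and the inactive tail collapses to the "spike" at zero. The function name, the uniform dimension distribution, and the Gaussian slab are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_slab_latent(dim_probs, latent_dim):
    """Illustrative spike-and-slab categorical draw: a categorical
    variable selects the number of active dimensions k; the first k
    dimensions get Gaussian 'slab' values, the rest stay at the
    'spike' (exactly zero). Hypothetical helper, not the paper's code."""
    # Categorical draw over possible active dimensionalities 1..latent_dim.
    k = int(rng.choice(np.arange(1, latent_dim + 1), p=dim_probs))
    z = np.zeros(latent_dim)
    # Slab: standard Gaussian on the k active dimensions.
    z[:k] = rng.normal(size=k)
    return k, z

latent_dim = 8
# Per-data-point dimension distribution (uniform here for simplicity;
# in the paper each data point learns its own distribution).
dim_probs = np.full(latent_dim, 1.0 / latent_dim)
k, z = sample_spike_slab_latent(dim_probs, latent_dim)
print(k, z)  # inactive tail z[k:] is exactly zero
```

In the actual model, `dim_probs` would be produced per input by an encoder and trained jointly with the latent variable, so sparsity and latent uncertainty are captured together rather than fixed in advance.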
