Paper Title

Learning Invariances in Neural Networks

Paper Authors

Gregory Benton, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson

Paper Abstract

Invariances to translations have imbued convolutional neural networks with powerful generalization properties. However, we often do not know a priori what invariances are present in the data, or to what extent a model should be invariant to a given symmetry group. We show how to learn invariances and equivariances by parameterizing a distribution over augmentations and optimizing the training loss simultaneously with respect to the network parameters and augmentation parameters. With this simple procedure we can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction, from a large space of augmentations, on training data alone.
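
The abstract's core idea, sampling augmentations from a learnable distribution and backpropagating the training loss into both the network weights and the distribution's parameters, can be sketched in a few lines of PyTorch. The sketch below is an illustrative reconstruction, not the authors' released code: it learns the half-width of a uniform distribution over rotation angles via the reparameterization trick, applying rotations with a differentiable grid sample so gradients reach the width parameter. The `LearnedRotation` class, the `training_step` helper, and the settings `n_samples=4` and `reg=0.01` are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnedRotation(nn.Module):
    """Learnable uniform distribution over rotation angles.

    Angles are sampled as theta = width * u with u ~ Uniform(-1, 1), so the
    loss is differentiable in `width` (reparameterization trick), and the
    rotation itself is applied with a differentiable grid sample.
    """

    def __init__(self, init_width=0.1):
        super().__init__()
        # Half-width of the angle distribution, in radians (learned).
        self.width = nn.Parameter(torch.tensor(init_width))

    def forward(self, x):
        n = x.size(0)
        u = 2 * torch.rand(n, device=x.device) - 1   # u ~ Uniform(-1, 1)
        theta = self.width * u                       # sampled angles
        cos = torch.cos(theta)
        sin = torch.sin(theta)
        zero = torch.zeros_like(theta)
        # One 2x3 affine (pure rotation) matrix per image in the batch.
        mats = torch.stack([torch.stack([cos, -sin, zero], dim=1),
                            torch.stack([sin,  cos, zero], dim=1)], dim=1)
        grid = F.affine_grid(mats, x.shape, align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


def training_step(net, aug, x, y, n_samples=4, reg=0.01):
    """Average predictions over sampled augmentations; the regularizer
    rewards wide distributions so invariance only shrinks when the data
    demands it (coefficient chosen for illustration)."""
    logits = torch.stack([net(aug(x)) for _ in range(n_samples)]).mean(0)
    return F.cross_entropy(logits, y) - reg * aug.width
```

In use, both sets of parameters go into one optimizer, e.g. `torch.optim.Adam(list(net.parameters()) + list(aug.parameters()))`, and `training_step(net, aug, x, y)` is minimized as usual. If the task really is rotation-invariant, the learned width grows; if rotations hurt the fit, it collapses toward zero, which is the "correct extent of invariance" behavior the abstract describes.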
