Paper Title

Robust Perception through Equivariance

Paper Authors

Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hao Wang, Carl Vondrick

Abstract


Deep networks for computer vision are not reliable when they encounter adversarial examples. In this paper, we introduce a framework that uses the dense intrinsic constraints in natural images to robustify inference. By introducing constraints at inference time, we can shift the burden of robustness from training to the inference algorithm, thereby allowing the model to adjust dynamically to each individual image's unique and potentially novel characteristics at inference time. Among different constraints, we find that equivariance-based constraints are most effective, because they allow dense constraints in the feature space without overly constraining the representation at a fine-grained level. Our theoretical results validate the importance of having such dense constraints at inference time. Our empirical experiments show that restoring feature equivariance at inference time defends against worst-case adversarial perturbations. The method obtains improved adversarial robustness on four datasets (ImageNet, Cityscapes, PASCAL VOC, and MS-COCO) on image recognition, semantic segmentation, and instance segmentation tasks. Project page is available at equi4robust.cs.columbia.edu.
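The core idea above — measuring how far a feature map is from being equivariant under a transformation, and optimizing the input at inference time to shrink that residual — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the feature extractor, the cyclic-shift transformation, the finite-difference optimizer, and all names (`features`, `equi_loss`, `restore_equivariance`) are assumptions chosen to keep the sketch dependency-free; the actual method operates on deep-network features of real images.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8
# Toy weight matrix standing in for a deep network's feature extractor.
W = rng.standard_normal((D, D)) / np.sqrt(D)

def features(x):
    """Toy nonlinear feature map (stand-in for network features)."""
    return np.tanh(W @ x)

def transform(v):
    """The transformation T: here, a cyclic shift by one position."""
    return np.roll(v, 1)

def equi_loss(x):
    """Equivariance residual ||f(T(x)) - T(f(x))||^2.

    Zero iff the feature map commutes with T on this input; adversarial
    perturbations tend to increase it.
    """
    r = features(transform(x)) - transform(features(x))
    return float(r @ r)

def restore_equivariance(x, steps=150, lr=0.05, eps=1e-4):
    """Inference-time repair: descend on an additive correction `delta`
    to the input so the equivariance residual shrinks.

    Finite-difference gradients keep the sketch dependency-free; a real
    implementation would backpropagate through the network instead.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        base = equi_loss(x + delta)
        grad = np.zeros_like(delta)
        for i in range(len(delta)):
            bump = np.zeros_like(delta)
            bump[i] = eps
            grad[i] = (equi_loss(x + delta + bump) - base) / eps
        delta -= lr * grad
    return x + delta

x_adv = rng.standard_normal(D)  # stand-in for an attacked input
before = equi_loss(x_adv)
after = equi_loss(restore_equivariance(x_adv))
print(f"residual before: {before:.4f}  after: {after:.4f}")
```

The repaired input is then fed to the downstream classifier or segmenter; the point of the sketch is only the structure of the inference-time loop, in which robustness comes from per-input optimization rather than from training.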
