Paper Title
Encryption Inspired Adversarial Defense for Visual Classification
Paper Authors
Paper Abstract
Conventional adversarial defenses reduce classification accuracy whether or not a model is under attack. Moreover, most image-processing-based defenses are defeated due to the problem of obfuscated gradients. In this paper, we propose a new adversarial defense: a defensive transform applied to both training and test images, inspired by perceptual image encryption methods. The proposed method utilizes a block-wise pixel shuffling method with a secret key. The experiments are carried out on both adaptive and non-adaptive maximum-norm-bounded white-box attacks while taking obfuscated gradients into account. The results show that the proposed defense achieves high accuracy on clean images (91.55%) and on adversarial examples with a noise distance of 8/255 (89.66%) on the CIFAR-10 dataset. Thus, the proposed defense outperforms state-of-the-art adversarial defenses, including latent adversarial training, adversarial training, and thermometer encoding.
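To make the core idea concrete, the following is a minimal sketch of a key-based block-wise pixel shuffling transform of the kind the abstract describes. The block size, key handling, and function name are illustrative assumptions for this sketch, not the authors' exact implementation; the same keyed transform would be applied to both training and test images.

```python
import numpy as np

def blockwise_pixel_shuffle(image, block_size=4, key=1234):
    """Shuffle pixel positions within each non-overlapping block using a
    permutation seeded by a secret key (illustrative sketch, not the paper's code).

    image: H x W x C array; H and W are assumed divisible by block_size.
    The same key-derived permutation is reused for every block, so anyone
    holding the key can invert the transform.
    """
    h, w, c = image.shape
    assert h % block_size == 0 and w % block_size == 0
    rng = np.random.RandomState(key)                  # secret key seeds the permutation
    perm = rng.permutation(block_size * block_size)   # fixed permutation of in-block positions

    out = image.copy()
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = out[i:i + block_size, j:j + block_size].reshape(-1, c)
            out[i:i + block_size, j:j + block_size] = (
                block[perm].reshape(block_size, block_size, c)
            )
    return out

# Example: transform a CIFAR-10-sized image before feeding it to the classifier.
x = np.random.rand(32, 32, 3).astype(np.float32)
x_shuffled = blockwise_pixel_shuffle(x, block_size=4, key=1234)
```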