Paper Title
Exploiting vulnerabilities of deep neural networks for privacy protection
Paper Authors
Paper Abstract
Adversarial perturbations can be added to images to protect their content from unwanted inferences. These perturbations may, however, be ineffective against classifiers that were not seen during the generation of the perturbation, or against defenses based on re-quantization, median filtering or JPEG compression. To address these limitations, we present an adversarial attack that is specifically designed to protect visual content against unseen classifiers and known defenses. We craft perturbations using an iterative process that is based on the Fast Gradient Sign Method and that randomly selects a classifier and a defense at each iteration. This randomization prevents undesirable overfitting to a specific classifier or defense. We validate the proposed attack in both targeted and untargeted settings on the private classes of the Places365-Standard dataset. Using ResNet18, ResNet50, AlexNet and DenseNet161 as classifiers, the performance of the proposed attack exceeds that of eleven state-of-the-art attacks. The implementation is available at https://github.com/smartcameras/RP-FGSM/.
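For illustration, the randomized iterative procedure described in the abstract can be sketched roughly as follows. This is a minimal PyTorch sketch of the untargeted case, not the authors' implementation (that is available at the repository linked above): the function and helper names, the straight-through re-quantization defense, and the hyperparameter values (epsilon, alpha, num_iters) are all illustrative assumptions, and non-differentiable defenses such as median filtering or JPEG compression would similarly need differentiable approximations to be used in the gradient step.

```python
import random
import torch
import torch.nn.functional as F

def no_defense(x):
    # Placeholder for the "no defense" case.
    return x

def requantize(x, levels=32):
    # Re-quantization defense with a straight-through gradient estimator,
    # since rounding has zero gradient almost everywhere.
    q = torch.round(x * (levels - 1)) / (levels - 1)
    return x + (q - x).detach()

def random_iterative_fgsm(image, label, classifiers, defenses,
                          epsilon=8 / 255, alpha=1 / 255, num_iters=40):
    """Iterative FGSM that draws a random classifier and a random defense
    at each step, so the perturbation overfits to neither (sketch only;
    hyperparameter values are illustrative assumptions)."""
    adv = image.clone().detach()
    for _ in range(num_iters):
        model = random.choice(classifiers)   # random classifier this step
        defense = random.choice(defenses)    # random defense this step
        adv.requires_grad_(True)
        # Attack the defended input so the perturbation survives the defense.
        loss = F.cross_entropy(model(defense(adv)), label)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                       # untargeted ascent step
            adv = image + (adv - image).clamp(-epsilon, epsilon)  # stay in the L-inf ball
            adv = adv.clamp(0.0, 1.0)                             # valid pixel range
    return adv.detach()

# Usage sketch with pretrained classifiers (hypothetical setup):
# from torchvision.models import resnet18, alexnet
# classifiers = [resnet18(weights="IMAGENET1K_V1").eval(),
#                alexnet(weights="IMAGENET1K_V1").eval()]
# adv = random_iterative_fgsm(image, label, classifiers,
#                             [no_defense, requantize])
```

For the targeted setting mentioned in the abstract, the same loop would instead descend the loss of a chosen target class (subtract the signed gradient of the target-class loss) rather than ascend the loss of the true class.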