Paper Title
Transferable Physical Attack against Object Detection with Separable Attention
Paper Authors
Paper Abstract
Transferable adversarial attacks have long been in the spotlight, since deep learning models have been shown to be vulnerable to adversarial samples. However, existing physical attack methods pay insufficient attention to transferability to unseen models, leading to poor black-box attack performance. In this paper, we put forward a novel method for generating physically realizable adversarial camouflage to achieve transferable attacks against detection models. More specifically, we first introduce multi-scale attention maps based on detection models to capture features of objects at various resolutions. Meanwhile, we adopt a sequence of composite transformations to obtain averaged attention maps, which curbs model-specific noise in the attention and thus further boosts transferability. Unlike general visualization interpretation methods, where model attention should be placed on the foreground object as much as possible, we attack the separable attention from the opposite perspective, i.e., suppressing attention on the foreground and enhancing attention on the background. Consequently, transferable adversarial camouflage can be generated efficiently with our novel attention-based loss function. Extensive comparison experiments verify the superiority of our method over state-of-the-art methods.
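To make the "separable attention" idea concrete, below is a minimal sketch of an attention-based loss that suppresses foreground attention while enhancing background attention across multiple scales. The function name, the mask convention, and the exact aggregation are illustrative assumptions, not the paper's actual loss formulation.

```python
import torch
import torch.nn.functional as F

def separable_attention_loss(attention_maps, fg_mask):
    """Illustrative loss in the spirit of the abstract (not the authors' exact loss).

    attention_maps: list of tensors of shape (B, 1, H_i, W_i), one per scale,
        e.g. averaged attention maps extracted from a detection model.
    fg_mask: tensor of shape (B, 1, H, W), 1 on the foreground object, 0 on background.

    Minimizing the returned scalar drives attention away from the foreground
    and toward the background.
    """
    loss = 0.0
    for attn in attention_maps:
        # Resize the foreground mask to this attention map's resolution (assumed bilinear).
        mask = F.interpolate(fg_mask, size=attn.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Mean attention inside the foreground and inside the background.
        fg_attn = (attn * mask).sum(dim=(1, 2, 3)) / mask.sum(dim=(1, 2, 3)).clamp(min=1e-6)
        bg_attn = (attn * (1 - mask)).sum(dim=(1, 2, 3)) / (1 - mask).sum(dim=(1, 2, 3)).clamp(min=1e-6)
        # Suppress foreground attention, enhance background attention.
        loss = loss + (fg_attn - bg_attn).mean()
    return loss / len(attention_maps)
```

In use, this loss would be combined with the camouflage optimization loop: the adversarial texture is updated by gradient descent on the loss computed from attention maps of one or more white-box detectors, with the multi-scale averaging intended to reduce model-specific noise and improve transfer to black-box models.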