Paper Title

Multitask Learning Strengthens Adversarial Robustness

Authors

Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, Carl Vondrick

Abstract

Although deep networks achieve strong accuracy on a range of computer vision benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input perturbations fool the network. We present both theoretical and empirical analyses that connect the adversarial robustness of a model to the number of tasks that it is trained on. Experiments on two datasets show that attack difficulty increases as the number of target tasks increases. Moreover, our results suggest that when models are trained on multiple tasks at once, they become more robust to adversarial attacks on individual tasks. While adversarial defense remains an open challenge, our results suggest that deep networks are vulnerable partly because they are trained on too few tasks.
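To make the setup concrete, below is a minimal, hypothetical sketch of multitask training with a shared backbone and one head per task, where the joint objective is the sum of per-task losses, plus an FGSM-style perturbation computed against that joint loss. This is an illustration only, not the authors' implementation: the model architecture, layer sizes, task definitions, epsilon value, and the names MultitaskNet and multitask_loss are all assumptions introduced here.

```python
# A minimal, hypothetical sketch: shared backbone, one head per task,
# trained on the sum of per-task cross-entropy losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskNet(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, num_classes_per_task):
        super().__init__()
        # Shared feature extractor used by every task head.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One lightweight classification head per task.
        self.heads = nn.ModuleList(nn.Linear(16, n) for n in num_classes_per_task)

    def forward(self, x):
        features = self.backbone(x)
        return [head(features) for head in self.heads]

def multitask_loss(outputs, targets):
    # Joint objective: sum the per-task losses.
    return sum(F.cross_entropy(o, t) for o, t in zip(outputs, targets))

# Two hypothetical classification tasks on the same 32x32 RGB input batch.
model = MultitaskNet(num_classes_per_task=[10, 5])
x = torch.randn(8, 3, 32, 32)
targets = [torch.randint(0, 10, (8,)), torch.randint(0, 5, (8,))]

# An FGSM-style perturbation against the joint loss: to succeed, a single
# perturbation must degrade every task head at once.
epsilon = 8 / 255
x_adv = x.clone().requires_grad_(True)
multitask_loss(model(x_adv), targets).backward()
x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()
```

In this toy setting, the gradient the attack follows is the sum of the per-task gradients, which gives a rough intuition for the abstract's claim: a single perturbation that must fool several heads at once faces a harder optimization problem than one targeting a single task.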
