Title

Second-Order NLP Adversarial Examples

Author

Morris, John X.

Abstract

Adversarial example generation methods in NLP rely on models like language models or sentence encoders to determine if potential adversarial examples are valid. In these methods, a valid adversarial example fools the model being attacked, and is determined to be semantically or syntactically valid by a second model. Research to date has counted all such examples as errors by the attacked model. We contend that these adversarial examples may not be flaws in the attacked model, but flaws in the model that determines validity. We term such invalid inputs second-order adversarial examples. We propose the constraint robustness curve and associated metric ACCS as tools for evaluating the robustness of a constraint to second-order adversarial examples. To generate this curve, we design an adversarial attack to run directly on the semantic similarity models. We test on two constraints, the Universal Sentence Encoder (USE) and BERTScore. Our findings indicate that such second-order examples exist, but are typically less common than first-order adversarial examples in state-of-the-art models. They also indicate that USE is effective as a constraint on NLP adversarial examples, while BERTScore is nearly ineffectual. Code for running the experiments in this paper is available at https://github.com/jxmorris12/second-order-adversarial-examples.
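To make the setup concrete, the short Python sketch below (not taken from the paper's released code) illustrates the kind of semantic-similarity constraint the abstract describes: a perturbed sentence is only accepted as a valid adversarial example if a second model, here the Universal Sentence Encoder, judges it sufficiently similar to the original. A second-order adversarial example is an invalid perturbation that nonetheless passes this check, i.e., it fools the constraint model rather than (or in addition to) the attacked model. The function name and the 0.8 threshold are illustrative assumptions, not values from the paper.

# Minimal sketch of a USE-based semantic similarity constraint.
# Assumes TensorFlow Hub and the public USE v4 module; threshold is illustrative.
import numpy as np
import tensorflow_hub as hub

use_encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def passes_use_constraint(original: str, perturbed: str, threshold: float = 0.8) -> bool:
    """Accept the perturbation only if USE cosine similarity exceeds the threshold."""
    emb_orig, emb_pert = use_encoder([original, perturbed]).numpy()
    cos_sim = float(np.dot(emb_orig, emb_pert) /
                    (np.linalg.norm(emb_orig) * np.linalg.norm(emb_pert)))
    return cos_sim >= threshold

# A perturbation that changes the label-relevant meaning but still scores high
# similarity under USE would be a candidate second-order adversarial example.
print(passes_use_constraint("The movie was great.", "The movie was terrible."))

Attacking the constraint directly, as the paper does to trace the constraint robustness curve, amounts to searching for perturbations that maximize this similarity score while violating human judgments of meaning preservation.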
