Paper Title

SoK: Explainable Machine Learning for Computer Security Applications

Authors

Azqa Nadeem, Daniël Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, Sicco Verwer

Abstract

Explainable Artificial Intelligence (XAI) aims to improve the transparency of machine learning (ML) pipelines. We systematize the increasingly growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify 3 cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for 4 distinct objectives within an ML pipeline, namely 1) XAI-enabled user assistance, 2) XAI-enabled model verification, 3) explanation verification & robustness, and 4) offensive use of explanations. Our analysis of the literature indicates that many of the XAI applications are designed with little understanding of how they might be integrated into analyst workflows -- user studies for explanation evaluation are conducted in only 14% of the cases. The security literature sometimes also fails to disentangle the role of the various stakeholders, e.g., by providing explanations to model users and designers while also exposing them to adversaries. Additionally, the role of model designers is particularly minimized in the security literature. To this end, we present an illustrative tutorial for model designers, demonstrating how XAI can help with model verification. We also discuss scenarios where interpretability by design may be a better alternative. The systematization and the tutorial enable us to challenge several assumptions, and present open problems that can help shape the future of XAI research within cybersecurity.
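
To make the "XAI-enabled model verification" objective concrete, here is a minimal sketch of the kind of workflow the paper's tutorial targets. This is a hypothetical example, not the paper's actual tutorial: it assumes a feature-attribution method (permutation importance) and synthetic stand-in data; all names and features are illustrative.

```python
# Hypothetical sketch of XAI-enabled model verification: a model designer
# inspects feature attributions to check whether a security classifier
# relies on plausible features. Data and feature names are synthetic
# stand-ins (e.g., for network-flow features in an intrusion detector).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: the drop in held-out accuracy when a feature
# is shuffled. High importance on a feature the designer knows to be
# spurious (e.g., a dataset artifact) would flag a verification problem.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```

In a real deployment the designer would compare these attributions against domain knowledge; per the paper's taxonomy, such explanations serve model users and designers but may also need to be withheld from adversaries.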
