Paper Title
On Explainability in AI-Solutions: A Cross-Domain Survey
Paper Authors
Paper Abstract
Artificial Intelligence (AI) increasingly shows its potential to outperform predicate logic algorithms and human control alike. By automatically deriving a system model, AI algorithms learn relations in data that are not detectable by humans. This great strength, however, also makes the use of AI methods dubious. The more complex a model, the more difficult it is for a human to understand the reasoning behind its decisions. As fully automated AI algorithms are currently sparse, every algorithm has to provide reasoning for human operators. For data engineers, metrics such as accuracy and sensitivity are sufficient. However, if models interact with non-experts, explanations have to be understandable. This work provides an extensive survey of the literature on this topic, which, to a large part, consists of other surveys. The findings are mapped to ways of explaining decisions and to reasons for explaining decisions. It shows that the heterogeneity of reasons and methods of and for explainability leads to individual explanatory frameworks.
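The abstract contrasts engineer-facing metrics with explanations that non-experts can understand. As a minimal illustrative sketch (not taken from the paper; the function name, labels, and numbers are hypothetical), the following Python snippet shows what "accuracy" and "sensitivity" amount to for a binary classifier, and why such summary numbers say nothing about the reasoning behind an individual decision:

```python
def accuracy_and_sensitivity(y_true, y_pred):
    """Return (accuracy, sensitivity) for binary labels; positive class is 1."""
    # True positives and false negatives for the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # Overall fraction of correct predictions.
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    return accuracy, sensitivity

# Hypothetical example with six samples.
print(accuracy_and_sensitivity([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
# -> (0.666..., 0.666...): adequate for a data engineer, but it explains
#    nothing about why any single prediction was made.
```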