Paper Title

Who is this Explanation for? Human Intelligence and Knowledge Graphs for eXplainable AI

Paper Authors

Celino, Irene

Paper Abstract

eXplainable AI focuses on generating explanations for the output of an AI algorithm to a user, usually a decision-maker. Such a user needs to interpret the AI system in order to decide whether to trust the machine outcome. When addressing this challenge, therefore, proper attention should be given to producing explanations that are interpretable by the target community of users. In this chapter, we argue for the need to better investigate what constitutes a human explanation, i.e. a justification of the machine behaviour that is interpretable and actionable by human decision-makers. In particular, we focus on the contributions that Human Intelligence can bring to eXplainable AI, especially in conjunction with the exploitation of Knowledge Graphs. Indeed, we call for a better interplay between Knowledge Representation and Reasoning, Social Sciences, Human Computation and Human-Machine Cooperation research -- as already explored in other AI branches -- in order to support the goal of eXplainable AI through the adoption of a Human-in-the-Loop approach.
