Paper Title
Why we do need Explainable AI for Healthcare
Paper Authors
Paper Abstract
The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the debate around the adoption of this technology. One thread of this debate concerns Explainable AI and its promise to render AI devices more transparent and trustworthy. A few voices active in the medical AI space have expressed concerns about the reliability of Explainable AI techniques, questioning their use and their inclusion in guidelines and standards. Revisiting such criticisms, this article offers a balanced and comprehensive perspective on the utility of Explainable AI, focusing on the specificity of clinical applications of AI and placing them in the context of healthcare interventions. Against its detractors and despite valid concerns, we argue that the Explainable AI research program is still central to human-machine interaction and ultimately our main tool against loss of control, a danger that cannot be prevented by rigorous clinical validation alone.