Paper Title

The Value of Measuring Trust in AI - A Socio-Technical System Perspective

Authors

Michaela Benk, Suzanne Tolmeijer, Florian von Wangenheim, Andrea Ferrario

Abstract

Building trust in AI-based systems is deemed critical for their adoption and appropriate use. Recent research has thus attempted to evaluate how various attributes of these systems affect user trust. However, limitations regarding the definition and measurement of trust in AI have hampered progress in the field, leading to results that are inconsistent or difficult to compare. In this work, we provide an overview of the main limitations in defining and measuring trust in AI. We focus on attempts to give trust in AI a numerical value and on the utility of such measures in informing the design of real-world human-AI interactions. Taking a socio-technical system perspective on AI, we explore two distinct approaches to tackle these challenges. We provide actionable recommendations on how these approaches can be implemented in practice to inform the design of human-AI interactions. We thereby aim to provide a starting point for researchers and designers to re-evaluate the current focus on trust in AI, improving the alignment between what empirical research paradigms may offer and the expectations of real-world human-AI interactions.
