Paper Title

Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples

Authors

Chelsea M. Myers, Evan Freed, Luis Fernando Laris Pardo, Anushay Furqan, Sebastian Risi, Jichen Zhu

Abstract

AI algorithms are not immune to biases. Traditionally, non-experts have had little ability to uncover potential social bias (e.g., gender bias) in the algorithms that may impact their lives. We present a preliminary design for CEB, an interactive visualization tool that reveals biases in a commonly used AI method, neural networks (NNs). CEB combines counterfactual examples with an abstraction of the NN decision process to empower non-experts to detect bias. This paper presents the design of CEB and initial findings from an expert panel (n=6) of AI, HCI, and social science experts.
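To make the counterfactual-probing idea concrete, the sketch below (in Python) is a minimal illustration, not the authors' CEB tool or data: it trains a small neural network on synthetic, deliberately biased "hiring" data, then flips only a hypothetical gender feature for one applicant and compares the model's predictions. The feature names, dataset, and threshold are all assumptions made for this example.

```python
# Minimal sketch of counterfactual probing for bias (not the authors' CEB tool).
# Synthetic "hiring" data in which the label is correlated with a hypothetical
# gender feature; flipping only that feature and watching the prediction change
# is the kind of signal a tool like CEB aims to surface for non-experts.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)        # hypothetical sensitive attribute (0 or 1)
experience = rng.normal(5, 2, n)      # hypothetical years of experience
# Deliberately biased labels: the outcome depends on gender as well as experience.
hired = ((experience + 3 * gender + rng.normal(0, 1, n)) > 6).astype(int)

X = np.column_stack([gender, experience])
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, hired)

# Counterfactual example: the same applicant with only the gender feature flipped.
applicant = np.array([[0, 5.0]])
counterfactual = applicant.copy()
counterfactual[0, 0] = 1 - counterfactual[0, 0]

p_orig = model.predict_proba(applicant)[0, 1]
p_cf = model.predict_proba(counterfactual)[0, 1]
print(f"P(hired | original)       = {p_orig:.2f}")
print(f"P(hired | gender flipped) = {p_cf:.2f}")
# A large gap between these two probabilities suggests the network relies on
# the sensitive feature, which is the bias CEB is designed to reveal visually.
```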
