Paper Title

From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML

Authors

Shalaleh Rismani, Renee Shelby, Andrew Smart, Edgar Jatho, Joshua Kroll, AJung Moon, Negar Rostamzadeh

Abstract

Inappropriate design and deployment of machine learning (ML) systems leads to negative downstream social and ethical impact -- described here as social and ethical risks -- for users, society and the environment. Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent. We interviewed 30 industry practitioners on their current social and ethical risk management practices, and collected their first reactions to adapting safety engineering frameworks into their practice -- namely, System Theoretic Process Analysis (STPA) and Failure Mode and Effects Analysis (FMEA). Our findings suggest STPA/FMEA can provide appropriate structure toward social and ethical risk assessment and mitigation processes. However, we also find nontrivial challenges in integrating such frameworks into the fast-paced culture of the ML industry. We call on the ML research community to strengthen existing frameworks and assess their efficacy, ensuring that ML systems are safer for all people.
