Paper Title


Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML Systems

Authors

A. Feder Cooper, Karen Levy, Christopher De Sa

Abstract

Trade-offs between accuracy and efficiency pervade law, public health, and other non-computing domains, which have developed policies to guide how to balance the two in conditions of uncertainty. While computer science also commonly studies accuracy-efficiency trade-offs, their policy implications remain poorly examined. Drawing on risk assessment practices in the US, we argue that, since examining these trade-offs has been useful for guiding governance in other domains, we need to similarly reckon with these trade-offs in governing computer systems. We focus our analysis on distributed machine learning systems. Understanding the policy implications in this area is particularly urgent because such systems, which include autonomous vehicles, tend to be high-stakes and safety-critical. We 1) describe how the trade-off takes shape for these systems, 2) highlight gaps between existing US risk assessment standards and what these systems require to be properly assessed, and 3) make specific calls to action to facilitate accountability when hypothetical risks concerning the accuracy-efficiency trade-off become realized as accidents in the real world. We close by discussing how such accountability mechanisms encourage more just, transparent governance aligned with public values.
