Paper Title

Artificial Intelligence and Arms Control

Authors

Scharre, Paul; Lamberth, Megan

Abstract

Potential advancements in artificial intelligence (AI) could have profound implications for how countries research and develop weapons systems, and how militaries deploy those systems on the battlefield. The idea of AI-enabled military systems has motivated some activists to call for restrictions or bans on some weapon systems, while others have argued that AI may be too diffuse to control. This paper argues that while a ban on all military applications of AI is likely infeasible, there may be specific cases where arms control is possible. Throughout history, the international community has attempted to ban or regulate weapons or military systems for a variety of reasons. This paper analyzes both successes and failures and offers several criteria that seem to influence why arms control works in some cases and not others. We argue that success or failure depends on the desirability (i.e., a weapon's military value versus its perceived horribleness) and feasibility (i.e., sociopolitical factors that influence its success) of arms control. Based on these criteria, and the historical record of past attempts at arms control, we analyze the potential for AI arms control in the future and offer recommendations for what policymakers can do today.
