Lethal AI weapons are here: how can we control them?
致命自主武器來了:我們要如何控制它們?
The development of lethal autonomous weapons (LAWs), including AI-equipped drones, is on the rise. The US Department of Defense, for example, has earmarked US$1 billion so far for its Replicator programme, which aims to build a fleet of small, weaponized autonomous vehicles. Experimental submarines, tanks and ships have been made that use AI to pilot themselves and shoot. Commercially available drones can use AI image recognition to zero in on targets and blow them up. LAWs do not need AI to operate, but the technology adds speed, specificity and the ability to evade defences. Some observers fear a future in which swarms of cheap AI drones could be dispatched by any faction to take out a specific person, using facial recognition.

Warfare is a relatively simple application for AI. “The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car. It’s a graduate-student project,” says Stuart Russell, a computer scientist at the University of California, Berkeley, and a prominent campaigner against AI weapons.

The emergence of AI on the battlefield has spurred debate among researchers, legal experts and ethicists. Some argue that AI-assisted weapons could be more accurate than human-guided ones, potentially reducing both collateral damage — such as civilian casualties and damage to residential areas — and the numbers of soldiers killed and maimed, while helping vulnerable nations and groups to defend themselves. Others emphasize that autonomous weapons could make catastrophic mistakes. And many observers have overarching ethical concerns about passing targeting decisions to an algorithm.
---from Nature
致命自主武器(LAWs)的發展方興未艾,其中包括配備人工智慧的無人機。例如,美國國防部迄今已為其 Replicator 計畫撥款 10 億美元,該計畫旨在打造一支由小型武器化自主載具組成的部隊。目前已有實驗性的潛艇、坦克和船艦能利用人工智慧自行駕駛並開火。市售無人機則可利用人工智慧影像辨識鎖定目標並將其炸毀。致命自主武器不一定需要人工智慧才能運作,但這項技術增加了速度與精確度,並提升了躲避防禦的能力。一些觀察家擔心,未來任何派系都可能派出成群的廉價人工智慧無人機,利用臉部辨識技術除掉特定目標人物。

戰爭是人工智慧一個相對簡單的應用。「讓系統找到一個人並殺死他,在技術上比開發一輛自動駕駛汽車容易得多。這只是個研究生等級的專案,」加州大學柏克萊分校的電腦科學家、知名的反人工智慧武器運動人士 Stuart Russell 說。

人工智慧在戰場上的出現,引發了研究人員、法律專家和倫理學家之間的爭論。有些人認為,人工智慧輔助武器可能比人類操控的武器更精準,有望減少附帶損害(例如平民傷亡和住宅區的破壞)以及士兵死傷致殘的人數,同時幫助弱勢國家和團體自我防衛。其他人則強調,自主武器可能犯下災難性的錯誤。還有許多觀察家對於將瞄準決策交給演算法,抱持著更根本的道德疑慮。
---摘錄翻譯自Nature