18 July, 2018
Tech leaders, including Elon Musk and the three co-founders of Google’s AI subsidiary DeepMind, have signed a pledge promising not to develop “lethal autonomous weapons.”
The pledge warns that weapon systems that use AI to “[select] and [engage] targets without human intervention” pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life “should never be delegated to a machine.”
The pledge was published today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, and it was organized by the Future of Life Institute, a research institute that aims to “mitigate existential risk” to humanity.
Signatories include SpaceX and Tesla CEO Elon Musk; the three co-founders of Google’s DeepMind subsidiary, Shane Legg, Mustafa Suleyman, and Demis Hassabis; Skype founder Jaan Tallinn; and some of the world’s most respected and prominent AI researchers, including Stuart Russell, Yoshua Bengio, and Jürgen Schmidhuber.
Max Tegmark, professor of physics at MIT, said in a statement that the pledge showed AI leaders “shifting from talk to action.”
“Weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons and should be dealt with in the same way,” said Tegmark.
Skeptics also point out that enforcing such laws would be a huge challenge, as the technology needed to develop AI weaponry is already widespread.
Paul Scharre, a military analyst who has written a book on the future of warfare and AI, said: “What seems to be lacking is sustained engagement from AI researchers in explaining to policymakers why they are concerned about autonomous weapons.”
“At least 30 nations have supervised autonomous weapons used to defend against rocket and missile attack,” said Scharre. “The real debate is in the middle space, which the press release is somewhat ambiguous on.”