Jamie Smyth — FT.com Aug 21, 2017
The founders of 116 robotics and artificial intelligence companies have called for a ban on the use of lethal autonomous weapons in the wake of the postponement of UN talks on regulating “killer robots”.
Elon Musk, founder of SpaceX and co-founder of OpenAI, and Mustafa Suleyman, co-founder and head of applied AI at Google’s DeepMind, are among the signatories of an open letter warning that lethal autonomous weapons would permit “armed conflict to be fought on a scale greater than ever, and at timescales faster than humans can comprehend”.
“This new arms race has already begun in every sphere of the battlefield — air, at sea and on land,” said Toby Walsh, professor of AI at University of New South Wales and one of the signatories of the letter. “These autonomous weapons threaten to industrialise war and the way we kill people. If they become part of the military industrial complex they will end up being used against civilians.”
In December, 123 member nations of the UN’s review conference of the Convention on Conventional Weapons agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban. But the talks, which were due to begin on Monday, have been postponed because several member nations have not paid their financial contributions to the UN.
The letter expresses regret that the UN’s first meeting has been cancelled and urges all parties to redouble their efforts ahead of a planned November meeting. The authors said it was the first time AI and robotics companies had taken a joint stance on the issue, although in 2015 a group of technology leaders called for a ban on autonomous weapons.
The US, China and Israel are among the nations racing to develop autonomous weapons technology, which can independently select and pursue courses of action without relying on the judgment of a human controller.
Advances in AI are making it possible to cede to computers many tasks long regarded as beyond the reach of machines, including in modern warfare.
But the adoption of the type of AI technology that featured in the Terminator film franchise, for instance, remains fraught with practical and ethical issues and has alarmed experts.
“We should not lose sight of the fact that, unlike other potential manifestations of AI that still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now,” said Ryan Gariepy, founder of Clearpath Robotics, who was the first of the 116 company founders to sign the letter.
“The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”
Some autonomous weapons systems have already been developed, including a robot sentry gun designed by South Korean arms manufacturer Dodaam Systems. Its Super aEgis II machine gun is capable of identifying, tracking and destroying a target at great distance without human operators.
However, the company has told media that all of its customers so far have asked for safeguards to be embedded in the robot that require human operators to authorise any lethal shooting.
Defence experts say autonomous weapons systems have the potential to save lives on the battlefield. Israel’s Iron Dome anti-rocket system, which is programmed to respond automatically to incoming artillery and rockets, has been credited with shooting down scores of rockets fired from Gaza into Israel.
The UK opposes banning autonomous weapons systems. A US department of defence report last year recommended accelerating exploitation of autonomous technology to “realise the potential military value and to remain ahead of adversaries who also will exploit its operational benefits”.