“The decision to take human life should never be delegated to a machine.” That is according to 2,300 scientists and 140 technology companies who have promised never to take part in the development, production, trade or use of lethal autonomous weapons, better known as killer robots.
Their promise was published this week at the International Joint Conference on Artificial Intelligence in Stockholm, a major event in the world of artificial intelligence. Signatories include Google’s DeepMind, Elon Musk (Tesla and SpaceX), Professor Stuart Russell (UC Berkeley), the European Association for Artificial Intelligence and University College London.
“This is an important signal. In 2015 and 2017, scientists and technology companies started warning about these weapons. They called on the United Nations to take measures to stop developments toward killer robots. Now they’re looking at what they themselves can do to rein in these developments,” says Daan Kayser, programme leader on killer robots at PAX.
In August 2017, 116 robotics CEOs called on the United Nations to find a way to protect people from the dangers of lethal autonomous weapons. This followed a similar call in 2015 from nearly 4,000 artificial intelligence and robotics experts. “It’s not often that scientists and entrepreneurs call for regulation in their own sector. It’s also a signal that we should take seriously,” says Kayser.
Since 2014, diplomats at the UN have been discussing how to deal with autonomous weapons. Because diplomacy is often slow, pressure from the sector itself is crucial in preventing killer robots from becoming a reality.
From August 27 to 31, the UN will hold a second week of talks on killer robots in Geneva, following a first week in April of this year. PAX and the umbrella group Campaign to Stop Killer Robots will be there.