“To prevent a future of killer robots, we must have a legally binding treaty. Nothing less will do,” says Miriam Struyk of PAX. She is in Geneva to attend a week of UN meetings on lethal autonomous weapons, also known as killer robots.
“We expect states to show their determination to avoid dehumanizing the use of force by moving to negotiate a new international legal framework ensuring meaningful human control over selecting and attacking targets.”
The question is: will people retain a meaningful role when robots kill, or will robots be able to kill without any human intervention? And how bad would that be? That is what UN diplomats are discussing this week in Geneva. It is the first time the UN has devoted this much time to killer robots (a second session will follow in August). The attention is justified: the technology is developing quickly, while diplomacy takes its time. Last year, 22 countries called for an outright ban on killer robots, and a growing number of countries have expressed concern about these weapons. Experts in robotics and artificial intelligence have also called for a ban.
Problem and solution
The image of a rational robot that only makes good decisions is naive. The ethical dilemma: should machines be allowed to decide over life and death? The legal dilemma: who is responsible for a decision made by a robot? The programmer, the commander who deploys it, or the machine itself? The security dilemma: what if a dictator or terrorists get hold of these weapons?

What is the solution? People should always retain meaningful control over the selection and elimination of targets. After all, killer robots will not only be used by ‘our’ forces; they will also be used against us. In addition, a code of conduct is needed to govern developers.
Recent developments
Two recent examples demonstrate the need for regulation of killer robots. Google was involved in a project for the US Department of Defense, which is actively seeking AI expertise. Some Google staff objected: “We believe that Google should not be in the business of war” (NYT, 4 April 2018). In another development, a group of international scientists announced a boycott of a South Korean university that is cooperating with a weapons producer to develop AI for weapons systems. The scientists are concerned that this will lead to killer robots (The Guardian, 5 April 2018).