What are killer robots?
Killer robots are autonomous weapon systems that can select and attack targets without meaningful human control. This means the weapon system can use lethal force without a direct instruction from a human operator. The term could apply to various weapon systems, for instance a battle tank, a fighter jet or a warship.
A common misunderstanding is that killer robots are the same as armed drones. With current drones, however, there is still a human operator who selects and attacks targets from a distance. Another misunderstanding is that killer robots are the same as the Terminator or RoboCop. These are science fiction concepts, unlikely to become reality in the coming decades, if ever.
Weapon systems that can autonomously select and attack targets raise many legal, ethical and security concerns. This is why PAX aims for a ban on the development, production and deployment of autonomous weapons. Already in 2011 PAX warned about this development, and with other organisations co-founded the Campaign to Stop Killer Robots in 2013.
Do killer robots exist?
Killer robots do not yet exist, but precursors clearly show the trend of increasing autonomy. Examples include the Harpy, a loitering munition that searches for and attacks enemy radars, and the SGR-1, an armed robot deployed on the border between North and South Korea. A more recent example is the KARGU, a small drone that can attack targets based on facial recognition. The technology required to produce these weapons is developing incredibly quickly. Countries such as China, Russia, Israel, the United States and the United Kingdom are involved in the development of increasingly autonomous weapon systems.
What are the concerns?
For PAX the most important concern is an ethical one. A machine should never be allowed to make the decision over life and death. This decision cannot be reduced to an algorithm.
Then there are legal concerns. Autonomous weapons are unlikely to be able to comply with International Humanitarian Law, as it is unlikely that they will be able to properly distinguish between civilians and combatants, or to make a proportionality assessment. Autonomous weapons also create an accountability vacuum. Who would be responsible for an unlawful act: the robot, the developer or the military commander?
PAX also has security concerns. Autonomous weapons could lower the threshold for using force and reduce the incentive to find political solutions to end conflicts. This new technology could trigger a new international arms race, which would have destabilising effects and threaten international peace and security. What happens when dictators or terrorists get their hands on these weapons? Read more about the concerns in the short PAX publication: Ten reasons to ban killer robots.
What does PAX want?
PAX wants states to agree a pre-emptive ban on the development, production and use of killer robots. In other words, PAX wants an international legally binding instrument that safeguards meaningful human control over the critical functions of selecting and attacking targets.
So far, 30 countries have called for a ban. Moreover, there is growing consensus among states that meaningful human control over the use of force is essential.
In 2015 over 3,000 artificial intelligence experts, and in 2017 116 CEOs of robotics companies, warned against these weapons and called on the UN to take action. The European Parliament, twenty Nobel Peace Prize laureates and over 160 religious leaders have also called for a ban on autonomous weapons. UN Secretary-General Guterres has said these weapons are “politically unacceptable, morally repugnant and should be prohibited by international law”.
What is the Dutch position?
The Dutch national position on autonomous weapons is based on a 2015 report by two advisory councils, ‘Autonomous weapon systems; the need for meaningful control’. PAX has been critical of this report. Its greatest concern is that the report states that human control in the programming phase, before deployment, is sufficient, and that human control over the selection and attack of targets is not necessary. The report also lacks a sense of urgency and pays little attention to the international context and the ethical concerns. PAX continues to work towards a shift in Dutch policy.
What is happening at the United Nations?
Lethal autonomous weapons were first discussed at the Human Rights Council in 2013, following a report by UN Special Rapporteur Christof Heyns. The issue was then taken up by the UN Convention on Conventional Weapons (CCW), where informal expert meetings took place in 2014, 2015 and 2016. In 2017 the first meeting of the Group of Governmental Experts (GGE), which has a more formal mandate, took place at the CCW, and GGE meetings have taken place every year since. PAX takes part in the meetings at the CCW, where we meet with diplomats, make statements, and speak at side events.
Private sector engagement
PAX also focuses on private sector engagement, working to prevent the development of killer robots. This work aims to engage with the tech sector, academics and researchers, as well as arms producers and the financial sector.
For more information, see Reprogramming War.
- European positions on lethal autonomous weapon systems - Update 2018
- Killer Robots - What are they and what are the concerns?
- The State of AI - Artificial Intelligence, the Military and Increasingly Autonomous Weapons
- Les Robots Tueurs: De quoi s’agit-il et quelles sont les préoccupations?
- Robots Asesinos: ¿Qué son y por qué resultan preocupantes?
- Don’t be Evil - A survey of the tech sector’s stance on lethal autonomous weapons
- Convergence? - European positions on lethal autonomous weapon systems Update 2019
- Slippery Slope - The arms industry and increasingly autonomous weapons (2019)
- Conflicted Intelligence - How universities can help prevent the development of lethal autonomous weapons (2020)
- Action kit: Save your university from killer robots (2020)