Increasing autonomy in weapons systems: Ten examples of weapon systems with increasing autonomy
Photo by Clare Conboy
Earlier this year, UN rapporteurs reported on the use of autonomous weapons in Libya. The weapon in question is the Kargu, a small drone with an explosive charge that can operate in a swarm. Normally, these weapons are controlled remotely by a soldier, but the UN report states that they were programmed to attack targets autonomously, without any contact needed between the soldier and the weapon. This raises questions about whether meaningful human control was retained. Another concern is that the Kargu can be used with facial recognition against human targets. This shows that military technology is developing at a rapid pace and that international regulations must be put in place as soon as possible. There is a great danger that a new generation of autonomous weapons will be deployed without clear rules on how they should be used, and without agreement on where to draw the line against fundamentally unacceptable autonomous weapons.
Human approval can easily be removed
Autonomous weapons are weapon systems that can select and attack targets based on sensor inputs, without direct human input. In our new report “Increasing autonomy in weapon systems” we show that a new range of weapon systems is being deployed with increasing levels of autonomy. The fast-paced developments in artificial intelligence and other technical advancements have led to an increase in the capabilities of weapon systems with autonomous functions. These include a larger geographical area and duration of operation, and the potential to identify more types of targets, including humans. There is also an increase in the number of systems that can operate together in swarms and in the types of tasks that can be performed without human involvement. In the ten weapon systems described in the report, a human operator still approves each attack, but that human approval could technically be removed.
The great danger is that a new generation of autonomous weapons with increasing capabilities is coming, yet clear international rules on how they should be used in line with legal and ethical norms are still missing. Furthermore, no red lines have been drawn to establish which technologies are fundamentally unacceptable and must be prohibited entirely. Autonomous weapons that target humans, for example, would be the ultimate form of digital dehumanisation. This lack of rules is a huge threat to our collective security.
United Nations Convention on Conventional Weapons
This week, the 6th Review Conference of the Convention on Conventional Weapons (CCW) is taking place at the UN in Geneva. This Conference, held every five years, is an opportunity for states to take steps and commit themselves to safeguarding human control over the use of force and preventing further digital dehumanisation. The majority of states – including states from Latin America, the Middle East, Asia and Africa, but also European states such as Norway, Germany, and Austria – want new rules for autonomous weapons. However, a small number of highly militarised states, including the United States and Russia, are blocking progress at the UN. This is a great risk to our collective security, and the majority of states should look for ways to address this problem urgently.
Our report Increasing autonomy in weapons systems: Ten examples of weapon systems with increasing autonomy describes ten weapon systems to highlight their diversity, including diversity in capabilities, scale of functioning, operating environment and country of origin.