The Trump administration’s Department of War and the tech company Anthropic have come into conflict. The two had been collaborating until CEO Dario Amodei refused to give the administration unrestricted use of the company’s AI technology. Anthropic had stipulated that its AI could not be used by the U.S. government for mass surveillance or autonomous weapons systems. Unwilling to accept such restrictions, Washington blacklisted the company last weekend. OpenAI, another AI company, appears more than willing to fill the gap.
In the United States, the debate about autonomy in weapons systems now seems to have been left to the Secretary of War and tech companies. Yet beyond the question of whether the technology is reliable at all, the discussion should focus on the ethical and legal issues raised by the growing use of AI in the military domain.
That use is extremely dangerous: autonomous weapons can independently select and attack targets (whether people or buildings) without meaningful human control. Such weapons systems can therefore use lethal force without intervention by military personnel. This kind of autonomy can be built into a wide range of weapons systems, such as tanks, drones, and warships. This new mode of warfare threatens to undermine international law.
This week, diplomats and civil society organizations are discussing autonomous weapons at the United Nations’ Convention on Certain Conventional Weapons in Geneva. The Netherlands is chairing the talks on restricting autonomous weapons. Our colleague Roos Boer is present at the discussions:
Autonomous weapons are systems that can select and attack a target without human intervention. The current debate in the U.S. shows that international agreements on autonomy in weapons systems are not merely urgent but essential. A growing group of countries is speaking out in favor of legally binding agreements, and the urgency of concluding them as soon as possible cannot be overstated. Here in Geneva, states must make clear that after years of deliberations, they are ready for the next step: an international ban on autonomous weapons systems that lack meaningful human control.
Roos Boer, Humanitarian Disarmament expert
We are a co-founder of the Stop Killer Robots campaign, which has responded to earlier developments in the U.S. and has warned for years about the digital dehumanization of warfare. Our presence in Geneva is therefore aimed at urging states to adopt legally binding agreements on this issue.
The growing use of artificial intelligence in warfare and the increasing autonomy of weapons systems are extremely worrying developments. This is not an abstract or distant issue: in all current conflicts, we see artificial intelligence playing a growing role in the deployment of weapons.
Decisions about life and death must not be left to machines. States must begin negotiations as soon as possible on a treaty that sets clear limits.