Autonomous weapons systems are making life-or-death decisions in conflicts without human intervention. These algorithmic systems select and engage targets through pattern recognition, raising serious ethical concerns and accountability challenges. While existing international humanitarian law applies, no treaty specifically regulates these weapons. After over a decade of stalled negotiations within the UN Convention on Certain Conventional Weapons, the UN General Assembly has mandated a process to address the issue. Civil society, led by the Campaign to Stop Killer Robots coalition, advocates for a two-tiered approach that combines prohibitions with regulations. The international community faces a crucial deadline: concluding treaty negotiations by 2026.

In Gaza, algorithmic systems have generated kill lists of up to 37,000 targets, and machines with no conscience are making split-second life-or-death decisions. This isn’t science fiction: autonomous weapons systems, better known as killer robots, are part of today’s reality. They’re also being deployed in Ukraine and were on display at China’s recent military parade. States are using them to gain a competitive advantage in warfare and, as they integrate them into military kill chains, appear overconfident that they remain fully in control. Civil society is demanding a ban on the weapons that pose the highest risks and strict regulation of all others.

The technology and its deployment

Autonomous weapons systems are weapons that, once activated by a human, select and engage targets without further intervention. Unlike drones, which are operated by a human who pulls the trigger, these systems make life-and-death decisions through algorithms, processing sensor data such as facial recognition matches, heat signatures and movement patterns to identify pre-programmed target profiles, and firing automatically when they find a match.
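
To make the decision logic concrete, here is a deliberately simplified sketch, in Python, of the kind of sense–match–engage loop the paragraph above describes. It is purely illustrative: every name, weight and threshold is hypothetical, since no real system’s design is public. What it shows is structural: once the system is activated, no human appears anywhere between detection and engagement.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """Hypothetical fused sensor data for one detected person or object."""
    heat_signature: float    # normalised 0.0-1.0
    movement_score: float    # similarity to a stored movement pattern
    face_match_score: float  # facial-recognition confidence

# Pre-programmed target profile: weights and a firing threshold fixed
# by a human at activation time. After this point, no human is consulted.
PROFILE_WEIGHTS = {"heat": 0.3, "movement": 0.3, "face": 0.4}
ENGAGE_THRESHOLD = 0.75

def match_confidence(r: SensorReading) -> float:
    """Combine sensor scores into a single statistical match estimate.

    A weighted score like this is where error rates come from: a
    civilian whose data happens to resemble the profile can exceed
    the threshold just as easily as an intended target.
    """
    return (PROFILE_WEIGHTS["heat"] * r.heat_signature
            + PROFILE_WEIGHTS["movement"] * r.movement_score
            + PROFILE_WEIGHTS["face"] * r.face_match_score)

def engage(r: SensorReading, confidence: float) -> None:
    # Stand-in for weapon release; here it just reports the decision.
    print(f"ENGAGE at confidence {confidence:.2f} - no human review")

def decision_loop(readings: list[SensorReading]) -> None:
    """Select and engage targets with no human in the loop."""
    for r in readings:
        confidence = match_confidence(r)
        if confidence >= ENGAGE_THRESHOLD:
            engage(r, confidence)

if __name__ == "__main__":
    decision_loop([
        SensorReading(0.9, 0.8, 0.85),  # scores 0.85 -> engaged automatically
        SensorReading(0.8, 0.7, 0.60),  # scores 0.69 -> below threshold, spared
    ])
```

The detail worth noticing is that the threshold comparison is the entire ‘decision’: everything that follows, including any misidentification, is a statistical property of the weights rather than a correctable lapse of judgement.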

Autonomous weapons systems can operate across air, land, water and space, and can be deployed in armed conflicts, border control and law enforcement operations. Their impacts are already apparent. Israel’s military campaign has seen the first case of AI-assisted genocide. Israeli armed forces have deployed multiple algorithmic targeting systems, including The Gospel, which identifies infrastructure to bomb, Lavender, which generates lists of human targets, and Where’s Daddy, which tracks suspected Hamas figures so they can be killed when they’re at home with their families.

AI systems lack emotion and any of the psychological barriers that constrain human behaviour, such as hesitation, moral reflection and concern about consequences. They can’t approximate human judgement, interpret complex situations or understand context – and they assign no value to human life.

As they don’t hesitate and make decisions at astonishing speed, AI systems have the potential to escalate conflicts. And because they make decisions based on pattern recognition and statistical probabilities, they are highly prone to mistakes. Israeli intelligence officials have acknowledged Lavender has an error rate of around 10 per cent, but this hasn’t prevented its widespread deployment. Israel simply priced failure in. During the early weeks of its assault on Gaza, Israeli military policies deemed 15 to 20 civilian deaths acceptable for every junior Hamas operative the algorithm identified. For senior commanders, the acceptable cost could exceed a hundred civilian lives.
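
The scale these figures imply can be made explicit with back-of-envelope arithmetic, sketched below using only the numbers quoted above. The calculation is a rough multiplication of the reported figures, not an independent estimate.

```python
# Rough arithmetic using only the figures reported above - an
# illustration of scale, not an independent estimate.
reported_targets = 37_000   # kill-list size reported in Gaza
error_rate = 0.10           # error rate acknowledged for Lavender

misidentified = reported_targets * error_rate
print(f"Implied misidentifications: {misidentified:,.0f}")  # ~3,700 people
```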

A major consequence of the depersonalisation of violence is a loss of accountability, leaving it unclear who could be held legally responsible when decisions result in violations of international humanitarian or human rights law.

Concerns about autonomous weapons come as part of a larger context of doubt about developments in AI, including their potential impacts on civic space and human rights. As the technology becomes cheaper, it’s spreading across domains, from battlefields to border control to policing operations, raising concerns about proliferation among governments and other actors, including criminal groups. Meanwhile, AI integration in surveillance systems, including facial recognition technologies, is amplifying their reach and effectiveness. Because developing and training these systems requires mass surveillance, they undermine the right to privacy, and the biases embedded in their algorithms perpetuate exclusion based on gender, race and other characteristics.

A regulatory void

Although it’s commonly accepted that existing international humanitarian law – including rules on distinction, proportionality and precautions in attack – applies to all weapons systems, there’s a regulatory gap, with no specific international treaty that prohibits or regulates autonomous weapons systems.

Discussions on a potential treaty have long been underway within the framework of the United Nations (UN) Convention on Certain Conventional Weapons (CCW). In 2013, states parties to the CCW agreed to begin discussing autonomous weapons systems. From 2014 to 2016 they held informal meetings of experts, and in 2017 they established the Group of Governmental Experts on Lethal Autonomous Weapons Systems, which has met regularly since.

This provides a forum where government representatives, legal experts, military specialists and civil society organisations (CSOs) present their views and discuss legal and technical issues, with the goal of creating a new protocol to the CCW to regulate autonomous weapons systems, similar to the protocols that regulate weapons such as landmines. Amended Protocol II, for instance, significantly restricts the use of landmines rather than imposing a complete ban, while a separate, more comprehensive treaty outside the CCW framework – the Anti-Personnel Mine Ban Convention, also known as the Ottawa Treaty – bans anti-personnel mines outright.

The regulation of autonomous weapons systems seems to be following a similar path. Because the CCW requires consensus to make decisions, any state can block proposals, and major military powers including India, Israel, Russia and the USA have persistently done so to prevent progress. In the latest session of the Group of Governmental Experts, held in September, 42 states delivered a joint statement affirming their readiness to move forward. It was a breakthrough after over a decade of stalled talks, but progress remained blocked by major holdouts.

To circumvent these obstacles, the international community has resorted to a parallel mechanism. In December 2023, the UN General Assembly adopted Resolution 78/241, its first on lethal autonomous weapons systems, with 152 states voting in favour, four – Belarus, India, Mali and Russia – against and 11 abstaining. The resolution requested the UN Secretary-General to seek views from states and other stakeholders on addressing the ethical, humanitarian, legal, security and technological issues raised by autonomous weapons systems.

In December 2024, the UN General Assembly adopted Resolution 79/62, with 166 votes in favour, three – Belarus, North Korea and Russia – against and 15 abstentions. This resolution mandated informal consultations among UN member states. These were held in New York in May 2025 and were open to all UN member and observer states plus CSOs, the International Committee of the Red Cross (ICRC), the scientific community and the weapons industry. Discussions went beyond the focus on international humanitarian law that has characterised the Geneva process, exploring ethical dilemmas, human rights implications, security threats and technological risks. Given the speed of development of military AI, the UN Secretary-General, the ICRC and numerous CSOs, including the Campaign to Stop Killer Robots, have called for negotiations to conclude by the end of 2026.

The civil society campaign

The Campaign to Stop Killer Robots, a global coalition of over 270 international, regional and national CSOs and academic partners in more than 70 countries, has led the push for regulation since 2012. The campaign has played a crucial role in shaping the debate, highlighting the wide-ranging risks autonomous weapons systems pose and producing timely research on their evolution. It targets all decision-makers who can influence this agenda, from local to global levels, recognising that public pressure is key to pushing political leaders towards a treaty.

The campaign promotes a two-tiered approach to autonomous weapons systems, a proposal currently supported by over 120 states. This combines prohibitions on the most problematic systems with regulations for all others. If implemented, it would ban systems that directly target humans, that operate without meaningful human control or whose effects can’t be adequately predicted or controlled.

Systems not banned would be permitted only under strong restrictions and requirements to ensure accountability, human oversight, predictability and reliability. Regulations could limit the types of targets – for example, restricting them to military objects rather than people – and when, where and for how long systems can be used. They could also require human supervision, the ability to intervene and deactivate systems, and testing and review before deployment.

A decisive moment

With the technology developing fast, the UN has called for negotiations on a legally binding treaty to regulate or ban these weapons to be concluded by 2026. This gives the international community just a year to reach the agreement that over a decade of talks has failed to produce.

The underlying question is whether machines should be permitted to make lethal decisions without meaningful human control. States that support regulation argue that some decisions about the use of force simply can’t be delegated to algorithms. Those that oppose it maintain that existing international humanitarian law is enough.

The technology can’t be uninvented, but it can be controlled. Once autonomous weapons systems are widely deployed and the idea that machines decide who lives and who dies is normalised, it will be far harder to impose restrictions. The choices being made – or avoided – in the next couple of years will determine whether meaningful limits can still be established.

OUR CALLS FOR ACTION

  • States should hold negotiations on controlling autonomous weapons systems under the UN General Assembly or in other forums where consensus rules don’t allow a small number of states to block progress.
  • States must negotiate, by 2026, a treaty prohibiting autonomous weapons systems that target humans or operate without meaningful human control.
  • States must establish clear accountability mechanisms for violations involving autonomous weapons systems.

For interviews or more information, please contact research@civicus.org

Cover photo by Annegret Hilse/Reuters via Gallo Images