CIVICUS discusses military applications of AI and the urgent need for regulation with Franco Giandana Gigena, Policy Analyst at Access Now, a digital rights organisation that advocates for human rights in the technological age.

As the US government prioritises AI for national security, tech giants such as Anthropic, Google, Meta and OpenAI are forging defence partnerships despite previous ethical commitments. This corporate-military convergence comes alongside faltering regulatory efforts, exemplified by the 2025 Paris AI Action Summit’s failure to achieve consensus. The UK and USA explicitly rejected AI safety pledges, emphasising innovation over regulation. Competition with China is also accelerating development timelines. The consequences of military AI deployment are already visible in conflict zones, such as Gaza.

Why have tech companies reversed their stance on military AI applications?

The conversation around AI policy has shifted dramatically. There’s a clear link between Trump’s rise to power and the shift from prioritising AI safety to accelerating AI development. Google reflects this change, moving from the motto ‘Don’t be evil’ to the more ambiguous ‘Do the right thing’, a phrase flexible enough to suit different interpretations.

But this isn’t entirely new. Google had already engaged with the military in 2017 through Project Maven, which used AI to analyse vast amounts of data and identify targets. After employee protests, the company withdrew in 2018.

Just a few months ago, AI was seen as a technology requiring responsible stewardship. Now the discourse has shifted towards dominance and competition, fuelling an arms race across all sectors, including warfare. The earlier commitment to restrain AI in military applications was based on avoiding systems that could cause direct harm and a global push for ‘ethical AI’. That idea has been replaced by the belief that the west, and particularly the USA, must maintain AI leadership at all costs.

US-based tech companies are working closely with the federal government to maintain their competitive edge. The private sector is yielding to longstanding pressure to take contracts with the Pentagon and other defence agencies. The US Department of Defense explicitly prefers to outsource software development rather than build tools in-house, pointing to even deeper collaboration between tech companies and the military.

Another factor is that other countries have developed AI capabilities independent of the US private sector. With the rise of far-right governments prioritising national security and dominance, tech giants likely see less reputational risk in supporting AI for warfare and more financial incentives to do so.

Finally, the argument that regulation stifles innovation is gaining ground; the Trump administration used it to justify revoking Joe Biden’s AI Executive Order. A related trend is the loosening of content moderation policies by social media platforms, which have dropped fact-checking and other safeguards.

How is geopolitical competition shaping AI development?

China and other non-western powers are viewed as threats. The best example is the ‘DeepSeek effect’. DeepSeek, a Chinese AI model developed with fewer resources than western counterparts, caused significant disruption, reinforcing concerns about the west losing AI dominance while accelerating discussions about military applications.

Western countries have imposed export restrictions on advanced technology to prevent non-western powers from catching up, yet many countries have already found ways to bypass these barriers.

The situation resembles the Cold War’s technological competition, but instead of a bipolar world order we now have a multipolar one, creating unprecedented tensions. The race to develop advanced AI weapons could further divide the global community and reshape alliances. That’s why civil society is calling on governments to prevent further escalation and instead push for transparent, rights-centred AI governance.

What human rights risks do AI-powered military and surveillance systems pose?

AI-driven spyware, autonomous weapons and predictive policing systems make warfare and surveillance increasingly difficult to scrutinise and hold to account. For instance, Pegasus, powerful spyware produced by the Israeli company NSO Group, has been used in Jordan and other countries, such as El Salvador, against activists, journalists and politicians who were unaware they were being monitored for years.

Lethal autonomous and semi-autonomous weapons are being deployed in Gaza and other conflict zones. AI-powered targeting systems such as Lavender and The Gospel select bombing targets, contributing to large-scale destruction and civilian casualties. Other AI-driven weapons include suicide drones, robotic snipers and automated turrets that create kill zones.

These technologies reduce human lives to data points and depersonalise the decision over who lives and who dies, dehumanising warfare and stripping away accountability.

Israel is using AI to refine and test military technology, treating Palestinian lives as expendable. These systems often rely on flawed databases and have minimal human oversight, leading to innocent people being targeted. For these reasons, we call for an immediate ban.

Predictive policing is another concern, because it operates on biased datasets that reinforce discrimination. It profiles and monitors entire communities without consent, often under the guise of preventing crime. This flips the presumption of innocence, making everyone a potential target. The use of biometric data to classify people is also problematic, as it forces people into predefined, often inaccurate categories based on subjective criteria.

What progress has been made toward global AI governance?

During the Paris AI Summit, world leaders, AI companies and experts discussed challenges and opportunities posed by this technology. However, the UK and USA refused to sign the summit’s pledge, showing how powerful nations prioritise AI dominance and national security over global cooperation.

The USA remains the undisputed AI leader and has repeatedly resisted regulatory efforts, arguing that restrictions would slow innovation. US Vice President JD Vance claimed regulation would ‘kill a transformative industry just as it’s taking off’ and add ‘endless legal compliance costs’. The UK suggested the pledge didn’t address urgent issues such as national security threats.

Their refusal represents a setback for global AI governance, reflecting a shift toward prioritising military and economic supremacy.

But there’s some positive news. The Paris Summit launched Current AI, a fund designed to invest in public-interest AI projects, showing some countries still support tangible action to enhance open and collaborative ecosystems.

What steps are needed to ensure responsible AI development?

Governments must ban AI applications that violate human rights, such as predictive policing and military uses. As the arms race continues, increasingly dangerous technologies will keep emerging, with devastating consequences in conflict zones. While a global ban may be difficult to achieve, civil society must push for accountability and demilitarisation.

Other steps include conducting human rights impact assessments throughout AI’s lifecycle and ensuring models and datasets remain transparent and auditable. We should also create smaller, task-focused models that use energy efficiently, so they don’t deplete the water and energy resources vulnerable communities depend on.

The privatisation of AI governance must be addressed. Currently, private companies shape AI policies and prioritise profit over human rights. Regulatory frameworks should be led by public institutions with meaningful civil society involvement, ensuring AI serves the public interest.

Finally, we must bridge the gap between AI development and public awareness. Most people don’t understand how AI shapes policies affecting their rights and freedoms. It is essential to strengthen public education, investigative journalism and digital literacy programmes to ensure AI remains a tool for collective benefit rather than control.