AI governance: the struggle for human rights
As AI use grows, its global regulation becomes an ever more pressing issue. The United Nations recently passed a resolution to establish two new governance mechanisms, while the European Union’s AI Act, the world’s first comprehensive legal framework for AI, will soon become fully applicable. But both approaches contain significant gaps, particularly regarding AI’s military and surveillance uses, where human rights impacts are growing. With the USA now actively undermining multilateral cooperation and tech companies increasingly aligning with authoritarian governments, civil society risks standing alone in advocating for human rights protections. Today’s struggles over regulation will determine whether AI serves humanity as a whole or furthers elite power.
Algorithms decide who lives and dies in Gaza. AI-powered surveillance tracks journalists in Serbia. Autonomous weapons are paraded in a show of force through Beijing. This isn’t science fiction – it’s reality. An AI revolution threatens to reshape many facets of daily life at astonishing speed, bringing an array of human rights impacts. The question of what rules should govern AI – and who gets to set them – only grows more urgent.
AI has global implications, which means it requires coordinated international responses. On 26 August, the United Nations (UN) General Assembly took an important step by adopting a resolution to establish the first-ever international mechanisms specifically designed to govern AI. But as the fraught negotiations surrounding the resolution laid bare, the international community continues to struggle to address the issue adequately.
Human rights concerns
Civil society is particularly concerned about AI’s capacity to intensify attacks on human rights and accelerate the erosion of civic space. These concerns have been further fuelled by the alignment of tech oligarchs with the Trump administration and other repressive states.
One of the ways AI can be used to repress rights is through its integration into surveillance systems such as facial recognition technologies, amplifying their reach and effectiveness, including against protesters. AI can supercharge the spread of disinformation, including through deepfake videos, promoting division, hatred and polarisation, and causing particular harm during election campaigns. Biases in AI algorithms can perpetuate exclusion, including on the basis of gender and race.
In the military domain, AI is used in lethal autonomous weapons, commonly called killer robots, which can identify, select and kill human targets without human intervention. Israel’s campaign of genocide in Gaza has shown the consequences of AI use in warfare, demonstrating how the depersonalisation of violence alters the nature of accountability. But despite the enormous risks, lethal autonomous weapons remain largely unregulated. Talks on the issue have been held under the UN Convention on Certain Conventional Weapons since 2014, but have so far failed to deliver because the consensus-based decision-making process allows major military powers such as Israel, Russia and the USA to block any proposals.
AI’s contribution to the climate crisis
AI’s climate and environmental impacts are another significant concern. Interactions with AI chatbots such as ChatGPT consume around 10 times more electricity than a standard Google search. The International Energy Agency projects that global data centre electricity consumption will more than double from 2024 to 2030 to around 945 terawatt-hours – roughly equivalent to Japan’s entire annual electricity use. AI is the biggest driver of this increase.
Intense demand for electricity from data centres, driven by AI expansion, is fuelling the construction of gas-fired power plants and delaying plans to decommission coal plants at a time when any hope of limiting global temperature rises depends on ending fossil fuel use. Microsoft’s emissions have reportedly grown by 29 per cent since 2020 due to data centre construction to support AI workloads. Google recently removed from its website a pledge to achieve net zero greenhouse gas emissions by 2030; largely due to AI, its emissions increased by 48 per cent between 2019 and 2023.
Data centres also require vast amounts of water for their cooling systems. ChatGPT consumes roughly half a litre of fresh water for every five to 50 queries. Microsoft’s global water consumption spiked 34 per cent from 2021 to 2022, while Google’s rose 20 per cent over the same period, with researchers attributing the majority of this growth to AI development.
Competing governance models
The recent UN resolution establishes two governance mechanisms that were formally agreed as part of the Global Digital Compact at the Summit of the Future in September 2024: an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance. The 40-member Scientific Panel’s job will be to issue annual evidence-based assessments of AI’s risks, opportunities and impacts. The Global Dialogue will serve as a platform for states and others to discuss international cooperation and share best practices.
The resolution resulted from a carefully negotiated compromise between different visions of AI governance that reflect geopolitical divides.
Through its Global AI Governance Initiative, China champions a state-led approach to multilateralism, leaving no role for civil society, while positioning itself as a global south leader by emphasising development and capacity strengthening. It frames its approach as a counter to western technological dominance and insists AI development must serve broader economic and social objectives. States in the Group of 77 – a bloc of mostly global south countries – collectively call for AI to be deployed responsibly and inclusively, prioritising AI’s potential to improve digital economies, education, health and public services, and demanding full inclusion in shaping AI governance frameworks.
Meanwhile, the USA under Trump seeks to preserve its dominance through an ‘America First’ stance that emphasises export restrictions and ‘trusted networks’ of allies rather than multilateral cooperation. It embraces technonationalism – a strategy that treats AI as a tool of economic and geopolitical leverage and sidelines multilateral cooperation in favour of bilateral arrangements. Recent decisions that reflect this shift – and make global cooperation still more challenging – include a 100 per cent tariff on imported AI chips and the purchase of a 10 per cent stake in tech giant Intel.
The European Union (EU) takes a different approach, seeking comprehensive risk-based regulation through legal frameworks while prioritising the Scientific Panel’s independence and the importance of multi-stakeholder dialogue in an attempt to balance innovation with human rights protections.
The UN resolution is non-binding. However, it potentially opens up scope for discussions that could eventually crystallise into binding norms and rules. The challenge is to translate diplomatic achievements into tangible protections for human rights and global security within an increasingly fractured international system.
The EU approach
In contrast to the UN’s soft governance approach, the EU has taken a more prescriptive path with its AI Act, which becomes applicable on 2 August 2026. Hailed as the first comprehensive legal framework on AI, it establishes rules based on levels of risk, with obligations becoming more stringent as risks increase. The measures are intended to guarantee safety, fundamental rights and human-centric AI while also strengthening investment and innovation.
The Act’s tiered approach bans AI systems that present ‘unacceptable’ risks outright, imposes strict obligations on those classed as ‘high risk’ and subjects those deemed ‘limited risk’ to transparency requirements. Current generative AI models are assessed as falling into the limited risk category, requiring basic safeguards such as informing users they’re interacting with AI and preventing illegal content generation.
The AI Act is a step in the right direction for addressing a technology that currently answers only to wealthy tech bosses in Silicon Valley. However, civil society has raised the alarm over serious shortcomings that undermine the Act’s human rights protections. While the initial proposal included an outright ban on live facial recognition, the law’s final version allows limited use subject to safeguards that human rights groups argue aren’t effective enough. The Act bans emotion recognition technologies in education and workplaces but allows their use by law enforcement and in migration contexts – a particularly concerning carve-out given that existing systems show racial bias. The ProtectNotSurveil coalition has warned that migrants and Europe’s racial minorities are serving as testing grounds for AI-powered surveillance and tracking tools.
The Act also exempts AI systems used for national security purposes and, critically, autonomous drones used in warfare. This exclusion is particularly problematic given the EU’s dual role as both an AI standard-setter and a major arms supplier, including to Israel. International hypocrisy was on full display recently when Israel signed the Council of Europe’s AI and Human Rights Convention while simultaneously using AI for mass surveillance and killings in Gaza.
The struggle ahead
The future of technology regulation is approaching a defining moment as the mandates of long-running spaces for dialogue, including the UN Open-Ended Working Group on ICTs and the Internet Governance Forum, are set to expire. These platforms have fostered inclusive discussion, bringing together governments, the private sector, civil society and IT experts. Their potential demise has prompted urgent discussions about the trajectory of technology governance. While the AI Act’s implementation will be a significant milestone for global AI regulation, substantial further action is needed to strengthen the foundations of global AI governance.
The global nature of AI technologies demands global solutions, but so far what’s on offer is a patchwork of regional rules, non-binding international resolutions and lax industry self-regulation, amid geopolitical friction and competing visions of tech governance. State self-interest is prevailing over humanity’s collective needs and universal human rights. Meanwhile, the companies that own AI systems have immense potential power. Any attempt to regulate AI that doesn’t confront this power is sure to fail.
Champions are needed in the international system to push for strong regulation, with human rights at its heart. The pace of AI development means there’s no time to waste.
OUR CALLS FOR ACTION
- The United Nations must strengthen AI governance mechanisms and urgently negotiate a treaty banning lethal autonomous weapons systems.
- The European Union must close loopholes in the AI Act concerning facial recognition, law enforcement uses, migration controls and military applications.
- Governments should establish coordination mechanisms to counter tech giants’ control over AI development and deployment.
For interviews or more information, please contact research@civicus.org
Cover photo by Suriya Phosri/Getty Images via Gallo Images