‘Powerful states view AI governance as a tool for strategic and economic positioning’
CIVICUS discusses AI governance challenges with Federica Marconi, researcher in the multilateralism and global governance programme at Italy’s Institute of International Affairs, a non-profit think tank that promotes awareness of international politics and contributes to the advancement of European integration and multilateral cooperation.
The current global AI governance landscape, characterised by fragmented regulatory approaches and voluntary commitments, isn’t adequate to address serious risks. States prioritise strategic positioning, and major powers, including China, the European Union (EU) and the USA, have competing approaches driven by a determination to secure technological advantages and extend geopolitical influence, making cooperation difficult. While civil society participation is widely recognised as crucial for legitimate governance, formal inclusion doesn’t necessarily translate into real influence over decision-making.
Why does AI need global governance?
AI promises economic growth and development, but it also risks worsening global inequalities: countries leading in innovation and governance are likely to secure lasting economic and strategic advantages over those left behind. AI can create new threats and systemic risks for states and individuals, whose rights may be strengthened but also undermined in unprecedented ways. As technologies become increasingly intertwined with national security concerns, we need a shared framework that maximises benefits while minimising risks.
AI governance debates are becoming more important at the national and regional levels and in multilateral forums. Stakeholders are increasingly seeing the need for a truly global framework, given the transnational and dematerialised nature of AI.
But it is particularly difficult to design such a framework. It must balance robustness to deliver effective outcomes with flexibility to adapt to cyberspace’s rapid evolution. Inclusivity is also essential, so diverse perspectives are represented and innovation is reconciled with ethical considerations. A starting point could be the development of common standards rooted in human rights, democratic principles and the rule of law, alongside agreed international policy priorities to build cooperation.
What’s the current state of AI governance?
Multilateral efforts have gained momentum in recent years. In 2019, the Organisation for Economic Co-operation and Development (OECD) adopted the first intergovernmental standard on AI. That same year, the Osaka Leaders’ Declaration endorsed the non-binding G20 AI Principles, building on the OECD recommendations. In 2021, the United Nations (UN) Educational, Scientific and Cultural Organization issued recommendations on the ethics of AI. Building on these, Japan’s G7 presidency launched the Hiroshima AI Process in 2023, establishing a framework for common principles on responsible AI development and use, further expanded by Italy’s and Canada’s G7 presidencies, with a focus on implementation. And in 2024, the UN developed the Global Digital Compact, a roadmap for global digital cooperation seeking to harness technology’s potential while closing digital divides.
As a result, we now have a patchwork of regulatory approaches and voluntary commitments with gaps, inconsistencies and weak enforcement. Regional and multilateral initiatives have made important progress in fostering a shared understanding of key AI issues and promoting coordinated action, but there’s still no binding global framework and meaningful engagement with regulatory processes, particularly from a human rights perspective, remains difficult.
How is state competition affecting AI governance?
Competition is making progress more difficult. In 2024, an intense race to shape AI governance unfolded as the technology expanded. Major powers adopted competing approaches, seeking technological advantages and influence over AI governance. Standard-setting has become a powerful instrument of geopolitical competition, with states viewing AI governance as a tool for strategic and economic positioning beyond security and ethics.
The EU became a regulatory leader by adopting the first comprehensive legal framework – the AI Act – followed by the General-Purpose AI Code of Practice. EU member states are now working to implement its requirements.
The USA launched its own initiatives, including President Joe Biden’s 2023 Executive Order on Safe, Secure and Trustworthy AI, and President Donald Trump’s 2025 AI Action Plan, with three additional executive orders stating that maintenance of global technological dominance is a national security imperative.
In 2023, China announced its Global AI Governance Initiative to mark the 10th anniversary of its Belt and Road Initiative, a massive infrastructure and investment programme, demonstrating its ambition to align AI governance with its strategic priorities.
Global south states are also developing regulatory frameworks, seeing them as both a necessity and opportunity, but they remain underrepresented in global conversations. Many are strengthening their participation in multilateral forums to ensure their interests are taken into account.
In multilateral arenas, competition and fundamental differences among states often obstruct consensus building, delay processes and limit outcomes, as seen in the lengthy debates at the UN Open-Ended Working Group on security in information and communications technology. Meanwhile, technological change outpaces regulation, making it harder to build effective and adaptable frameworks.
Does civil society have meaningful influence?
Given AI’s impact across sectors, legitimate regulation requires meaningful civil society inclusion. Civil society organisations provide technical expertise, amplify excluded groups’ perspectives and advance transparency and accountability. Their participation is crucial to prevent decisions being dominated by powerful private stakeholders driven by economic interests rather than the public good.
Civil society’s role is widely acknowledged as essential, but it faces two problems: getting access and having influence.
Access to multilateral forums varies. Some arenas restrict civil society participation entirely; others have established structured channels. But even when these mechanisms exist, access alone isn’t enough: having a seat at the table doesn’t guarantee civil society voices shape decisions.
The solution requires overcoming the notion that state leadership and stakeholder participation are competing legitimacy models. Civil society perspectives can be incorporated through governments via national consultations, advisory bodies or official delegations, while civil society can also engage independently with multilateral institutions through established participation channels.
What should states and international institutions do?
States should agree on core AI governance principles. Approaches differ – the EU focuses on human rights protection while the USA prioritises innovation and technological leadership – but establishing common values can help build a global culture of responsible AI development.
States should strengthen existing institutions rather than creating new ones. This includes building their capacity, improving legitimacy and fostering coordination across multilateral forums to reduce fragmentation.
Implementation and enforcement are crucial. Setting principles isn’t enough; they must be translated into actionable measures with effective compliance monitoring.
A multi-centred, multilevel governance approach is needed, with complementary mechanisms operating across global, regional and national levels. This will likely prove more effective than a single, centralised body, combining flexibility with coherence and allowing adaptation to rapid technological change while maintaining shared standards.
Together, these actions would shift global AI governance from fragmented, symbolic measures towards a structured and effective system capable of addressing present and future challenges and establishing guardrails to ensure AI is safe, responsible and inclusive.