‘Setting clear red lines for AI is not about stifling innovation but preventing catastrophic misuse’
CIVICUS discusses the campaign for better AI regulation with Gabriel Wang, a data scientist and AI solution architect at AI4NGO, an international organisation that seeks to empower civil society to harness the transformative potential of AI so it can increase its impact and address critical global and local challenges.
In September 2025, journalist and Nobel Peace Prize laureate Maria Ressa announced the Global Call for AI Red Lines at the United Nations (UN) General Assembly. Backed by over 200 prominent figures, the initiative urges governments to establish binding international safeguards by 2026 to prevent catastrophic AI risks, from autonomous weapons to engineered pandemics, before advanced AI systems surpass meaningful human control.
What’s the Global Call for AI Red Lines?
This is a public campaign launched during the 80th session of the UN General Assembly in September. It urges governments to establish an international agreement defining the ‘do-not-cross’ limits for AI: boundaries to protect humanity from its most dangerous and universally unacceptable risks. The initiative calls for a practical and enforceable global framework for AI governance, with clear standards, accountability mechanisms and pathways for oversight. It conveys a strong sense of urgency, setting a deadline for meaningful international action by the end of 2026.
Over 200 people, among them Nobel Prize winners, former heads of state and leading figures in human rights, diplomacy and economics, have endorsed the call. Their collective support signals a growing international consensus that the risks of unregulated AI are too great to ignore and demand immediate, coordinated action.
What are the main risks posed by AI and what safeguards should be put in place?
The Global Call highlights a series of escalating risks arising from AI’s rapid advancement. These include the possibility of engineered pandemics, large-scale disinformation and the manipulation of people, including children, as well as serious threats to national and international security. There are also growing fears about mass unemployment and systemic human rights violations resulting from the misuse or unchecked development of AI technologies.
While the campaign doesn’t prescribe a definitive list of prohibitions, it calls for the creation of red lines through international scientific and diplomatic dialogue and offers examples to guide these discussions. Some proposed safeguards address how humans and organisations are allowed to use AI, such as prohibiting its involvement in nuclear command and control, lethal autonomous weapons, social scoring systems and mass surveillance, and any use that deceives people by impersonating humans without disclosure.
Others focus on AI behaviour itself: preventing the release of systems capable of disrupting critical infrastructure, aiding in the creation of weapons of mass destruction, or self-improving without human authorisation. Equally vital is ensuring any AI system can be immediately deactivated if control is lost. These safeguards aim to ensure AI serves humanity safely and ethically, without crossing boundaries that could endanger our collective future.
Advocates argue that setting clear red lines is not about stifling innovation but about preventing catastrophic misuse. Establishing enforceable limits ensures technological progress remains safe and accountable. Analysts at the Organisation for Economic Co-operation and Development (OECD) have noted that boundaries aimed at reducing risk can build public trust and create the conditions for innovation grounded in safety and transparency.
What role is civil society playing in shaping global AI governance?
Civil society organisations are at the forefront of this effort, increasingly raising concerns about AI risks and demanding effective global regulation. Groups such as AI4NGO play a vital role in bridging the gap between civil society and global governance processes. Our work focuses on amplifying excluded voices to ensure meaningful representation in policy debates, building coalitions and capacity through collaboration and education, and translating ethical principles into tangible actions that promote accountability in AI development and deployment.
As Kumi Naidoo, former head of Amnesty International, CIVICUS and Greenpeace, recently remarked in an AI4NGO webinar, we need collective intelligence to navigate our current existential challenges. This spirit of collaboration – bringing together diverse perspectives, shared knowledge and coordinated action across borders and sectors – is essential to ensure innovation remains safe, inclusive and aligned with human rights.
What needs to happen to turn this call into binding international action?
Several complementary pathways are being pursued to make the call legally binding. A coalition of leading countries could advance the idea across global forums. The newly established UN Independent Scientific Panel on AI, with support from the OECD, could define technically clear and verifiable red lines. States might endorse initial proposals at the AI Impact Summit in India in 2026, leading to broader consultation at the UN Global Dialogue on AI Governance in Geneva later that year. By the end of 2026, governments could then initiate a UN resolution or joint ministerial statement to launch negotiations for a binding treaty.
Any future treaty should rest on three pillars: a clear list of prohibited AI uses and behaviours; robust, auditable verification mechanisms to ensure compliance; and an independent oversight body to monitor and enforce implementation.