Four countries have banned children from accessing social media, five more have passed laws awaiting implementation and around 40 others are considering bans. What Australia began when it banned under-16s from 10 social media platforms is rapidly becoming a global trend. Children need protection from the documented harms caused by early and heavy social media use, but the rush to introduce bans is outpacing the evidence and happening without consultation. Bans restrict freedom of expression and the right to access information, raise privacy and surveillance concerns and shift compliance burdens from tech companies to children and families. States should explore alternatives, starting by listening to young people.

When Nepal’s young people took to social media to call out corruption, their government tried to silence them by banning 26 social media platforms. Nepal’s September 2025 ban provided the trigger for Gen Z-led protests that ousted the government. But now dozens of governments are rushing to cut the next generation off from social media in the name of protecting them.

A global experiment

Australia introduced its ban on under-16s accessing 10 major social media platforms in November 2024, and it took effect in December 2025. The policy has proved divisive, with both its appropriateness and effectiveness called into question, but this hasn’t stopped other governments rushing to introduce their own bans. Three other countries – China, Indonesia and Vietnam – have now implemented bans, while five others – Brazil, France, Malaysia, Turkey and the United Arab Emirates – have passed laws due to come into force.

Governments are considering bans or politicians are demanding them in around 40 countries, over a fifth of United Nations members. The government of Greece, which recently announced a parliamentary vote on a ban to take effect next January, is calling for a common European Union (EU) approach, something the European Parliament supports.

Current and proposed bans vary in age range – most set the threshold at 16, though in Greece and Turkey, bans will apply to under-15s – and in the platforms affected. Australia’s ban applies to 10 major platforms, including Instagram, TikTok and YouTube, but not others such as Discord and WhatsApp, while Indonesia’s focuses on eight. Age verification methods differ too. Authorities in Greece plan to mandate an app that will block access on young people’s devices, while Australian authorities rely on platforms using a mix of features, including facial and voice recognition, to estimate users’ ages. What all policies share is the stated aim of preventing harm caused by young people’s social media use.

Motivations

There’s a lot to be concerned about. Young people are exposed to harmful social media content on a huge scale. Disinformation and hate speech are rife and cyberbullying is widespread. Australian government research found that seven out of 10 under-16s had been exposed to harmful content, and over half had experienced cyberbullying.

Girls and young women are targeted with misogynist and sexualised content, a problem exemplified by Twitter/X’s recent nudification scandal, in which people used its Grok AI tool to generate fake sexualised images of women and girls, including children.

Concern is mounting over the impact of heavy social media use on young people’s mental health, including its potential to drive anxiety, depression, difficulties in concentrating and self-harming behaviours. At a crucial stage in young people’s development, their sense of self, habits of communication and understanding of the world are being shaped by privately owned platforms whose algorithms are engineered to grab and hold attention, including through sensationalist content, with the aim of enabling tech companies to profit by selling advertising and harvesting saleable data. Social media is addictive by design, as a US court recently recognised when it ordered Meta and YouTube to pay US$6 million in damages to a woman who testified about the harm caused by becoming addicted to social media before the age of 10.

For proponents of bans, restrictions on young people’s social media access constitute a public health intervention, as justified as bans on underage drinking and smoking. Social media developments have long outpaced regulation. From this perspective, governments are trying to catch up.

Problems

Bans, however, present serious problems. Rights campaigners argue they’re incompatible with the United Nations Convention on the Rights of the Child, which establishes that children have evolving capacities to take decisions and exercise rights as they develop, including the right to seek and receive information. Restricting this right risks leaving vulnerable young people isolated, including LGBTQI+ children seeking support and connection online. This isn’t protection; it’s adding a further layer of exclusion.

Dangers may be worsened if interactions shift to more obscure spaces. When bans such as Australia’s target major platforms, young people may move to less well-known alternatives with weaker oversight and self-regulation. AI chatbots, despite documented cases of them encouraging young people to self-harm, aren’t covered by Australia’s ban.

Many young people will evade bans, including by using VPNs, fake accounts or a parent’s device. Australia’s eSafety Commissioner reports that two-thirds of the target group remain on supposedly banned platforms. Free VPNs expose young people to further security risks, including the logging and sale of their data. Those unable to seek parental help, such as LGBTQI+ children, may be left more isolated.

When platforms fail to comply, governments can fine them, but tech companies may see fines as merely the cost of doing business. Australia’s maximum fine is US$32 million, while Meta’s 2025 revenue stood at over US$200 billion.

Freedom of expression

Bans raise more fundamental questions about online freedom of expression and the right to access information, and who gets to decide the rules. The danger is of normalising the idea that states can be the sole arbiters of who accesses the internet, what they can do and how much privacy they have.

This is a dangerous power to vest even in democratic governments, but the concern is still greater when governments that already restrict civic space and have a record of controlling online expression introduce bans. In 2025, at least 11 per cent of restrictions recorded by the CIVICUS Monitor, our research initiative that tracks worldwide conditions for civil society, had a digital element, including internet and social media restrictions imposed by governments. Social media bans for children could be another tool for tightening state control over online communication. It’s little surprise that the governments of China and Vietnam, which systematically limit what people can say and do online, are among the early adopters.

The government of Malaysia intends to introduce an under-16s social media ban in June, arguing it can do so under its Online Safety Act. The government has a track record of using restrictive laws to penalise criticism and routinely bans books it deems controversial. The concerns are similar in Indonesia, where phased implementation of a ban began in March. Indonesia’s ban comes as the government is introducing laws on cybersecurity and disinformation that will give it further powers to repress online expression, on top of an existing IT law it already uses to criminalise critics.

While governments of different political persuasions are considering bans, the strongest political lobbying is coming from right-wing groups, such as the Trump-aligned Heritage Foundation in the USA. The suppression of young LGBTQI+ identities and online expressions of support for climate action and solidarity with Palestine may be among their motivations.

In recent years, Generation Z has mounted mass protest movements against corruption, economic failures and unaccountable governments, with social media central to how young people communicate, mobilise and build support. Generation Alpha may be denied those same tools.

Little consultation

Governments are introducing bans at a rapid pace, despite the lack of evidence. Few seem willing to take the time to study and learn from Australia’s experience. Many of the bans are being introduced with little democratic debate or public consultation. The Malaysian government, for example, has conducted minimal consultation on its decision.

Few governments are exploring alternatives, including enforcing existing minimum-age requirements, typically set at 13 but broadly ignored by social media companies, or piloting behavioural change approaches to help young people voluntarily limit their social media use.

The current hurry gives the impression that states can’t find a way of holding social media companies accountable, so have instead shifted the burden onto children and their families. The companies that have created the problem are being asked to do little, while new rules may create fresh opportunities for tech companies that supply age verification services.

Age estimation technology has a high error rate, particularly for young people near ban thresholds, leaving platforms needing to collect additional data. The implications extend well beyond children, since those above age thresholds must also verify their identity. Systems that collect a wide range of biometric data raise serious privacy and surveillance concerns, including hacking risks. Last year, a breach of Discord’s age verification provider exposed around 70,000 people’s details.

Data collection on this scale is a temptation both for tech companies whose business model depends on mining data for profit and governments seeking to expand surveillance. A recent investigation into an age-verification system Discord planned to use revealed extensive dangers, finding that Persona software, backed by far-right tech oligarch Peter Thiel, links facial scans to a vast array of public and government data sources, including financial records, and can file suspicious activity reports with authorities.

The concern is such that 438 privacy and security specialists have signed a call for a moratorium on the development of age verification technologies until their effectiveness and impacts on freedoms, privacy and security can be properly assessed.

Voices from the frontline

Goran Rizaov is a journalist and Media and Information Integrity Programme coordinator at the Metamorphosis Foundation, a North Macedonian civil society organisation that works on accountability, digital rights and media development.


Internet access is a human right, and social media, for all its flaws, remains an important source of information and a tool for communication. Young people should not be cut off from it. The solution can’t be to turn off social media, but to make it safer.

Protecting children online requires two things: holding companies accountable and equipping young people to navigate the digital world.

The first step is regulation that targets platforms, not users. Governments should propose, adopt and implement frameworks that place obligations on companies rather than burdening children and their families. The risk, of course, is that regulation can be weaponised to restrict freedom of expression. This is where civil society has a critical role. It must be part of the process, hold social media companies accountable and ensure these frameworks are built to protect people, not curtail their rights.

Governments should also invest in education. Critical thinking, digital literacy and media literacy must be central to how we educate young people. Societies with strong media literacy are far less susceptible to online manipulation.

These bans are not the solution. But they have done one useful thing: they have opened a conversation that was long overdue. At least young people, parents and governments are talking about the dangers of social media. Now we need to make sure that conversation leads somewhere meaningful.


This is an edited extract of our conversation with Goran. Read the full interview here.

Alternatives

Democratic states in particular should take care not to send the signal that intrusive intervention is permissible, because repressive states can be expected to go further and harder.

Rather than rushing to bans, governments should ensure that existing protections are properly enforced. The EU’s Digital Services Act, for example, sets rules to limit harmful content but hasn’t yet been fully implemented. Other approaches should be tested, such as interventions to make social media less addictive by increasing friction – slowing and reducing the number of interactions.

Governments should invest in equipping young people with skills to navigate complex digital environments and set their own limits on social media use. They must emphasise critical thinking, digital literacy and media literacy in education, provide better mental health services and stronger support for those experiencing cyberbullying. Social media companies could be required to fund such initiatives, voluntarily or through taxation.

Governments serious about online wellbeing should respect young people’s agency. They must take the time to engage with young people, who understand the positives and negatives of their online lives. The goal of protecting children from social media harm is legitimate, but the method of blanket bans is deeply flawed.

OUR CALLS FOR ACTION

  • States should consult widely with civil society, including children’s rights groups, to develop comprehensive policies to protect children and young people from social media harm.
  • States that are planning to introduce blanket social media bans for children and young people should pause implementation and review the evidence from countries where bans are already in place.
  • Technology companies should financially support independent schemes to protect young people from social media harm.

For interviews or more information, please contact research@civicus.org

Cover photo by STR/AFP