Supercharged by advances in AI, states are increasingly using facial recognition technology to police dissent. Authoritarian states have pioneered the use of this technology, but more democratic governments are using it too, creating surveillance systems that often operate in secret and without adequate oversight. Human rights concerns include the erosion of the right to privacy, the targeting of excluded groups and the use of the technology to identify protesters and restrict assembly rights. Civil society is working to expose these abuses and demand changes in laws and regulations that haven’t kept up with technological advances. Governments should legislate to address this gap.

As people walk city streets, enter a shop or catch public transport, chances are they’re being watched by one of the world’s billion-plus surveillance cameras. This surveillance is now taking a further, more troubling turn: many of these devices no longer merely record anonymous figures; they identify them.

Facial recognition technology – software that matches images to identities – has been used since at least the early 2000s, but AI is turbocharging it, removing the need for human analysis and enabling real-time identification. This is giving states unprecedented power to monitor and control people.

Hungary, ruled by authoritarian nationalist Viktor Orbán, offers a disturbing example of how the technology can be used to restrict civic freedoms, particularly the right to protest, and intensify exclusion. In March, the government passed a law effectively banning LGBTQI+ Pride events, on the spurious and defamatory pretext of protecting children. Authorities will be able to use facial recognition to enforce the law, identifying anyone who takes part in a Pride event; participants face fines and organisers potential one-year jail sentences.

The Hungarian government is weaponising technology against dissent. Facial recognition is helping further a divisive and repressive political project, in what’s sure to be a dirty campaign for next year’s election.

Authoritarian pioneers

States that deploy the technology claim it improves security, helping combat terrorist threats and identify wanted criminals. But this justification doesn’t stand up to scrutiny, and civil society has documented extensive abuse.

Most fundamentally, widespread use of facial recognition breaches the right to privacy, widely recognised as a fundamental human right. The International Covenant on Civil and Political Rights (ICCPR), ratified by almost all states, sets out that no one’s privacy should be subject to ‘arbitrary or unlawful interference’. Most national constitutions – 186 at the last count – provide some recognition of privacy rights, as does the European Convention on Human Rights.

States argue that facial recognition is being used in limited and lawful ways, insisting its uses comply with the ICCPR, other human rights conventions and national constitutions. But some governments are clearly using facial recognition as part of a broader crackdown on civil society. The pattern is evident: many of the states that have led the way in implementing the technology are deeply authoritarian.

China’s surveillance state leads the way: no country deploys facial recognition more extensively or more ruthlessly. It’s everywhere – in banks, hotels, transport hubs, even public toilets. The state is obsessed with tracking what people are doing and who they’re doing it with, constantly scanning for the slightest sign of dissent or divergence from the narrow party and national identity it expects people to adhere to. Facial recognition is an important component of China’s overwhelming surveillance architecture, linked to an evolving social credit system designed to incentivise compliant behaviours and penalise those deemed anti-social, driving self-censorship.

One Chinese company now claims to have achieved 95 per cent accuracy in recognising faces, even under masks. As it does with many of its industries, China exports this technology as a means of extending its international influence, globalising its surveillance model and providing other repressive states with the tools to crush dissent. In Afghanistan, an unlikely alliance between the Taliban and cutting-edge technology has seen some 90,000 Chinese-manufactured cameras with facial recognition capability installed in the capital, Kabul. These help the Taliban enforce gender apartheid policies that deny women the most basic public visibility and suppress any attempt at public protest.

Iran followed suit in 2022 as part of its war on women, at a time when many were protesting against the systematic denial of their rights. The government announced a plan to introduce facial recognition, linked to its existing extensive biometric database, to further police every aspect of women’s public behaviour.

Russia is now second to China in its use of facial recognition. The state has expanded it from five to 62 regions since the start of its full-scale war on Ukraine, as part of its attempt to eradicate the dissent its invasion sparked. Authorities use the technology pre-emptively, stopping people who’ve participated in past protests when they may be on their way to another. Russia is also exporting its surveillance technology: the Indian government has reportedly incorporated Russian technology in a facial recognition system tracking people at railway stations.

Israel is another pioneer, extensively deploying the technology as part of its apparatus of control over Palestinian lives. Meanwhile, Turkey has turned to facial recognition to silence democratic voices: in response to recent anti-government protests, the authorities have used facial recognition to identify protesters and restrict movement in and out of Istanbul.

Function creep

But troublingly, states that supposedly respect rights are embracing the same repressive technologies. In 2021, it was revealed that police in Brussels, Belgium, had used facial recognition technology potentially hundreds of times – something the government initially tried to deny. That same year, Canada’s privacy watchdog determined that police had broken privacy laws by using facial recognition software that drew on a databank of three billion images scraped from social media feeds. Here too, police initially denied having access to the software.

In 2023, a Czech civil society organisation exposed the police’s secret use of software to match photos against identity card and travel databases. The pattern repeated itself: police overstepped their powers and tried to shroud their actions in secrecy.

These are all instances of function creep: authorities may introduce the technology for ostensibly benign purposes, but once they have it, they can’t resist extending its use. This includes technology deployed for high-profile events like the Olympics, which states typically keep using once the show is over. Often, laws lack adequate safeguards to prevent the technology’s extension and misuse.

In democratic states, just as in repressive ones, authorities use facial recognition to limit protest rights. In the UK, police used real-time facial recognition as part of their heavy-handed response to peaceful protests on the day of King Charles III’s coronation, 6 May 2023. Data gathered through the UK’s extensive camera network is cross-referenced with ‘watch lists’ that include details of people who don’t appear on any police wanted list. Despite a 2020 court ruling against one of the UK’s police forces, which found that its use of the technology breached privacy rights, the rollout of facial recognition continues. Last November, the government issued a call for tenders for a US$26.7 million contract to provide police with more facial recognition technology.

Like many other countries, the UK lacks any specific legal framework governing the use of facial recognition. It does, however, have a suite of recent laws that make it easier to criminalise protesters, and facial recognition will likely complement these restrictions, targeting peaceful protests for climate action, racial justice and Palestinian rights.

Algorithmic discrimination

Facial recognition doesn’t treat everyone the same. The technology may promise objectivity, but it brings bias. Its general potential for error is magnified for people from excluded groups.

The experience of Robert Williams offers one example. Williams, a Black man, was arrested in Detroit, USA in 2020 for a crime he couldn’t possibly have committed, purely on the basis of a false facial recognition match. As with many AI-enabled technologies, the problem lay in software that struggled to distinguish between people with darker skin tones, compounded by institutionally racist policing.

Facial recognition used by private companies reproduces the same discriminatory patterns. The US pharmacy chain Rite Aid used facial recognition systems to identify potential shoplifters but disproportionately misidentified women, Black and Latinx people and people of Asian heritage, leading to humiliating scenes in which innocent people were treated as suspects. Research shows commercial AI-enabled facial recognition systems have error rates of up to 34.4 per cent for darker-skinned women, the result of algorithms overwhelmingly trained on white male faces.

In many cases, the use of facial recognition to perpetuate discrimination is no accident. The Chinese state turned an 11 million-strong ethnic group into test subjects for technologically aided oppression, deploying the technology against the systematically persecuted and predominantly Muslim Uyghur population of the northwestern Xinjiang region before using it anywhere else, in the first known case of a government deliberately using AI to enable racial profiling.

This use of facial recognition against excluded communities is seen elsewhere. The Indian government has used AI-based facial recognition technologies in the contested region of Kashmir, where it severely restricts the civic freedoms of the mostly Muslim population. It also used them during the protests of 2019 and 2020 sparked by changes to the Citizenship Act that denied undocumented Muslims a route to citizenship open to migrants of other faiths. In Thailand’s mostly Muslim southernmost regions, facial data is collected as part of mandatory SIM card registration under the government’s counterinsurgency strategy.

In Singapore, widespread surveillance that makes growing use of facial recognition disproportionately affects migrant workers. In the USA, the first Trump administration issued an executive order allowing AI-enabled facial recognition at borders.

Even legislation that’s supposed to recognise rights can create hierarchies of protection: the European Union’s (EU) AI Act, which entered into force in 2024, offers citizens some protection from AI-enabled surveillance systems but extends no such protection to migrants. With Afghanistan and Iran using facial recognition to repress women, Hungary instrumentalising it against LGBTQI+ people and Israel deploying it against Palestinians, those with the least access to rights are repeatedly the targets.

Stronger regulation needed

Civil society is sounding the alarm. Its investigations have led to numerous revelations of secret use and police overreach, and some campaigns have succeeded in stopping the technology’s deployment. In 2022, a court in Buenos Aires, Argentina ordered the suspension of the city’s facial recognition system because it enabled police to access millions of biometric records without a warrant. The system had been used against leaders of human rights organisations, trade unionists and journalists. The ruling resulted from a challenge brought by a civil society organisation, Argentina’s Observatory of IT Rights.

Many civil society organisations are leading the way, including La Quadrature du Net in France, Roskomsvoboda in Russia and Big Brother Watch in the UK. In the USA, the Surveillance Technology Oversight Project focuses on the harms of surveillance technologies, including facial recognition, particularly in urban settings such as New York, while the Immigrant Defense Project tracks the use of the technology against migrants. Across Europe, Reclaim Your Face, a coalition of civil society organisations, campaigns against biometric mass surveillance, including facial recognition, and advocates for its ban in public spaces. Globally, Access Now leads the call for a complete ban on facial recognition and other biometric surveillance technologies.

It isn’t too late. The rollout of facial recognition hasn’t yet reached a tipping point where its use has become the global norm. But time is running out. A major emphasis of civil society campaigning is on the need for laws and regulations. When the ICCPR was adopted in 1966 and most national constitutions were written, facial recognition was the stuff of science fiction. Laws and regulations simply haven’t kept pace with technological developments.

Recent international efforts offer mixed hope at best. Thanks to concerted civil society advocacy, the EU’s AI Act offers some protections, for example by banning most real-time facial recognition. But the Act fails to protect migrants’ rights, and its provisions against surveillance are subject to a national security exemption. The EU’s least democratic states could exploit these loopholes. Hungary’s threatened use of facial recognition against Pride participants could offer the Act’s first test case.

The Global Digital Compact, agreed at last year’s United Nations (UN) Summit of the Future, says little. There’s a general commitment to ‘establish appropriate safeguards’ against the impact of technology on human rights, and a single mention of surveillance, with states committing to ensure surveillance technologies comply with international law. But the text doesn’t specifically mention facial recognition.

Most troublingly, the UN Cybercrime Convention, adopted last December, threatens to worsen the human rights impacts of facial recognition. The treaty was pushed by repressive states, with Russia to the fore, many of which already use cybercrime laws as a pretext for repression. Civil society and more democratic states worked to strip out the draft treaty’s worst provisions, but the final text still lacks strong human rights safeguards, including on key principles such as legality and non-discrimination. The treaty makes it easier for states to share sensitive biometric data, including data gathered through facial recognition. This creates new dangers for activists and dissidents: exiled critics of authoritarian regimes could face deportation for online dissent after being identified through facial recognition.

Alongside global regulation, national governments must urgently pass laws to rein in facial recognition before it goes too far. They must respect privacy and protest rights and end discriminatory surveillance practices. Civil society has issued the warning – now authorities must act before facial recognition becomes the new normal and protest becomes even more dangerous.

OUR CALLS FOR ACTION

  • States should legislate to ban the use of facial recognition and other remote biometric surveillance technologies.
  • States should be transparent and accountable over any existing uses of facial recognition technology and consult with civil society over its application.
  • States should consult with a wide range of civil society before taking any steps to ratify the Cybercrime Convention.

For interviews or more information, please contact research@civicus.org

Cover photo by Neal/Getty Images via Gallo Images