‘It’s easier and cheaper than ever to spread disinformation on a massive scale’
CIVICUS discusses online disinformation and hate speech, and the role of civil society in combating them, with Imran Ahmed, founder and CEO of the Center for Countering Digital Hate (CCDH).
CCDH is an international civil society organisation dedicated to combating online disinformation and hate and holding social media companies accountable for their role in perpetuating them. As a result of its work to expose harmful content on X/Twitter, it was unsuccessfully sued by the platform’s owner, Elon Musk.
What are the dangers of online disinformation and hate speech?
The risks of online disinformation and hate speech are immense. People have always lied, but we’ve traditionally had safeguards in place to identify and challenge false information in order to maintain a healthy public discourse. But this has become more difficult with the widespread use of social media platforms, which are driven by algorithms that are not designed to represent reality but to trigger emotional responses, create addiction and keep users engaged for as long as possible.
We are now facing what I would call the nuclear age of disinformation, in which it is easier and cheaper than ever to spread disinformation on a massive scale. Disinformation is spread not only to deceive the public, but also to radicalise people and undermine democratic processes. What’s particularly worrying is that some of the world’s largest companies are actively facilitating this.
These companies shape reality for billions of people, for better or worse. Their platforms are the main means by which we share information, establish social norms and even determine what we consider to be fact, so you’d expect them to take this responsibility seriously. But instead, they seem focused only on enriching themselves.
Social media companies often claim to promote a global public discourse that enlightens humanity, but in practice they do anything but. We hope they will eventually live up to this promise. However, all signs currently point in the opposite direction. Right now, our priority is simply to control the damage they’re doing.
How can those responsible for pushing disinformation be held accountable?
Rather than focusing on the individuals sharing false information and hate speech, we should primarily focus on the platforms that amplify these messages. Their systems are designed to maximise engagement, and unfortunately disinformation is what tends to drive engagement the most, keeping users logged in longer and increasing advertising revenue. They continue to promote harmful content simply because it’s profitable.
To address this, we need to make it more costly for these companies to spread disinformation and hate speech. There are two main mechanisms for doing this: regulation and advertiser responsibility.
Governments should regulate the most harmful content. They should impose penalties when platforms cause significant harm to society. I’ve been involved in pushing for such regulation. In September 2021, I testified in favour of the UK’s Online Safety Act, which became law in October 2023. I’ve also worked with policymakers in the European Union (EU) on the Digital Services Act, which regulates online platforms to prevent illegal and harmful activity online. In the USA, our focus is on educating lawmakers about why regulation is needed and promoting civil litigation to impose costs on companies and hold them accountable.
The second lever is working with advertisers, who provide 98 per cent of the platforms’ revenue. Our team engages directly with companies to help them understand their moral responsibility to take a stand against disinformation. We encourage them to make informed decisions about how they spend their money.
How is CCDH working to counter hate speech and disinformation?
CCDH has been quite effective in countering hate speech and disinformation by working closely with governments and the private sector. We’ve worked with the EU on its first regulatory measures and pushed for legislative reform in the USA. While legislation has not yet been passed there, we’ve become a resource for both Democrats and Republicans on their specific concerns. For example, Republicans are particularly concerned about issues such as children’s mental health, while Democrats focus more on issues such as antisemitism, racial hatred and threats to democracy.
But one of our most notable successes was a study we conducted after Musk bought Twitter/X. We found that racist hate speech against African Americans tripled after he took over. This study made the front page of the New York Times and triggered an exodus of advertisers from the platform. Musk admitted this cost the company around US$100 million, and the real impact may have been even greater.
Musk blamed us for this loss, and to intimidate us, in July 2023 he filed a lawsuit against us. We won the case, and he’s currently appealing against the decision. But we’re confident we’ll win again. The court issued an anti-SLAPP (strategic lawsuit against public participation) ruling, which means Musk will also have to pay our legal costs. In essence, the court recognised that Musk was engaging in lawfare – using legal tactics to try to suppress our constitutional right to freedom of expression. This was ironic, considering he claims to be a staunch defender of free speech. Not only did we win, but we exposed his attempt to silence us.
This shows it’s possible to hold platforms accountable for the hate and division they fuel. It also shows that social media platforms aren’t neutral entities. They are shaped by the decisions of their owners and executives, which can significantly influence public behaviour. They are not just platforms. They are publishers and the decisions their executives make are tantamount to their editorial line.
Is it possible to fight disinformation without compromising freedom of expression?
Free speech is essential to a healthy democracy and society. I don’t think governments should decide what people can or cannot think or say, unless it’s something that can cause direct harm.
Having said that, the first step should be for the ecosystem to be transparent. We need a better understanding of how platforms work, and particularly how their algorithms work and how advertising influences the content we see. Users often don’t realise how much advertisers’ interests shape their timeline. Platforms should also be more transparent about their rules and how they enforce them. If content is removed or left up, social media companies should clearly state why they have decided to do so and provide a way to appeal. Researchers should also be given access to data on how social media platforms affect society.
In the EU and UK, regulators can play this role, but in the USA civil litigation may be more effective. However, under Section 230 of the Communications Decency Act of 1996, social media companies in the USA can’t be held liable for harm caused by content their users post. This outdated law essentially gives them a ‘get out of jail free’ card. We’re working to change that.
What roles should civil society play?
Civil society has a crucial role in pushing for new regulations. Our most effective work happens when we work with groups from different sectors – whether it’s sexual and reproductive rights, democracy, antisemitism, Islamophobia, public health or climate change. The key is to clearly understand and communicate how systemic disinformation hinders progress in these areas.
We need to work together to counter this trend, whether through regulation or by targeting advertisers. And we need to address the systemic nature of the problem: the logic of the current system inherently favours disinformation over accurate information.
In the past year, we’ve seen many movements for change. When I started this work eight years ago, many thought change was almost impossible, but now it feels inevitable. A few key factors have driven this change: the recent riots in the UK, which were fuelled by disinformation, growing evidence of how social media is harming children in the USA and increasing awareness within civil society that the harms of social media platforms outweigh the benefits. There’s more alignment on these issues now, and it no longer feels like we’re pushing against our own side. Change is coming, and it feels more tangible than ever.