Barely a day goes by when social media is not in the firing line from activists and advertisers over hate speech and racist rhetoric.
The controversy goes to the heart of the debate about the extent to which social media platforms should become the arbiter of content decisions and whether internet companies should be solely responsible for dealing with abhorrent content posted by users. Facebook and Twitter are both doing more than ever to reduce “online harms” – certainly much more than is legally mandated – but work carried out by Tech Against Terrorism shows that the majority of activity by terrorists and violent extremists has now shifted to the smaller, newer messaging apps, and niche social networks.
We need to acknowledge that, for all the understandable focus on the bigger platforms, it is the minnows who are now predominantly used by Isis, Al-Qaeda and extreme far-right groups due to the limited resources many of these platforms have to eliminate terrorist content. The extremist and violent far-right rapidly adapts its tactics to suit new technology as the response of big tech improves. Facebook is a window through which billions view the online world. But smaller social media platforms are the internet’s back door.
Tech Against Terrorism actively monitors more than 500 extremist channels spread across 20 different smaller content platforms and messaging apps. Our research shows a dramatic increase in the use of terms associated with the extreme far-right, such as “accelerationism”. Accelerationism holds that capitalist governments are hurtling towards imminent collapse, a prospect that the violent, extremist far-right wish to expedite in order to forge a new, racially segregated world order. For them, a welter of violence and conflict is not merely an end in itself, but a step closer to the creation of an “ethnostate” founded upon white supremacy.
How should we respond to this growing threat in a way that doesn’t make the situation worse? We can start by recognising that the internet didn’t invent terrorism or violent extremism. Osama bin Laden didn’t have a smartphone, and the IRA did not have a Twitter account. In fact, most terrorists rely on generating publicity from mainstream media. Governments that demand social media platforms act more quickly to remove illegal content are entitled to do so, but it would be a serious mistake to assume that this will change the underlying behaviour of those who create terrorist content in the first place.
The most effective way to fight extremism, in all its forms, is to create an environment where extreme political discourse can be challenged openly, not pushed underground and valorised. Social media companies are often accused of providing platforms for violent extremists. But the internet reflects views; it doesn’t create them. When larger platforms remove or suppress controversial views, often at the behest of governments, the people who espouse those views are frequently pushed on to smaller platforms – some of which are specifically created to host the extreme discourse removed by big tech competitors.
Governments should recognise that terrorism is a societal problem and treat it accordingly. None of this is easy. Violent extremists pose a challenge to liberal democracies, where freedom of expression is cherished and protected, but hate speech and incitement to violence are already criminal offences. Governments try to balance the right of citizens to say what they want against their duty to protect the public from violent extremists. Sometimes they get that calculation wrong. Under existing legislation aimed primarily at violent Islamist extremism, possession of terrorist propaganda, including certain books, is potentially criminal. So too is viewing certain material online.
Part of the challenge smaller companies face is understanding the scale and complexity of the threat. Unlike Facebook and Twitter, many have limited resources. A requirement to take down content within an hour sounds sensible in principle but doesn’t work in practice.
Forcing social media platforms to vet content before it is even posted would be equally impractical. Imagine having to wait 24 hours before your tweet appears online, or a week before your family photo album appears on your Facebook profile. Moreover, overburdening smaller platforms risks compromising their ability to compete and innovate, making them even more vulnerable to exploitation.
The most effective way to prevent terrorists’ exploitation of smaller social media companies, while ensuring online competition thrives, is to provide them with the practical tools to make their platforms more secure. Tech Against Terrorism mentors smaller platforms, helping them to understand the threat and to spot and remove content quickly and efficiently. But governments also need to work closely with the tech industry to tackle the violent far-right.
They could start by acting more decisively to identify and outlaw these groups. Only a handful are currently designated as terrorist organisations by western governments, including the neo-Nazi group National Action, Blood & Honour in Canada, and the Russian Imperial Movement (RIM), which is proscribed in the US. Designating far-right organisations as terrorist groups would help the smaller platforms that are most vulnerable to extreme far-right exploitation by giving them the legal protection they need to remove content unchallenged.
From our work with the tech industry, we know that there is willingness to work with governments to help defeat violent extremism in all its forms. But ideologies that are rooted in the real world can’t be defeated online.
Adam Hadley is executive director of Tech Against Terrorism