In response to hate speech proliferating online, scientists have proposed a high-tech solution

The struggle to control online hate speech has been a constant headache for big tech companies – but a team of mathematicians and physicists might have come up with a solution.

In a report published in Nature, scientists have shown how previous attempts to tackle hate speech groups have often strengthened overall ‘hate networks’, instead of weakening them as intended.

The researchers advocate for a new, more sophisticated approach, arguing that Big Tech’s failings in the battle to regulate online hate speech are having disastrous real-world consequences.

The team, led by Essex-born physicist Neil F. Johnson at George Washington University, used a sophisticated mathematical model to show that “policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish.”

The report argues that online hate is not restricted to individual networks or countries: it lives in interconnected clusters, allowing ideology to travel down ‘hate highways’ and to shift location whenever just one of the clusters is targeted by a big tech platform.

When certain hate speech bad actors are removed, the overall network of hate speech ‘rewires’ itself and strengthens other connections. Removing toxic speech from one platform like Facebook just forces the network to migrate elsewhere, like a virus that can lie dormant in the body even after a person appears to have recovered.
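The ‘rewiring’ idea can be sketched in a toy model. The following is purely illustrative and is not the authors’ actual model; the graph, the rewiring rule, and all names here are invented for demonstration. It shows how a cluster built around a single prominent account can survive that account’s removal if the remaining members simply link up with each other:

```python
# Toy model (illustrative only, not the Nature paper's model): a hate
# network as an undirected graph. Removing a hub triggers "rewiring" --
# the hub's former neighbours connect directly to one another, so the
# cluster stays intact despite the takedown.

from itertools import combinations


def remove_and_rewire(adj, node):
    """Delete `node` from the graph, then link its former neighbours pairwise."""
    neighbours = adj.pop(node)
    for n in neighbours:
        adj[n].discard(node)
    for a, b in combinations(neighbours, 2):
        adj[a].add(b)
        adj[b].add(a)
    return adj


def is_connected(adj):
    """Check reachability of every node from an arbitrary start (breadth-first)."""
    if not adj:
        return True
    start = next(iter(adj))
    seen, queue = {start}, [start]
    while queue:
        for nxt in adj[queue.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen == set(adj)


# Star-shaped cluster: one 'hub' account linked to five smaller accounts.
adj = {'hub': {'a', 'b', 'c', 'd', 'e'}}
for n in 'abcde':
    adj[n] = {'hub'}

remove_and_rewire(adj, 'hub')
print(is_connected(adj))  # the cluster survives the hub's removal
```

In this sketch, deleting the hub alone would leave five isolated accounts, but once the rewiring step runs the cluster is fully connected again, echoing the paper’s warning that single-node takedowns can leave the wider network intact.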

The interconnectedness and global nature of social media means platforms cannot just be considered in isolation, the study’s authors argue.

Previous strategies to tackle online hate speech and misinformation have included wholesale internet bans and automated content removals on individual platforms.

So what are the scientists’ solutions? 

The interconnectedness of social platforms leads to ‘hate highways’ developing online (Getty Images/EyeEm)

The paper’s authors outline four policies based on their analysis of what the networks of hate really look like.

Some of their suggestions involve removing the smaller members of hate clusters at random, rather than the biggest members, a tactic that has drawn controversy in the past.

Another is for social networks to help anti-hate clusters organise, which would act as a sort of ‘human immune system’ against online hate groups.

They also suggest forcing different hate groups to talk to each other, allowing the groups to battle out their differences among themselves.

However, they also warn that each of the proposals needs to be properly tested before being implemented.

Big tech companies have had problems with hate speech in recent months, after a spate of attacks linked with far-right ideology. Facebook was accused of allowing calls for violence after the Sri Lankan terror attacks and of allowing a sickening livestream to propagate after the New Zealand massacre, while both Facebook and YouTube have been accused of radicalising users and providing safe harbour for online hate.

A report released by LGBT+ charity Galop last year found that 84% of survey respondents experienced more than one occurrence of online abuse.

While researchers and technology executives figure out what to do about hate speech online, authorities in the US are becoming quicker to pre-emptively tackle threats.

Since mass shootings in Texas and Ohio last month, again fuelled by online far-right ideology, US police have arrested at least 28 people who have made similar threats online.

