The UK wants to play a leading role in holding Big Tech companies to account for “online harms” ranging from cyber bullying to illegal content — and its plans are being closely watched by policymakers elsewhere. Ten months ago, a white paper proposed a “duty of care” for online platforms to safeguard users, with a new regulator to police it. Unveiling the next stage of its plans last week, the government said it was “minded” to turn Ofcom, the broadcasting regulator, into Britain’s new internet watchdog. But its latest proposals showed how much work remains to be done to create an effective, proportionate regulatory regime.

The draft legislation would see Ofcom oversee an online harms framework requiring social media platforms and other services to protect users from dangers including child exploitation, terrorist content, revenge pornography and hate crimes. Those found to be in breach of the rules — for example, by taking too long to remove material — could face penalties including fines or having access to their sites blocked in the UK. A similar law, focused more narrowly on hate speech, came into force in Germany in 2018.

Giving Ofcom this role makes sense. The regulator would have the power to require annual transparency reports, establishing both the scale of harmful content on sites and their efforts to fight it. It could also play a role in ensuring that platforms share data and algorithms with independent researchers. Provided sensitive personal information is handled carefully, this could greatly enhance understanding of the effects of online content, especially on vulnerable users.


Yet the plan is arguably a decade too late. Acting sooner would have given regulators a chance to scale up gradually to deal with ever-rising volumes of content. Today, the amount of material on platforms including social media sites, video apps and online games is staggering. Ofcom’s role would be to set the framework and look into complaints of breaches, rather than to police all content. But its record in investigating violations of broadcasting rules — which can take months — highlights the scale of the challenge it would face in its potential new role, even with a much expanded team. The speed at which new technology companies can emerge only makes it harder to stay vigilant.

The definition of online harms is also insufficiently nuanced. Social media companies have proven fairly adept at removing explicitly illegal content such as child abuse imagery, using algorithms and human moderation. Other topics, such as disinformation, are more ambiguous: evidence from Germany shows regulators struggle to deal with satire. Expanding the “duty of care” umbrella to cover children having too much screen time — which the proposals hint could happen in the future — risks overburdening the regulatory regime.

Defining online harms too broadly could also undermine one potentially powerful enforcement tool: senior management liability. Having senior managers personally face potential fines or criminal liability has some precedents in the financial sector. But an overzealous approach could prompt companies to avoid setting up operations in the UK for fear of falling foul of the law. Rights groups warn, too, of damage to freedom of expression.


The government has much to do in fleshing out the codes of practice that will support the regulator’s work before making its final decisions in the spring. Passing mentions of requiring “age verification” for certain sites, for example, fail to reflect the considerable technological challenges such systems entail. The UK is keen to show Big Tech that it has teeth, but it needs sound policy first.


