The UK government has belatedly woken up to the problem of internet-enabled fraud, adding investment and romance scams to the list of harms technology companies should be responsible for policing under new draft legislation. But the last-minute addition is too little, too late.
The Online Safety Bill, announced as part of last week’s Queen’s Speech, requires social media and dating apps to take down “harmful” content even if lawful, or face fines up to 10 per cent of their global turnover. There is also a threat — deferred for now — of senior directors of Big Tech facing criminal charges if they do not fulfil their duty of care.
The draft law comes as countries around the world are grappling with how to recalibrate Big Tech’s legal responsibilities to match the great power it wields, without outsourcing the policing of free speech. US lawmakers on both sides want to see changes to the 1996 law that “created the internet” and freed platforms of legal liability for third-party content they host. It is, then, a shame that the UK seems to have fudged what could have been a pioneering piece of legislation.
The government is right to include fraud in the bill: seven out of 10 “authorised” frauds — where the victim has unwittingly consented to a scam — start online, according to UK Finance, a trade body. The pandemic is also expected to have unleashed a torrent of new scams, from fake invitations for tests to offers of protective equipment.
The bill captures “user-generated content”, or posts by a member or group. That is useful as far as it goes: if a member of an online forum incites others to invest in a scam, or a fake dating profile is used to swindle someone out of their life savings, tech companies will have to take reasonable steps to remove such posts under the draft law — although it is still fuzzy on detail.
But there is no mention of tackling the central problem of online advertising. Be it yield-hungry retirees who google where to invest their lifetime savings following pension liberalisations, or a younger, more digitally savvy generation targeted by celebrity-endorsed trading apps, it is sham advertising that is the real gateway to fraud.
Currently, financial regulators must play digital whack-a-mole. If they spot a fake advert or a cloned website, they must then request that tech companies remove it. That takes time, during which money continues to be lost. And no sooner is one advert removed than another pops up.
The government counters that the wider bill, which targets child pornography and terrorism, is focused on user-generated content. But as currently drafted, it is hard to see the newly enlarged bill as anything other than a tokenistic effort to tackle online fraud.
It also spares tech companies from narrowing a revenue stream. They make money from advertising, fraudulent or not. Niftily, they also make money from anti-fraud campaigners; the UK’s Financial Conduct Authority paid £600,000 last year to post warnings on Google. It is therefore reasonable to suggest that Big Tech verify the legitimacy of that revenue stream in the same way banks are forced to check customers (Google says it is rolling out advertiser verification this year). This ought not to let law enforcement off the hook: investigators should be far more nimble in tackling the vast amount of fraud in the UK, most of which goes unimpeded.
The government can do better. The world is watching. Having belatedly accepted that fraud is worthy of inclusion in such a potentially groundbreaking piece of legislation, the least the government can do is make sure it tackles such harms with robust solutions.