Misinformation about the coronavirus, from phony cures to conspiracy theories about it being a bioweapon, can be deadly. As fake news is spreading along with the pandemic itself, it’s creating a sense of obligation for startups that expose online falsehoods to step up to the plate.
Several VC-backed companies, including NewsGuard and Graphika, are already fighting back: setting up coronavirus misinformation hotlines, partnering with public health advocates and relying on human moderators to alert the public to inaccurate information.
“We have never experienced a time when most of the top headlines were really about a particular topic that has implications of life and death,” said Matt Skibinski, general manager at NewsGuard, which offers detailed ratings of the reliability of over 4,000 news websites.
The New York-based company has set up a free tracker to identify news sites publishing materially false information about the virus. The list included more than 160 as of Monday.
Veteran journalists Steven Brill and Gordon Crovitz co-founded NewsGuard in 2018 after they realized computers alone wouldn’t be adequate to fight misinformation. A cleverly executed fake news article can look just the same to an algorithm as a real one, Skibinski explained. That’s where human intelligence comes into play.
NewsGuard’s team of journalists assesses websites based on standards including ownership, credibility and any history of publishing factually incorrect content. The company said it evaluates the news and information websites that account for 95% of online engagement across the US, Germany, France, Italy and the UK. Its browser extensions display an icon with a credibility score, and a quick hover provides the list of criteria and a detailed explanation of the rating.
“It’s like a nutritional label for the site that basically explains to the reader who they are and why did they get the rating we’ve issued,” said Skibinski.
A key challenge for the company is scale: Its journalists can’t skim through everything on the internet. But Skibinski said readers have helped by flagging potential false stories since the outbreak began.
NewsGuard is also trying to wrestle with an existential question: Will there be enough support from tech giants and social platforms to scale the product?
Facebook broke from its policy of not fact-checking politicians and removed a video shared by Brazilian President Jair Bolsonaro on March 29, in which he claimed the anti-malaria drug hydroxychloroquine “is working in all places.” And Facebook-owned WhatsApp, which has more than 2 billion users, partnered with the World Health Organization, UNICEF and the United Nations Development Programme in March to launch an information hub. But such examples are rare.
“We never expected it to be easy to get large tech companies to admit and acknowledge that they are having a problem with misinformation,” Skibinski said. “Some social platforms are trying to dip their toe in the water, but we still have a long way to change the mindset in Silicon Valley around this.”
Other startups, such as Graphika, also rely on technology like AI and machine learning to tackle large-scale deception across the internet. The company maps and analyzes social network structures, and depending on the quality of those leads, human analysts take a deeper dive.
Based in New York, Graphika was founded in 2013 by current CEO John Kelly and is backed by investors including Social Media Enterprises and Lavrock Ventures.
The company has published several investigative reports highlighting how foreign state actors influenced election campaigns, leaked trade documents and more. Last year, Kelly testified before the Senate Intelligence Committee on foreign interference in the US presidential election of 2016. In March, Graphika published a blog post discussing early coronavirus hoaxes, featuring screenshots of fake text messages circulating online that had the National Security Council scrambling to correct rumors of a national lockdown.
“We know that content moderators are stretched and they’ve got a very difficult job,” said Kelly. “The pandemic is a good opportunity to use enhanced AI-driven content analytic tools to do a better job of triaging what goes in front of those people, and we want to use the computers to make the most efficient use of the human’s time as possible.”
As perpetrators are likely to find many ways to promote authentic-looking content, Kelly cautioned that it’s not enough for social platforms to defend themselves alone. Algorithms and artificial intelligence can play a powerful part, but most campaigns by bad actors work across multiple platforms, so platforms should cooperate with other players in the industry and coordinate efforts to detect and share misinformation.
Said Kelly, “The threat of the coronavirus brings an urgent need to bring the best of what everybody’s learning right now.”