What online platforms are doing (and aren’t doing) to fight misinformation
As people absorb and react to anxiety-producing information about the pandemic, online platforms are particularly vulnerable to misinformation and hoaxes.
Over the last month, several online platforms have mobilized to combat misinformation surrounding the pandemic. On March 16, Facebook, Google, YouTube, Microsoft, LinkedIn, Reddit, and Twitter issued a joint statement saying that they would be teaming up to promote COVID-19 response efforts, promising to “combat fraud and misinformation” while “elevating authoritative content” and “sharing critical updates in coordination with government healthcare agencies around the world.”
Since then, some platforms have continued to update their policies and roll out new features, while others remain lax on misinformation.
In March, Mark Zuckerberg announced that Facebook would remove “false claims and conspiracy theories” relating to the pandemic that global health organizations had flagged as suspicious. He added that Facebook would also ban merchants from running ads that might “exploit” the current situation.
Facebook went on to launch a “coronavirus information center” that provides users with updates on confirmed cases, directs them to the latest news coverage, and links to information and recommendations from official sources like the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO).
Beginning on March 23, Facebook Messenger partnered with developers to provide government health organizations with technology to respond to the pandemic, essentially giving health services the use of Messenger tools to share “timely and reliable information” about the outbreak.
Messenger also launched a Coronavirus Community Hub where users can peruse tips for recognizing misinformation and online scams.
Facebook continues to catalog its updates on the platform’s response to the pandemic in a post on its website.
A search for the coronavirus hashtag on Instagram re-routes to a message suggesting that users visit the CDC website before continuing to view posts with the hashtag.
The platform has also pledged to remove coronavirus-themed content from recommendations unless it was posted by a “credible health organization.”
Instagram has gone on to “downrank” content that has been rated false by third-party fact-checkers.
In early March, Google-owned YouTube demonetized videos discussing the novel coronavirus, in accordance with its advertising policy, which often bans ads on content about “controversial issues and sensitive events” (such as armed conflict or global health crises). The platform later revised the policy and began allowing ads for certain creators posting content related to the pandemic.
Days later, YouTube announced that it would begin promoting a row of “verified videos” on its homepage from various news outlets and local health authorities who post content to the platform.
YouTube updated its policy once again to allow all creators approved for monetization to post COVID-19 content with ads. However, according to the platform’s advertiser-friendly content guidelines, the content may not show “distressing footage” or pandemic-related pranks that promote putting oneself or others in danger.
All content containing medical misinformation, YouTube has stressed, will be demonetized.
When Twitter users search for “coronavirus” or related health terms, the platform directs them to national health organizations disseminating verified medical information, like the CDC.
In March, Twitter announced via an updated safety policy that the platform would prioritize banning content with “the highest potential of directly causing physical harm,” specifically content that could put people at risk of transmitting the illness. As a result, tweets contesting recommended safety measures, promoting debunked ‘treatments,’ or posing as an authoritative organization will be removed. The platform has, however, issued a disclaimer that removing all misinformation is not possible.
Additionally, Twitter announced that it would be working with “global health authorities” to continue to identify experts and verify their accounts.
The platform makes its continually updated policies available in a blog post.
TikTok, like the majority of social media platforms, redirects users searching for coronavirus content to credible medical information from health organizations.
A search for the coronavirus hashtag on the platform yields a “COVID-19” banner from the WHO that leads viewers to informational videos reviewing handwashing techniques, explaining modes of transmission, and listing tips for preventing the spread of the illness.
WhatsApp users report hoaxes flooding group chats, in the form of both text and voice messages. These messages, however, can’t be monitored or moderated the way other social media posts can because of the Facebook-owned app’s encryption.
Facebook has said, however, that it intends to delete spam accounts with the help of AI that can identify accounts sending automated content.
The platform also created a WhatsApp Coronavirus Information Hub in partnership with WHO, UNICEF, United Nations Development Programme (UNDP), and The International Fact-Checking Network at Poynter to update users around the world about the pandemic. The initiative, the company says, will help users “connect” while “stay[ing] up to date with the latest health information” and “share[ing] information responsibly.”
In late March, the platform launched a chatbot in partnership with the WHO. The feature is designed to answer commonly asked questions about the coronavirus while providing accurate and frequently-updated information, like tips to prevent contracting the virus, situation reports, and explanations debunking common virus ‘myths.’
Reddit has arguably received the most criticism for its response to the pandemic. While a search for “coronavirus” on Reddit yields a banner from the CDC, linking to the organization’s designated website for the pandemic, the platform has not updated its policies to ban medical misinformation.
Some COVID-19 related subreddits, specifically r/Wuhan_flu and r/coronavirusconspiracy, were “quarantined” for containing “hoax content” back in March, meaning that the threads are not searchable on the site and users are prompted to opt into viewing their content.
Beyond that, the site has not implemented updated policies as other online platforms have. However, certain subreddits, like r/medical_advice, will verify medical professionals, and the platform has promoted “ask-me-anything” (AMA) sessions with experts in the field.