
Trump fuming at social media over Twitter fact check. How platforms handle misinformation differently



President Donald Trump threatened Wednesday to close down social media platforms that “silence conservatives,” a day after Twitter for the first time added a fact check to one of his posts. 

Trump posted two tweets alleging, without evidence, that expanded mail-in voting could not be “anything less than substantially fraudulent” and would lead to a “Rigged Election.” Twitter added a warning label at the bottom of the tweets and a link reading “Get the facts about mail-in ballots.” The link takes readers to a page with information on voting by mail and posts related to fact checks on Trump’s fraud claims.

Social media platforms’ handling of misleading posts has been under increasing scrutiny since the 2016 election, when the Russian government mounted a campaign to divide and misinform U.S. voters as part of its larger effort to influence the election. Officials and experts have warned that Russia and other foreign actors are trying to do the same thing as the 2020 election nears. 

Each company has responded to the problem in its own way. 

Twitter 

Trump’s social media platform of choice has taken some of the most aggressive action to combat misinformation. 

In February, the company announced it would begin labeling tweets that contain “synthetic and manipulated media.” In addition to labels and warnings, Twitter said it would reduce the visibility of tweets sharing altered photos or videos, and sometimes provide additional context and information. Tweets that shared media determined “likely to cause harm” would be subject to removal. 

In response to the coronavirus pandemic, Twitter said March 16 it was broadening its definition of harm to include “content that goes directly against guidance from authoritative sources of global and local public health information.” 

‘Manipulated media’: Twitter uses label for first time after Trump retweets edited video clip of Biden

And on May 11, the company declared it was adding more labels and warnings to posts related to COVID-19 in order “to provide additional explanations or clarifications in situations where the risks of harm associated with a Tweet are less severe but where people may still be confused or misled by the content.” 

Though Trump’s mail-in ballot tweets were not explicitly about the coronavirus, expanded vote-by-mail has been endorsed by Democratic and Republican governors who want to give their residents the opportunity to vote without risking the potential exposure to the virus that could come with in-person voting. 

Twitter spokeswoman Katie Rosborough told USA TODAY the company applied the label to Trump’s tweets because they “contain potentially misleading information about voting processes” and to “provide additional context around mail-in ballots.” 

“This decision is in line with the approach we shared earlier this month,” Rosborough said. 

But the move drew Trump’s ire. The president accused Twitter on Tuesday of “interfering in the 2020 Presidential Election” and “stifling FREE SPEECH.”

“Republicans feel that Social Media Platforms totally silence conservatives voices. We will strongly regulate, or close them down, before we can ever allow this to happen,” Trump tweeted Wednesday. Many Trump supporters have echoed the president’s accusation that the move confirms Twitter is biased against conservatives. 

But others have called the company’s decision a good first step toward countering false statements or accusations Trump has made on Twitter. Critics say it does not go far enough, pointing to the president’s recent posts making unfounded allegations of murder against cable news host Joe Scarborough, and have called for Trump to be suspended from the platform altogether. 

Trump and Scarborough: Widower of late Scarborough staffer asks Twitter to remove Trump tweets, Twitter says no

Facebook

Facebook has also instituted a number of measures aimed at “fighting the spread of false news.” The company has engaged third-party fact-checkers, sought to reduce the financial incentive for spammers to share misinformation and allowed users to report misleading content when they think they see it. 

Rather than remove misleading content, the company reduces the number of people who see it, even in the case of repeat offenders. Many conservative commentators, including controversial Trump supporters Diamond and Silk, have accused the site of disproportionately limiting the distribution of conservative content. 

Like Twitter, Facebook labels media that has been manipulated and removes posts it believes can do harm. And it has included misinformation about the coronavirus outbreak in its definition of harmful content. The platform informs users if they interact with harmful misinformation about the pandemic. 


But according to Facebook’s policy, “posts and ads from politicians are generally not subjected to fact-checking.”

If a politician shares content “that has been previously debunked on Facebook,” the company “will demote that content, display a warning and reject its inclusion in ads.” 

But if “a claim is made directly by a politician on their Page, in an ad or on their website, it is considered direct speech and ineligible for our third party fact checking program – even if the substance of that claim has been debunked elsewhere.” 

The company, which has been criticized for that relatively hands-off approach to political content, explained the stance was rooted in its concerns that “by limiting political speech we would leave people less informed about what their elected officials are saying and leave politicians less accountable for their words.” 

Instagram 

Instagram is owned by Facebook and primarily relies on the same fact-checking and labeling system Facebook uses to address misinformation.

If the fact-checkers determine something is false or partly false, that content is removed from the site’s hashtag and “Explore” pages and its visibility in people’s feeds is reduced. 

In response to the coronavirus, the platform recommends only accounts from credible health organizations when it comes to posts about the outbreak.

“We also remove false claims or conspiracy theories that have been flagged by leading global health organizations and local health authorities as having the potential to cause harm to people who believe them,” the platform says. 


YouTube 

YouTube, which is owned by Google, removes misinformation that is deemed harmful and violates its community guidelines. But the site admits it is still wrestling with how to “reduce the spread of content that comes close to – but doesn’t quite cross the line of – violating” those guidelines. 

In January 2019, the company announced it would “begin reducing recommendations of borderline content and content that could misinform users in harmful ways – such as videos promoting a phony miracle cure for a serious illness, claiming Earth is flat, or making blatantly false claims about historic events like 9/11.” 

In February, YouTube said that policy led to a “70% average drop in watch time of this content coming from non-subscribed recommendations in the U.S.” 

And in April, the company said it was expanding its use of “fact check information panels” to include the U.S. 


