
Facebook’s crisis management algorithm runs on outrage


By Sarah Frier

San Francisco: Last year, a Facebook user in Sri Lanka posted an angry message to the social network. “Kill all the Muslim babies without sparing even an infant,” the person wrote in Sinhala, the language of the country’s Buddhist majority. “F—ing dogs!” The post went up early in 2018, in white text and on one of the playful pink and purple backgrounds that Facebook began offering in 2016 to encourage its users to share more with one another. The sentiment about killing Muslims got 30 likes before someone else found it troubling enough to click the “give feedback” button instead. The whistleblower selected the option for “hate speech”, one of nine possible categories for objectionable content on Facebook.

For years, non-profits in Sri Lanka have warned that Facebook posts are playing a role in escalating ethnic tensions between Sinhalese Buddhists and Tamil Muslims, but the company had ignored them. It took six days for Facebook to respond to the hate speech report. “Thanks for the feedback,” the company told the whistleblower, who posted the response to Twitter. The content, Facebook continued, “doesn’t go against one of our specific Community Standards.”

The post stayed online, part of a wave of calls for violence against Muslims that flooded the network last year. In late February 2018, a mob attacked a Muslim restaurant owner in Ampara, a small town in eastern Sri Lanka, after a false rumour spread that he had put sterilisation pills in customers' food. He survived, but there were more riots in the mid-size city of Kandy the following week, resulting in two deaths before the government stepped in, taking measures that included ordering Facebook offline for three days.

The shutdown got the company’s attention. It appointed Jessica Leinwand, a lawyer who served in the Obama White House, to figure out what had gone wrong. Her conclusion: Facebook needed to rethink its permissive attitude toward misinformation. Before the riots in Sri Lanka, the company had tolerated fake news and misinformation as a matter of policy. “There are real concerns with a private company determining truth or falsity,” Leinwand says, summing up the thinking.

But as she began looking into what had happened in Sri Lanka, Leinwand realised the policy needed a caveat. Starting that summer, Facebook would remove certain posts in some high-risk countries, including Sri Lanka, but only if they were reported by local non-profits and would lead to “imminent violence.” When Facebook saw a similar string of sterilisation rumours in June, the new process seemed to work. That, says Leinwand, was “personally gratifying”— a sign that Facebook was capable of policing its platform.

But is it? It’s been almost exactly a year since news broke that Facebook had allowed the personal data of tens of millions of users to be shared with Cambridge Analytica, a consulting company affiliated with Donald Trump’s 2016 presidential campaign. Privacy breaches are hardly as serious as ethnic violence, but the ordeal did mark a palpable shift in public awareness about Facebook’s immense influence. Plus, it followed a familiar pattern: Facebook knew about the slip-up, ignored it for years, and when exposed, tried to downplay it with a handy phrase that chief executive officer Mark Zuckerberg repeated ad nauseam in his April congressional hearings: “We are taking a broader view of our responsibility.” He struck a similar note with a 3,000-word blog post in early March that promised the company would focus on private communications, attempting to solve Facebook’s trust problem while acknowledging that the company’s apps still contain “terrible things like child exploitation, terrorism and extortion.”

If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine. The company’s business depends on sifting through that content and showing users posts they’re apt to like, which has often had the side effect of amplifying fake news and extremism.

Unfortunately, the reporting system Facebook’s executives describe, which relies on low-wage human moderators and software, remains slow and under-resourced. The company could afford to pay its moderators more, hire more of them, or impose far stricter rules on what users can post, but any of those steps would cut into its revenue and profits. Instead, it has adopted a reactive posture, writing rules only after problems have appeared. The rules are helping, but critics say Facebook needs to be much more proactive.

Today, Facebook is governed by a 27-page document called Community Standards. Posted publicly for the first time in 2018, the rules specify, for instance, that instructions for making explosives aren’t allowed unless they’re for scientific or educational purposes. Images of “visible anuses” and “fully nude closeups of buttocks,” likewise, are forbidden, unless they’re superimposed onto a public figure, in which case they’re permitted as commentary.

The standards can seem comically absurd in their specificity. But, Facebook executives say, they’re an earnest effort to systematically address the worst of the site in a way that’s scalable. That means rules general enough to apply anywhere in the world and clear enough that a low-paid worker in one of Facebook’s content-scanning hubs in the Philippines, Ireland and elsewhere can decide within seconds what to do with a flagged post.

The working conditions for the 15,000 employees and contractors who do this work for Facebook have attracted controversy. In February, The Verge reported that US moderators make only $28,800 a year while being asked regularly to view images and videos containing graphic violence, porn and hate speech. Some suffer from post-traumatic stress disorder. Facebook responded that it’s auditing its contract-work providers and will keep in closer contact with them to uphold higher standards and pay a living wage.

Zuckerberg has said artificial intelligence algorithms, which the company already uses to identify nudity and terrorist content, will eventually handle most of this sorting. But at the moment, even the most sophisticated AI software struggles in categories in which context matters. “Hate speech is one of those areas,” says Monika Bickert, Facebook’s head of global policy management, in a June 2018 interview at company headquarters. “So are bullying and harassment.”

On the day of the interview, Bickert was managing Facebook’s response to the mass shooting the day before at the Capital Gazette in Annapolis. While the massacre was unfolding, Bickert instructed content reviewers to look out for posts praising the gunman and to block opportunists creating fake profiles in the names of the shooter or victims. Later, her team took down the shooter’s profile and turned victims’ pages into what the company calls “memorialised accounts,” which are identical to regular Facebook pages but place the word “remembering” above the deceased person’s name.




