Who will fight internet lies in Myanmar?


Organisations are demoting misinformation on Facebook in some Asian nations, but not in Myanmar, and that needs to change.

By ERIC S JOHNSON | FRONTIER

WITH MYANMAR due to vote in a general election in 2020, one of the hot topics in the aid community is how to improve the quality of news so that voters can make well-informed decisions. Providing more high-quality information (such as Frontier articles!) is an obvious way to do that, but reducing the amount of low-quality information would help, too.

The lowest-quality information is illegal. We depend on law enforcement agencies to punish those who incite violence or promote child pornography, and we make it possible for someone who has been maliciously defamed to seek legal redress.

Much of the information that Myanmar citizens receive comes through Facebook, so one way to define the next-lowest tier of information is that which runs afoul of the social media giant’s “community standards”. As a private company, Facebook is legally free to permit or disallow any content. Encouraging crime, explicitly threatening safety (such as hate speech), violating moral standards too egregiously (pornography), stealing intellectual property (piracy) or engaging in cyberbullying (harassment) will land you in “FB jail”.

Your infringing content will be taken down and you will lose “social credit”, meaning your subsequent posts will be further down in the feeds of your “friends” than they otherwise would be (a penalty known as “demotion”). Reoffend and your Facebook account will be locked or deleted, and your email address cannot be used with a new Facebook account. Facebook employs thousands of moderators to review content that users report as problematic, and is reported to have a 100-strong Burmese-language moderating team based in Singapore.

Misinformation is neither illegal nor infringing, but it is in some ways more damaging. It can lead people to make bad decisions, such as failing to have their children vaccinated, or casting a vote provoked by fear-mongering or appeals to tribe over fairness and the common good.

However, Facebook does not want to be in the position of deciding what legitimate speech to censor, because doing so would take it perilously close to the definition, under United States law, of being a publisher. As a publisher, Facebook would be liable for all content on its platform and would always err toward a conservative definition of what is acceptable. Such a scenario would be the end of any freedom on Facebook, and we, too, probably don’t want it to be censoring content. But while neither law enforcement agencies nor Facebook moderators are going to take down misinformation, its broad propagation has real-world consequences. So, in Lenin’s immortal words, “What is to be done?”

Artificial intelligence might eventually solve many problems. Imagine ticking a box in your Facebook settings to exclude false information from your feed, especially posts in which someone is trying to intentionally deceive you with disinformation. Facebook would be happy to automatically demote such content, if only it could be detected.

Although machine learning is getting better at identifying the most obvious offending content – say, child pornography – it is still not very good at identifying what is false, let alone discerning a malicious actor’s intent to mislead. This is especially so in “smaller” languages such as Burmese, for which there is not yet a corpus of content judged by humans that could be used to “train” a machine-learning algorithm on what to look for. It is worth adding that if AI with the desired capabilities existed, disinformers could use the same technology to learn to evade the filters. It would be AI versus AI.
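To make the “training corpus” point concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of supervised classifier such a corpus would enable. The posts and labels below are invented placeholders; a real system would need many thousands of human-judged Burmese posts, which is precisely what does not yet exist.

```python
# A minimal sketch of training a misinformation classifier,
# assuming a human-labelled corpus of posts exists.
# The texts and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-judged corpus: 1 = misinformation, 0 = not.
posts = [
    "Vaccines cause autism, doctors admit it",
    "Health ministry opens new vaccination clinics",
    "Secret plot to rig the election exposed",
    "Election commission publishes polling station list",
]
labels = [1, 0, 1, 0]

# Character n-grams sidestep the lack of mature word segmentation
# for a language like Burmese, one of the practical hurdles here.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(posts, labels)

# Score a new post; a platform could demote anything whose
# misinformation probability exceeds some threshold.
print(model.predict_proba(["Shocking election fraud revealed"])[0][1])
```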

Facebook’s answer to misinformation is to “outsource” (just as responsibility for hiring and managing its Burmese-language content moderators is outsourced to Accenture).

Facebook has partnered with the International Fact-Checking Network, a project of the Poynter Institute, a journalism school and think tank in the US. The IFCN has become a sort of association of fact-checkers, a club to which aspiring members must apply and then be vetted by local independent sources. Facebook says to anyone who meets IFCN’s standards: If you’d like to fact-check viral posts that our algorithms have flagged as dubious, we’d be happy to reimburse you for doing that, and anything you tag as false will be demoted in everyone’s feeds.

Since nothing is being removed from Facebook, this demotion is not the same degree of censorship as “taking something down”; it is merely tweaking the variables provided to the recommendation engine that already drives how the content of your feed is ranked.

More than 50 fact-checking organisations have accepted Facebook’s offer and are happily looking into fact-checkable content in their own language and national context. The radical transparency demanded by IFCN membership means that most “FB third-party fact-checkers” are non-government organisations. Media outlets might hesitate to join the IFCN, fearing a conflict of interest when facing misinformation that originated with an advertising or subscription client, such as a government. In the run-up to the Indian elections, seven IFCN members (some NGOs, some media) were demoting Facebook content. In the Philippines, three IFCN members demote content. Pakistan’s lone IFCN member tags at least a post a day as demonstrably false.

Another system, ClaimReview, uses structured web markup to feed fact-checkers’ verdicts on misinformation into the ranking of Google News search results.
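For the curious, ClaimReview is a schema.org vocabulary: a fact-checker embeds a small block of structured data in its review page, which search engine crawlers then read. Below is a rough sketch of such a record, expressed as a Python dict for readability; the organisation name, URL, dates and claim are all hypothetical.

```python
import json

# A hypothetical ClaimReview record, mirroring the JSON-LD markup a
# fact-checker would embed in the HTML of its review page. All names,
# URLs and dates here are invented for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factcheck.example.org/reviews/123",  # hypothetical
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "datePublished": "2019-06-01",
    "claimReviewed": "Drinking bleach cures malaria",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Organization", "name": "Viral Facebook page"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "1",
        "bestRating": "5",
        "worstRating": "1",
        "alternateName": "False",  # the human-readable verdict
    },
}

# Serialised as JSON-LD, this is what crawlers actually see.
print(json.dumps(claim_review, indent=2))
```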

However, no one is demoting Myanmar misinformation on Facebook or Google News.

Several of Myanmar’s more adventurous NGOs have been experimenting with fact-checking projects, such as “Think Before You Trust” and “Real or Not”. The Democratic Voice of Burma elaborates on these efforts in its TV programme “MIL Kyi”. All are laudable efforts, but unfortunately they probably reach far fewer people than the misinformation they seek to correct – and perhaps never reach those who were misled in the first place.

It’s tempting to think that instead of playing whack-a-mole with misinformation, we should address the cause. But disinformers don’t generally want to be found. The team leading the Filipino online news site Rappler has had some luck identifying families of Facebook accounts in the Philippines that share enough characteristics to suggest coordination, prompting Facebook’s fraud team to take action against them for displaying “coordinated inauthentic behaviour”. But without access to Facebook’s back end, hunting disinformers is hard; and once detected, they become better at obscuring their trails.

The first part of the name of DVB’s TV programme is also an acronym for “media and information literacy”. Hopefully, citizens will eventually be inoculated against fake news. However, it might take generational turnover for everyone to acquire the right degree of scepticism towards information sources. In the US, a study found that those aged over 65 forward misinformation seven times more often than those aged under 20.

At this point, there’s no substitute for human intelligence when it comes to identifying misinformation. Myanmar needs an IFCN member demoting misinformation on Facebook.




