
What an internet echo chamber is, and why you should get out of it




These days, our experience of cyberspace is controlled by algorithms, which are scripted to curate the content we consume based on our past behavior. But this digital “preaching to the choir” isn’t always a good thing; it has helped spread chaos and has cost lives in the process.

Dominic Ligot | Sep 21 2019

Whenever you get shopping recommendations on Lazada, suggested friends on Facebook, or recommended movies on Netflix, you are seeing the results of algorithms at work. Going by seemingly innocuous jargon such as Collaborative Filtering, Association Rules, PageRank, or Market Basket Analysis, these algorithms power recommendation engines that quietly observe and guide our choices on online platforms. They operate on similarity: they learn from past data and suggest to you the choices made by people like you. The use of similarity stems from a marketing concept called affinity: people’s preferences tend to mimic those of others who share their interests and behavior.
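To make that concrete, here is a minimal sketch in Python of the kind of similarity computation behind item-based collaborative filtering. The shoppers, items, and ratings below are invented for illustration; real recommendation engines work at vastly larger scale with far richer signals.

```python
# A toy item-based collaborative filter: recommend items whose
# rating patterns are most similar to something the user already liked.
# All names and ratings here are invented for illustration.
import numpy as np

# Rows = users, columns = items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],   # user A
    [4, 5, 1, 0],   # user B
    [1, 0, 5, 4],   # user C
    [0, 1, 4, 5],   # user D
], dtype=float)
items = ["phone case", "earbuds", "cookbook", "apron"]

def cosine(u, v):
    """Cosine similarity: 1.0 means identical rating patterns."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def most_similar(item_idx):
    """Find the item whose column of ratings best matches item_idx's."""
    scores = [
        (cosine(ratings[:, item_idx], ratings[:, j]), items[j])
        for j in range(len(items)) if j != item_idx
    ]
    return max(scores)

# Someone who bought "phone case" gets nudged toward whatever
# people with similar tastes also rated highly.
score, suggestion = most_similar(items.index("phone case"))
print(f"Because you liked 'phone case': try '{suggestion}' ({score:.2f})")
```

The affinity logic never needs to understand the products themselves, only which rating patterns move together, which is why the same technique scales from four toy items to millions.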


What works for shopping also works for information. On search results and news feeds, the order and sources of the information you see will depend on your previous behavior. Platforms such as Google, Facebook, and Twitter prioritize as “relevant” anything similar to what you’ve already liked and shared online.

While this curation may be desirable for marketing, it is a bias that also skews the information we find on the internet.
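A rough sketch of that bias, built on assumptions of my own (a handful of tagged posts and a user history kept as topic counts), shows how ranking by “relevance” quietly demotes the unfamiliar:

```python
# A toy feed ranker: score each post by its overlap with the topics
# a user has engaged with before, then sort the feed by that score.
# Topics, posts, and counts are invented for illustration.
from collections import Counter

# Topics this user has liked or shared in the past.
history = Counter({"politics": 8, "basketball": 3, "cooking": 1})

feed = [
    {"title": "Senate hearing recap",      "topics": ["politics"]},
    {"title": "New adobo recipe",          "topics": ["cooking"]},
    {"title": "Finals game highlights",    "topics": ["basketball"]},
    {"title": "Opposition rally coverage", "topics": ["politics"]},
    {"title": "Science fair winners",      "topics": ["science"]},
]

def relevance(post):
    """'Relevant' here just means: similar to what you engaged with before."""
    return sum(history[t] for t in post["topics"])

# The post on a topic the user never touched sinks to the bottom,
# regardless of how important it might be.
for post in sorted(feed, key=relevance, reverse=True):
    print(f"{relevance(post):>2}  {post['title']}")
```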

Microsoft pulled the Tay bot from Twitter soon after launch, following a slew of offensive tweets. Photograph from @TayandYou on Twitter

In 2016, Microsoft launched an experimental chatbot called “Tay” on Twitter, openly inviting the public to chat with it so it could learn from the interactions. Within 24 hours Tay was tweeting racist and sexist epithets and endorsing genocide, forcing Microsoft to take it down. The bot wasn’t malfunctioning; like the algorithms used in marketing, it simply did what it was designed to do: mimic and reciprocate the explicit sentiments it encountered online. Like Tay, algorithms on search engines and social networks are ensuring that we only see what we like to see and hear what we tend to say.


Closer to home

Last month, an altered video of Senator Leila de Lima made the news. Journalists were surprised to learn it was not new: it had first been posted as early as 2016. It had evaded widespread attention for years because it appeared to circulate only within a curated network of Facebook pages and accounts. The only people who saw the video were the ones likely to share it. With every click, like, and share, we are effectively teaching the algorithms to slowly trap us in “echo chambers” of our own design.
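A crude simulation of that feedback loop, again with invented numbers (a feed that boosts whatever gets clicked, and a user who clicks only the familiar topic), shows how quickly the mix narrows:

```python
# A crude echo-chamber feedback loop: the feed shows topics in
# proportion to accumulated weights, and the user clicks the familiar
# topic whenever it appears, which boosts its weight further.
# Starting weights and the boost size are invented assumptions.
import random

random.seed(42)
weights = {"familiar": 1.0, "opposing": 1.0, "neutral": 1.0}

for n in range(1, 201):
    topics = list(weights)
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    if shown == "familiar":          # the user engages only with the familiar...
        weights["familiar"] += 0.5   # ...so the feed learns to show more of it
    if n % 50 == 0:
        total = sum(weights.values())
        share = {t: f"{w / total:.0%}" for t, w in weights.items()}
        print(f"after {n} posts: {share}")
```

Even from an even three-way split, the familiar topic crowds out nearly everything else within a few hundred posts.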

In a recent interview, software developer Frederick Brennan, founder of the forum site 8chan, said that most Filipinos do not see the “real” internet. He was referring to Facebook, which for most Filipinos is the primary point of access to the internet. Bundled into most mobile internet plans, and with a free access plan available to all mobile customers in the Philippines, Facebook is where many Filipinos browse to avoid additional data charges.

Last month, a fake video of Sen. Leila de Lima, which had been circulating in echo chambers since 2016, broke into the news. Screengrab from @artsamaniego on Twitter

Trapped within Facebook’s “walled garden,” users tend to depend on social media for news and facts, avoiding official news channels or even curated information sites such as Snopes or Wikipedia that could help verify what is shared online. In a recent study, researchers looking at ten years of information shared on social networks concluded that falsehoods tend to spread faster than truths online, partly because of the emotions false information triggers: fear, disgust, and surprise.

When infused with disinformation and fake news, digital walled gardens and echo chambers can become breeding grounds for toxicity and hate.

The messaging app WhatsApp is bundled with many mobile internet plans in India. In 2018, rumors of child kidnappers circulated on Indian WhatsApp groups, stoking public outrage. When a group of men on holiday visited a small village where a lynch mob had formed, the mob mistakenly identified them as the kidnappers and beat them brutally, killing at least one.

That same year, The New York Times ran an exposé on the Myanmar military’s campaign of genocide and ethnic cleansing waged through rumors circulated on Facebook. The toxic content evaded detection for years because it circulated only within the network the military targeted. Like Filipinos, people in Myanmar use Facebook as their primary portal to the internet.

Frederick Brennan resigned from 8chan after its members and content were linked to mass shootings in Christchurch, New Zealand, and El Paso, Texas. Prior to its shutdown, 8chan had already been flagged by journalists as an echo chamber for white supremacy and racism.


How to get out

If these stories disturb you, there is a silver lining. Research shows that when misinformation is called out and people are exposed to critical thinking, the overall effect of false information is reduced. This is good news: it means that, despite walled gardens and algorithmic echo chambers, we can still keep control over our own thoughts.

The best defense is a good offense:

  • First, mind your digital trail: regularly clear your browser history, or browse “incognito” when you can, especially on a device you do not own. Algorithms only work if you feed them data, so share only the data you are comfortable sharing.
  • Next, unplug and get out of your echo chambers: meet new people and learn new perspectives. Spend more time outside social networks.
  • Remain critical of what you read online, and learn to question opinions politely. Check the sources of anything before accepting it as fact. Be skeptical of propaganda online.
  • Think before you click, care before you share. Like something because you really like it, not because everyone else did.

Simple things, but all seemingly hard to do.

The stories of Tay, de Lima, India, Myanmar, and 8chan surface today’s ironies: search engines and tools meant to inform can limit access to truth; social networks originally meant for connectivity and happiness can become avenues for hate and disinformation; and algorithms built on our similarities can tear society apart through our differences.

We ended the Third Industrial Revolution in awe of the power of the Internet.

We must now begin the Fourth surviving in spite of it.


Dominic Ligot is a data analyst, researcher, software developer, entrepreneur, and technologist. He is the founder of CirroLytix, a research company focusing on machine learning, data ethics, and social impact. Dominic was part of a three-person team that recently won the 2019 Grand Prize at Break The Fake, an international hackathon against fake news and disinformation.


References:

1. How do recommendation engines work? https://marutitech.com/recommendation-engine-benefits

2. Machine learning, collaborative filtering, ranking https://jhui.github.io/2017/01/15/Machine-learning-recommendation-and-ranking/

3. Why Microsoft’s Tay AI went wrong https://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/

4. Tay gets a crash course in racism on Twitter https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter

5. Fact or Fake: Sen De Lima’s Video https://www.youtube.com/watch?v=GxzFXvQdhbA

6. 8chan founder: Filipinos not accessing the real internet https://www.rappler.com/technology/238478-8chan-founder-filipinos-not-accessing-the-real-internet

7. Science: Lies spread faster than truths online https://science.sciencemag.org/content/359/6380/1146

8. WhatsApp turned Indian village into lynch mob https://www.bbc.com/news/world-asia-india-44856910

9. A genocide incited on Facebook https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html

10. 8chan, nexus of radicalization https://www.vox.com/recode/2019/5/3/18527214/8chan-walmart-el-paso-shooting-cloudflare-white-nationalism

11. PLOS ONE: Neutralizing misinformation through inoculation https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0175799

12. How to be a human lie detector of fake news https://www.kctv5.com/news/us_world_news/how-to-be-a-human-lie-detector-of-fake-news/article_2e02a9bc-3814-578e-8685-674b319272af.html


