Algorithms already rule our world. Time to make them transparent and fair


This is the story of a 19-year-old girl named Tay. More accurately, it is the story of Tay.ai, an artificial intelligence developed by Microsoft in 2016 and masked behind an innocent young girl’s face. Tay was programmed to respond to tweets while learning from the public’s tweets and comments: an autonomous, human-trained bot with no ethical controls. Something went terribly wrong. Within 16 hours of its birth, Tay had turned into a racist and misanthropic monster, and Microsoft withdrew it within the first day of its launch. The bot was tweaked and, about a week later, briefly reappeared online, where it again learned from users and began posting drug-related tweets. Its dark side had been rekindled. Soon it was removed from public view and its account was made private.

We live in a world run by AI, governed by algorithms and designed in code. Nearly 90% of the world relies on algorithms daily, for tasks as simple as using a phone or ordering a pizza and as complex as online trading or multiplayer games. Yet barely 1 in 500 people understands even the basics of coding an algorithm.

Today, we are blinded by the convenience that AI provides as we walk into the darkness of outsourcing key decisions to algorithms: the price of your next Uber cab, the route Google Maps guides you along, the time you spend watching videos on YouTube, even the friends Facebook suggests you make. AI can govern where and how you go, what you buy, who your friends are and, alarmingly, even who your enemies are.

Recently, a Parliamentary Committee in India questioned cab aggregators on their pricing: does the algorithm use battery level, phone make and gender to determine the price of a cab? We still await the answer, at least in the public domain. Amazon’s algorithm is trained to pick out the best-selling goods on its marketplace and then recommend whether Amazon should make those products itself under a new brand label and promote them. Alexa, the AI digital assistant that is supposed to make our lives easier, is now being questioned for eavesdropping on all conversations, since it is an “always-on” device.

Even if the intentions of the code creators are not mala fide, a small error in an algorithm can lead to massive problems. Way back in 2014, Amazon set up an engineering team in Edinburgh, Scotland, to build an AI to help hire people. The team developed about 500 computer models that picked around 50,000 key terms from the resumes of previously selected candidates. This AI would crawl the web and recommend candidates for a particular job. A year later, the team found that the AI did not favor women candidates. The AI had been trained to select applicants based on the resumes submitted to the company over the previous decade, and since the tech industry is male dominated, the system taught itself that male candidates were preferable. It penalized resumes that contained the word “women’s”, as in “women’s volleyball team captain”.
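To see how such a bias takes hold, consider a deliberately simplified sketch in Python. This is not Amazon’s system; the resumes, words and scoring rule below are invented purely for illustration. A screener that learns word weights from historically skewed hiring decisions ends up treating “women’s” as a negative signal:

```python
# Toy illustration (hypothetical data): a screener learns word weights
# from past hiring decisions that were skewed towards male candidates.
from collections import Counter

past_resumes = [
    ("software engineer java men's cricket team captain", 1),   # 1 = hired
    ("software engineer python chess club", 1),
    ("software engineer c++ men's football team", 1),
    ("software engineer java women's volleyball team captain", 0),  # 0 = rejected
    ("software engineer python women's coding society", 0),
]

hired_words, rejected_words = Counter(), Counter()
for text, hired in past_resumes:
    (hired_words if hired else rejected_words).update(text.split())

def word_score(word):
    # A word seen mostly in rejected resumes gets a negative weight.
    return hired_words[word] - rejected_words[word]

def score_resume(text):
    return sum(word_score(w) for w in text.split())

print(score_resume("software engineer java men's cricket team captain"))     # higher score
print(score_resume("software engineer java women's volleyball team captain"))  # lower score
```

On this toy history, the resume mentioning “men’s” scores far higher than the one mentioning “women’s”, not because anyone wrote such a rule, but because the bias was baked into the training data.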

Imagine a computer controlling a medical robot originally programmed to treat cancer. That sounds benevolent. But the slightest mis-programming of its boundaries could lead it to conclude that the best way to obliterate cancer is to eliminate the humans who are genetically prone to the disease. An ordinary human would never make this decision; we carry the gift of ethics, strengthened over tens of generations and ingrained in us. A computer’s brain, on the other hand, is fully open to whatever it is programmed to do, whether deliberately or surreptitiously.

If such inadvertent algorithmic flaws can have such serious consequences, one can only imagine the sphere of impact of willful AI chicanery. Organized, large-scale cyberattacks by regimes such as China and North Korea are well known. Usually, democratic nations with a relatively unaudited internet are softer targets for cyberattacks. In 2017, Chinese Army personnel allegedly attacked the US financial giant Equifax and stole the financial data of 145 million Americans, more than half of the country’s adult population. India itself is a target for cyberattacks, including ransomware attacks, in which malicious code enters an organization’s network and locks up its data and processing until a ransom is paid to release them. India stood second globally in terms of the ransom paid to such attackers. In 2020 alone, 74% of Indian organizations suffered such an attack, and together they paid between $1 million and $2.5 million to the hackers behind successful attacks.

The risk that algorithms, which have eased our lives, can also turn into something dark and ugly resonates even with technology stalwarts. Elon Musk recently warned, “… there should be some regulatory oversight… just to make sure that we don’t do something very foolish. I mean with artificial intelligence, we’re summoning the demon.” Others have been less vocal but better prepared. Author James Barrat has described how many highly placed people in AI have built retreats, in effect algorithm-proof bunkers, to which they could flee if it all hits the fan.

We must realize that algorithms can make colossal mistakes, and that without oversight such errors can be very costly. We must also realize that algorithms are dictated by the intentions of those who write them, and that the virtual world can cause serious damage in the real one.

This brings us to the key question – should we have stronger regulation over algorithms?

The shadow of election manipulation by outfits such as Cambridge Analytica still looms over the world. It is time that duly elected governments, answerable to the people, take on the responsibility of protecting citizens from becoming victims of any malicious or erroneous algorithm.

Europe recently implemented strong laws on data privacy. The United States of America is now loudly debating regulating the power of social media to shape opinion. India is deliberating on a regulatory framework in its first serious attempt to protect its citizens from data manipulation, in the form of the Personal Data Protection Bill, 2019, which is currently under the scrutiny of a Parliamentary Committee. This is a golden opportunity to bring transparency and fairness to AI as an extension of protecting people in a digital world.

Time for our right to a fair algorithm.

(The writer is an IIM Ahmedabad graduate and was the Advisor for Policy and Technology to A.P.J. Abdul Kalam, the 11th President of India. He co-authored the books Target 3 Billion and Advantage India with Kalam. He is the CEO of Kalam Centre)




