Consumer trust in Big Tech is at an all-time low. Governments, regulators, and courts all over the world have introduced legislation and issued decisions to combat the growing misuse of the internet. Increasingly, policymakers are weighing in on an “impossible triangle” when deliberating on platform regulation – where it is not feasible to equally serve the interests of national security, privacy, and economic growth. Online radicalisation, hate speech, cybercrime, banking fraud, and disinformation have led not only to a tech-lash amongst civil society and governments, but also to a reckoning from insiders of the technology industry themselves.
Companies are now openly calling for more regulation of political advertising and harmful content online, even setting up external advisory boards to guide them on some of these areas, including the scope for bias in the design of AI technology. The panel at Raisina 2019 primarily addressed this question: How must corporations respond to the growing trust deficit between consumers and big tech platforms?
Across the board, the experts agreed that there is no one-stop solution when it comes to companies building and maintaining trust. Paula Kift noted that “trust” itself could be difficult to define, referring to how the notion of privacy is understood differently across the globe. Definitions are crucial since they ultimately determine how a nascent regime regulating emerging technologies can be set up. Kift further noted that the impact of existing frameworks must be deeply understood by all actors before enacting any new ones – specifically referring to the EU’s General Data Protection Regulation.
Even as stakeholders acknowledge the range of challenges that the internet brings about, they identified the need to devise contextual and localised solutions to fight its misuse. Rema Rajeshwari shared anecdotal evidence of battling the spread of fake news in Telangana in early 2018. Mob lynchings reportedly arose from rumours of child abduction circulated on a popular encrypted messaging platform, calling for law enforcement, government, and community leaders to address the problem locally. Rajeshwari stressed the urgency of tackling the digital literacy gap in the country by engaging all stakeholders to design immediate and contextual campaigns to educate users. She recounted her team’s on-ground experience, where they worked closely with local public representatives to educate residents by going door-to-door and organising extensive workshops to help law enforcement agents identify fake news online.
Such contextual, community-driven solutions, however, might be difficult to come by as long as decision-making is concentrated in companies located in Silicon Valley. In a world where technology is spreading rapidly across countries with low literacy, first-generation internet users, and differing local realities of government-citizen engagement, technology firms driven by Western realities are struggling to evolve. Scott Carpenter reiterated that placing the blame on technology is futile; what would be more fruitful is leveraging technology to address online threats at scale. Carpenter outlined a model where experts who understand the nature of the threat, including NGOs and investigative journalists, collaborate with technologists to address hate speech, fake news, and vulnerabilities online.
Defining, building, and maintaining trust online will be crucial in the coming months and years as emerging technologies play a larger role in democratic processes and the delivery of essential services. Devising, incubating, and curating best practices will be necessary as stakeholders from civil society to governments innovate in regulating the digital realm and mitigating its ill effects.
This essay originally appeared in the Raisina Dialogue Conference Report 2019.