Silicon Valley Writes a Playbook to Help Avert Ethical Disasters


Silicon Valley is having its Frankenstein moment. The monsters of today are the billion-dollar companies we’ve come to depend on for everything from search results to car rides; their creators, blindsided by what these platforms have become. Mark Zuckerberg hadn’t realized, back when he launched Facebook from his Harvard dorm room, that it would grow to become a home for algorithmic propaganda and filter bubbles. YouTube didn’t expect to become a conspiracy theorist’s highlight reel, and Twitter hadn’t anticipated the state-sponsored trolling or hate speech that would define its platform.

But should they have? A new guidebook shows tech companies that it’s possible to predict future changes in humans’ relationship with technology, and that they can tweak their products now so they’ll do less damage when those futures arrive.

The guide, called the Ethical OS, comes out of a partnership between the Institute for the Future, a Palo Alto-based think tank, and the Tech and Society Solutions Lab, a year-old initiative from the impact investment firm Omidyar Network. Both groups focus on the intersection of tech and society. Collectively, they imagine the Ethical OS as a bridge between the researchers who study tech’s growing impact on society and the companies that wield control.

“Here we are in this new era where there’s a whole set of unintended societal consequences that are emerging as tech becomes more ubiquitous, and yet, tech companies haven’t caught up with the direct link between the products they have and being able to get ahead of that,” says Paula Goldman, the head of the Tech and Society Solutions Lab, which led the project. “The impetus for the Ethical OS toolkit was exactly that: Here’s a tool that helps you think through these consequences and makes sure what you’re designing is good for the world and good for your longer-term bottom line.”

Future Shock

The three-part guide, available to download online, addresses social-impact harms ranging from disinformation to the dopamine economy. It functions as a sort of workbook, with checklists, thought experiments, and basic solutions to help product development teams, designers, founders, and investors grapple with the future impact of their products.

The first section outlines 14 near-future scenarios, based on contemporary anxieties in the tech world that could threaten companies down the line. What happens, for example, if a company like Facebook purchases a major bank and becomes a social credit provider? What happens if facial-recognition technology becomes a mainstream tool, spawning a new category of apps that integrate the tech into activities like dating and shopping? Teams are encouraged to talk through each scenario, connect it back to the platforms or products they’re developing, and discuss strategies to prepare for these possible futures.

Each of these scenarios came from contemporary “signals” identified by the Institute for the Future—the rise of “deep fakes,” tools for “predictive justice,” and growing concerns about technology addiction.

“We collect things like this that spark our imagination and then we look for patterns, relationships. Then we interview people who are making these technologies, and we start to develop our own theories about where the risks will emerge,” says Jane McGonigal, the director of game research at the Institute for the Future and the research lead for the Ethical OS. “The ethical dilemmas are around issues further out than just the next release or next growth cycle, so we felt helping companies develop the imagination and foresight to think a decade out would allow more ethical action today.”

Question Time

There’s also a checklist for mitigating disasters in eight “risk zones,” including machine ethics and algorithmic biases, data control and monetization, and the surveillance state. The guide prompts teams to identify the relevant risk zones, run through the checklist, and then begin to think about how to correct or mitigate those risks. For example: How could bad actors use your tech to subvert or attack the truth? Is the technology reinforcing or amplifying biases? How might tools be designed to advocate for time well spent? Do you have a policy in place for what happens to customer data if your company is bought, sold, or shut down?

“The checklist is probably the easiest one to envision in a daily stand-up. We even created a version for boards to have as a five-minute board discussion,” says Raina Kumra, the entrepreneur-in-residence at the Tech and Society Solutions Lab, who came up with the idea for the toolkit. “As a founder, when you’re doing your initial product meetings, you can add this checklist into that process at the end or in the middle.”

Finally, the guide includes a set of seven future-proofing strategies to help teams start taking more ethical action. These borrow from ethical safeguards in other industries—a Hippocratic oath for data workers, for example, or a bug bounty program that would reward people for flagging ethical issues or potential societal harm from a tech company.

Human Playbook

The guide has, so far, been piloted by nearly 20 companies, start-ups, and schools, which have used it either to stoke conversation about ethics more broadly or to guide specific product decisions. Techstars, which runs over 40 start-up accelerator programs across the country, has begun using the Ethical OS framework to decide which start-ups to invest in based on their ability to think about future issues. Those kinds of conversations, Kumra says, haven’t been the norm in tech. “When I was fundraising for my start-up, I talked to over a hundred VCs and many, many founders,” she says. “The conversation around ethics never came up once.”

For that reason, a guide like this is “welcome but overdue,” says Luke Stark, a researcher at Dartmouth who studies the intersection of behavior and technology. “[Academics have] been thinking about these problems for a while, so it’s exciting to see some of the ideas and general concerns potentially get in front of folks who are involved in design, development, and funding.”

Stark says the areas of concern identified in the Ethical OS are “absolutely spot on.” But because the Ethical OS is a guide meant for tech founders and investors, some of its solutions privilege business needs over societal ones. The future-looking scenarios assume that deep fakes and facial-recognition technologies will continue to grow unchecked, and that the tech industry will remain largely unregulated for the next decade. The guide suggests ethical solutions for companies that are “nice to have”—including ones that will help a business’s bottom line—rather than “need to have.”

“In a way, ethics itself is a very narrow framing,” says Stark. “It lends itself to these narrow interpretations of individual conduct and individual decision-making, as opposed to thinking about the structural questions.” He sees a guide like the Ethical OS as an excellent first step in a series of “increasingly consequential steps” among tech companies.

Goldman also sees the Ethical OS as a first step to get start-ups thinking about future implications. She calls the guide “scaffolding”—a framework on which to build deeper, longer, and more serious conversations. Other industries, like medicine, have formal procedures in place to address ethics; in tech, many companies use comparable guidebooks to address security, internationalization, accessibility, or user experience (how, say, someone navigates the first two screens of an app before signing up for an account). “If you’re in product development, you’re used to having to run those playbooks to launch something,” says Cody Simms, a partner at Techstars. “I think Ethical OS can serve as a similar type of guide.”

Whether this kind of future-proofing can become standard in product development teams remains to be seen. But Goldman and Kumra say the interest from tech companies has never been higher. Silicon Valley is just starting its reckoning, and is looking for tools to do better.

“Everyone wants to do better, but we heard feedback when we were speaking to VCs and tech co-founders that they didn’t know how. They didn’t know what to do,” says Kumra. “Nothing can change if you don’t have a simple set of tools to enable that change.”

A simple set of tools, then, could at least start the conversation—and make it harder for founders to use the standard dorm-room defense in the future.

