The fear and tension that led to Sam Altman's ouster at OpenAI


Over the past year, Sam Altman led OpenAI to the adult table of the technology industry. Thanks to its hugely popular ChatGPT chatbot, the San Francisco startup was at the center of an artificial intelligence boom, and Altman, OpenAI’s CEO, had become one of the most recognizable people in tech.

But that success raised tensions inside the company. Ilya Sutskever, a respected AI researcher who co-founded OpenAI with Altman and nine other people, was increasingly worried that OpenAI’s technology could be dangerous and that Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Sutskever, a member of the company’s board of directors, also objected to what he saw as his diminished role in the company, according to two of the people.

That conflict between fast growth and AI safety came into focus Friday afternoon, when Altman was pushed out of his job by four of OpenAI’s six board members, led by Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders were saying the split was as significant as when Steve Jobs was forced out of Apple in 1985.

But on Saturday, in a head-spinning turn, Altman was said to be in discussions with OpenAI’s board about returning to the company.

The ouster Friday of Altman, 38, drew attention to a longtime rift in the AI community between people who believe AI is the biggest business opportunity in a generation and others who worry that moving too fast could be dangerous. And the vote to remove him showed how a philosophical movement devoted to the fear of AI had become an unavoidable part of tech culture.

Since ChatGPT was released almost a year ago, artificial intelligence has captured the public’s imagination, with hopes that it could be used for important work such as drug research or to help teach children. But some AI scientists and political leaders worry about its risks, such as jobs getting automated out of existence or autonomous warfare that grows beyond human control.

Fears that AI researchers were building something dangerous have been a fundamental part of OpenAI’s culture. Its founders believed that because they understood those risks, they were the right people to build the technology. OpenAI’s board has not offered a specific reason why it pushed out Altman, other than to say in a blog post that it did not believe he was communicating honestly with them. OpenAI employees were told Saturday morning that his removal had nothing to do with “malfeasance or anything related to our financial, business, safety or security/privacy practice,” according to a message viewed by The New York Times.

Greg Brockman, another co-founder and the company’s president, quit in protest Friday night. So did OpenAI’s director of research. By Saturday morning, the company was in chaos, according to a half dozen current and former employees, and its roughly 700 employees were struggling to understand why the board made its move.

Sutskever and Altman could not be reached for comment Saturday.

In recent weeks, Jakub Pachocki, who helped oversee GPT-4, the technology at the heart of ChatGPT, was promoted to director of research at the company. Previously positioned below Sutskever, he was elevated to a role alongside him, according to two people familiar with the matter.

Pachocki quit the company late Friday, the people said, soon after Brockman. Earlier in the day, OpenAI said Brockman had been removed as chair of the board and would report to the new interim CEO, Mira Murati. Other allies of Altman — including two senior researchers, Szymon Sidor and Alexander Madry — have also left the company.

Brockman said in a post on X, formerly known as Twitter, that even though he was the chair of the board, he was not part of the board meeting where Altman was ousted. That left Sutskever and three other board members: Adam D’Angelo, CEO of the question-and-answer site Quora; Tasha McCauley, an adjunct senior management scientist at Rand Corp.; and Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.

They could not be reached for comment Saturday.

McCauley and Toner have ties to the Rationalist and Effective Altruist movements, overlapping communities that are deeply concerned that AI could one day destroy humanity. Today’s AI technology cannot do that, but these communities believe the danger will arise as the technology grows increasingly powerful.

Sutskever was increasingly aligned with those beliefs. Born in the Soviet Union, he spent his formative years in Israel and immigrated to Canada as a teenager. As an undergraduate at the University of Toronto, he helped create a breakthrough in an AI technology called neural networks.

In 2015, Sutskever left a job at Google and helped found OpenAI alongside Altman, Brockman and Tesla CEO Elon Musk. They built the lab as a nonprofit, saying that unlike Google and other companies, it would not be driven by commercial incentives. They vowed to build what is called artificial general intelligence, or AGI, a machine that can do anything the brain can do.

Altman transformed OpenAI into a for-profit company in 2019 and negotiated a $1 billion investment from Microsoft. Such enormous sums of money are essential to building technologies such as GPT-4, which was released this year. Since its initial investment, Microsoft has put another $12 billion into the company.

But the company’s success appears to have only heightened concerns that something could go wrong with AI.

“It doesn’t seem at all implausible that we will have computers — data centers — that are much smarter than people,” Sutskever said on a podcast Nov. 2. “What would such AIs do? I don’t know.”


