
Sam Altman and OpenAI Are Victims of Their Own Hype


The reaction to his firing says a lot about the anxiety around artificial intelligence and whether it can actually make a profit.
Last Friday, seemingly out of nowhere, the board of OpenAI announced that it had fired its celebrity CEO, Sam Altman, for being “not consistently candid in his communications,” offering no further detail. Things devolved from there. The leading AI shop in the world may not effectively exist soon, or may get absorbed into Microsoft, or some third, weirder outcome yet to be generated. To the tech world, it’s a “seismic” story — like when Steve Jobs was fired from Apple, except that maybe the fate of humanity hangs in the balance.
There are plenty of theories about what happened here, some more credible than others. There are reports that Altman was attempting to raise funds for a new venture. There’s ample evidence that OpenAI’s nonprofit board, whose mission is to develop safe AI for the benefit of “humanity, not OpenAI investors,” was concerned about the direction Altman was taking the company. Within the company, according to The Atlantic, there has been a deepening rift since the release of ChatGPT between a true-believer faction represented by chief scientist Ilya Sutskever, who directed the coup against Altman, and a larger faction led by Altman that wanted to pursue growth and didn’t seem particularly concerned about, for example, destroying the world.
These are different takes on what happened. But they share one trait that has made the conversation around OpenAI, and around AI in general, feel profoundly disconnected from reality and, frankly, a little bit insane: It’s all speculative. They’re bets. They’re theories based on premises that have been rendered invisible in the fog of a genuinely dramatic year for AI development, and they’re being brought to bear on the present with bizarre results.
There are dozens of nuanced positions along the spectrum of AI’s risk and potential, but among people in the industry, the biggest camps can be described roughly as follows: AI is going to be huge, therefore we should develop it as quickly and fully as possible to realize a glorious future; AI is going to be huge, therefore we should be very careful so as not to realize a horrifying future; AI is going to be huge, therefore we need to invest so we can make lots of money and beat everyone else who is trying, and the rest will take care of itself.
They’re concerned with what might happen, what should happen, what shouldn’t happen, and what various parties need to happen. Despite being articulated in terms of disagreement, they have a lot in common — these are people arguing about different versions of what they believe to be an inevitable future in which the sort of work OpenAI is doing becomes, in one way or another, the most significant in the world. If you’re wondering why people are treating OpenAI’s slapstick self-injury like the biggest story on the planet, it’s because lots of people close to it believe, or need to believe, that it is.
The novel, sci-fi-adjacent speculative topics in the AI discourse get the most attention for being sort of out there: the definition and probability of “artificial general intelligence,” which OpenAI describes as “a highly autonomous system that outperforms humans at most economically valuable work”; the notion and prospect of a superintelligence that might subjugate humans; thought experiments about misaligned AIs that raze the planet to plant more strawberries or destroy civilization to maximize paper-clip production; the jumble of recently popularized terms and acronyms and initialisms — x-risk, effective altruism, e/acc, decel, alignment, doomers — that suggest the arrival not just of a new technology but of an intellectual moment rising to meet it.
