More than three days after OpenAI was thrown into chaos by Sam Altman’s sudden firing from his post as CEO, one big question remains unanswered: Why?
Altman was removed by OpenAI’s nonprofit board through an unconventional governance structure that, as one of the company’s cofounders, he helped create. It gave a small group of individuals wholly independent of the ChatGPT maker’s core operations the power to dismiss its leadership, in the name of ensuring humanity-first oversight of its AI technology.
The board’s brief and somewhat cryptic statement announcing Altman’s departure said the directors had “concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” CTO Mira Murati was appointed interim CEO in his place. Greg Brockman, like Altman an OpenAI cofounder, was removed as chair of the board and quit the company in solidarity with Altman several hours later.
There have been many twists and turns since Friday, with Altman making a failed attempt to return as CEO, the board replacing Murati as interim CEO with Twitch cofounder Emmett Shear, Microsoft announcing it would hire Altman and Brockman, and almost every OpenAI employee threatening to quit unless Altman returned.
None of those developments has shed much light on what Altman did or did not do that triggered the board to eject him. An OpenAI staff member speaking on condition of anonymity said on Monday that the board had communicated virtually nothing about its thinking throughout the crisis.
Along the roller-coaster ride of the past few days, several possible reasons for Altman’s removal have been seemingly eliminated. In a memo to staff sent over the weekend, OpenAI’s chief operating officer, Brad Lightcap, said that the board’s decision “was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.”
That appeared to rule out the possibility that Altman had been felled by a conventional corporate scandal involving duplicity or rule-breaking related to financial or other workplace policies. It helped fuel a hypothesis, one that gained ground in some corners of the AI community over the weekend, that OpenAI cofounder and chief scientist Ilya Sutskever and his fellow board members had instead acted out of fear that OpenAI was taking risks by developing its technology too hastily.
OpenAI’s odd governance structure was designed to give its board the power to rein in its for-profit arm. The directors’ primary fiduciary duty is to the company’s founding mission: “To ensure that artificial general intelligence benefits all of humanity.” Some following the drama saw hints in recent interviews by Sutskever about OpenAI’s research that he might have been anticipating a breakthrough that raised safety concerns. The New York Times reported that unnamed sources said Sutskever had become more concerned that OpenAI’s technology could be dangerous and felt Altman should be more cautious.
Yet on Monday those theories, too, appeared to be put to rest. In a post on X in the early hours of the morning, the board’s new interim CEO, Emmett Shear, wrote that before he accepted the job he’d asked why Altman was removed. “The board did not remove Sam over any specific disagreement on safety,” he wrote. “Their reasoning was completely different from that.” Shear didn’t offer any information on what that reasoning was.
Sutskever himself then seemed to quash the possibility that he and the board had acted out of fear that Altman wasn’t taking proper care with OpenAI’s technology, when his name appeared among the nearly 500 staff members who signed a letter threatening to quit if Altman wasn’t restored. Within hours, some 95 percent of the company had signed on.
Sutskever also wrote in a post on X that he deeply regretted his role in the board’s actions, again seeming to negate the idea he’d had major safety concerns. “I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company,” he wrote.
Late on Monday, Microsoft CEO Satya Nadella, whose company has pledged to invest more than $10 billion in OpenAI, said he was also in the dark about the board’s reasoning for acting against Altman. In a televised interview on Bloomberg, he said he hadn’t been told of any issues by anyone from OpenAI’s board. “Therefore I remain confident in Sam and his leadership and capability, and that's why we want to welcome him to Microsoft,” he said.
Four days into the OpenAI upheaval, the original reason for the board’s decision to fire Altman remains unclear.
Before he was removed as CEO, Altman sat on OpenAI’s board alongside Brockman, Sutskever, and three outsiders: Adam D’Angelo, CEO of Quora, which has its own chatbot, Poe, built in part on OpenAI technology; Tasha McCauley, CEO of GeoSim Systems; and Helen Toner, an expert on AI and foreign relations at Georgetown’s Center for Security and Emerging Technology. McCauley is on the UK board of Effective Ventures, a group affiliated with effective altruism, and Toner used to work for the US-based effective-altruism group Open Philanthropy.
Altman and his cofounders created OpenAI as a nonprofit counterweight to corporate AI development labs. By creating a for-profit unit to draw commercial investors in 2019 and launching ChatGPT almost a year ago, he oversaw its transformation from a quirky research lab into a company that vies with Google and other giants not just scientifically but also in the marketplace.
Earlier this month, Altman capped off that transformation by hosting the company’s first developer conference, where he announced a kind of app store for chatbots. Somewhere along that trajectory, his board apparently saw reason for concern and decided it had to act.
Additional reporting by Paresh Dave.