The problem with asking investors for money is this: they want to see a return.
OpenAI was launched with a famously altruistic mission: to help humanity by developing artificial general intelligence. But along the way, it became one of the best-funded companies in Silicon Valley. Now the tension between these two facts has come to a head.
Just weeks after releasing a new model it claims can “reason,” OpenAI is reportedly moving to shed its non-profit status, several of its most senior employees are heading for the exits, and CEO Sam Altman, who was briefly fired last year over apparent trust issues, is cementing his position as one of the most powerful people in tech.
On Wednesday, Mira Murati, OpenAI’s longtime chief technology officer, announced she was leaving the company to “create time and space” for her own exploration. The same day, Chief Research Officer Bob McGrew and Vice President of Post-Training Barret Zoph said they were also leaving. Following Murati’s announcement, Altman called the leadership changes “a natural part of companies” in an X post.
“I’m obviously not going to pretend that it was natural for this to happen so suddenly, but we are not a normal company,” Altman wrote.
But the departures really began last year, after the board’s failed attempt to fire Altman. OpenAI co-founder and chief scientist Ilya Sutskever, who informed Altman of his firing and then publicly walked it back, left the company in May. Jan Leike, a key safety researcher, resigned days later, saying that “safety culture and processes” had “taken a backseat to shiny products.” In the aftermath of the failed ouster, nearly every member of the board that voted to remove Altman resigned, with the exception of Quora CEO Adam D’Angelo, and Altman himself gained a board seat.
The company that once fired Altman for being “not consistently candid in his communications” has since been reshaped by him.
No longer just a “donation”
OpenAI began as a non-profit research lab and later spun up a for-profit subsidiary, OpenAI LP. The for-profit arm can raise the money needed to build artificial general intelligence (AGI), while the non-profit’s mission is to ensure that AGI benefits humanity.
In a bright pink box on a webpage describing its board structure, the company stresses that it would be “wise” to view any investment in OpenAI “in the spirit of a donation,” and that investors may “see no return.”
Investors’ returns are capped at 100 times their investment, with any excess going to the non-profit, which is meant to prioritize social benefit over financial gain. And if the for-profit arm strays from that mission, the non-profit can step in.
We are well past the “spirit of a donation” here
OpenAI is reportedly now valued at nearly $150 billion, roughly 37.5 times its reported revenue, with no clear path to profitability. It is said to be seeking funding from the likes of Thrive, Apple, and an investment firm backed by the United Arab Emirates, with a minimum investment of $250 million.
OpenAI doesn’t have the deep pockets of established players like Google or Meta, both of which are building competing models (although it’s worth noting that these are public companies with their own responsibilities to Wall Street). Companies founded by former OpenAI researchers are following in its footsteps, reportedly seeking new funding at a $40 billion valuation of their own. We are well past the “spirit of a donation” here.
OpenAI’s “for-profit, managed by a non-profit” structure puts it at a disadvantage when it comes to raking in money. So it made perfect sense when Altman told employees earlier this month that OpenAI would restructure as a for-profit company next year. This week, reports emerged that the company is considering becoming a public benefit corporation (like Anthropic) and that investors plan to give Altman a 7 percent equity stake. (Altman almost immediately denied the latter in a staff meeting, calling it “ridiculous.”)
Crucially, OpenAI’s non-profit parent would reportedly lose control in these changes. Just weeks after the news broke, Murati and company were out.
Both Altman and Murati have claimed that the timing is just a coincidence and that the chief technology officer simply wanted to leave while the company was “on the rise.” Murati, through a representative, declined to be interviewed by The Verge about the sudden move. Wojciech Zaremba, one of the last remaining OpenAI co-founders, likened the departures to “the hardships faced by parents in the Middle Ages, when six out of eight children would die.”
Whatever the reason, this marks a near-total turnover of OpenAI’s leadership since last year. Apart from Altman himself, the last remaining member of the group featured on the cover of Wired in September 2023 is president and co-founder Greg Brockman, who backed Altman during the attempted coup. But even he has been on personal leave since August and isn’t expected to return until next year. The same month he went on leave, another co-founder and key leader, John Schulman, left to work at Anthropic.
When reached for comment, OpenAI spokesperson Lindsay McCallum Rémy pointed The Verge to the company’s previous comments to CNBC.
No longer just a “research lab”
Speaking of “shiny products,” as Leike alluded to in his parting shot at OpenAI, turning a research lab into a for-profit company puts many long-time employees in an awkward position. Many of them likely joined to focus on AI research, not to build and sell products. And while OpenAI is still governed by a non-profit, it’s not hard to guess how a profit-focused version would operate.
Research labs can work on longer timescales than revenue-chasing companies. They can delay product launches when necessary, with less pressure to ship quickly and scale up. And, perhaps most importantly, they can afford to be more conservative about safety.
There is already evidence that OpenAI is more focused on shipping fast than shipping carefully: one source told The Washington Post in July that the company threw a launch party “before it knew whether GPT-4o was safe to launch.” The Wall Street Journal reported on Friday that safety staffers worked 20-hour days and had little time to double-check their work. Initial test results showed that GPT-4o wasn’t safe enough to deploy, but it was deployed anyway.
Meanwhile, OpenAI researchers keep working on what they see as the next step toward human-level artificial intelligence. o1 is OpenAI’s first “reasoning” model, the start of a new series the company hopes will power intelligent automated “agents.” The company keeps rolling out new features ahead of its competitors: this week it launched an advanced voice mode for all users, just days before Meta announced a similar product at Connect.
So, what is OpenAI becoming? All signs point to a traditional technology company under the control of one powerful executive, exactly the structure it was set up to avoid.
After Murati announced her departure, Altman said onstage at Italian Tech Week: “I think this will be a great transition for everyone involved, and I hope OpenAI will be stronger for it, as we are for all of our transitions.”