How OpenAI created the perfect conditions for chaos
There are so many questions about how Sam Altman was fired.
How could just four board members act to remove the company's chief executive? Why did some board members have so little board-level experience? And why didn't Microsoft, by far the biggest investor in OpenAI, have a board seat?
The answers to those questions lie in the company's idiosyncratic structure - and to understand that you have to go back to 2015.
When OpenAI started out, it was resolutely non-profit. Its press release made that crystal clear: "Our aim is to build value for everyone rather than shareholders," it said.
The goal was "to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return".
There was also a view that the goal of achieving artificial general intelligence (AGI) - AI that can perform any task that a human being is capable of - could be achievable with relatively little money.
In 2015, OpenAI said funders had committed $1bn (£799m) to the project - but that "we expect to only spend a tiny fraction of this in the next few years".
By 2019 there was a realisation that this was naive. Huge amounts of cloud computing power would be needed to achieve its lofty ambitions. That meant big investment.
But non-profits, by their nature, struggle to attract the funding that for-profits can generate. So in 2019, a strange hybrid was born.
This is where things start to get complicated.
The non-profit OpenAI created a for-profit arm of the business. Investors could suddenly put money into the company and hope for returns - which would be capped at 100 times their initial investment.
The for-profit and non-profit arms would form a partnership - but not one of equals.
To make sure the for-profit wing aligned itself to OpenAI's fundamental values, a decision was made that the non-profit board would govern the entire company.
What did that mean? Well, in theory, the company could work towards its goal of helping humanity whilst also attracting major funding. A sort of "best of both worlds" composite.
In the words of OpenAI, the non-profit would ensure "the mission and the principles… take precedence over the obligation to generate a profit".
Mission, not profit
The structure was now… interesting. But it seemed to work. The cash rolled in.
None more so than Microsoft, which pumped billions into the company. In normal circumstances you'd expect such a large backer to ask for a board seat - and to be given one.
But OpenAI's structure was now anything but usual. Its non-profit board did not need to give its major investors a say in how the company was run - because the board was guided by its mission, not profit.
Over the next few years a number of senior board members left - often citing conflicts of interest. As of last week the board comprised just six people - an unusually small number for a company reportedly valued at $80bn.
That included three senior OpenAI execs: Greg Brockman (chairman & president), Ilya Sutskever (chief scientist), and Sam Altman (chief executive); and non-employees Adam D'Angelo, Tasha McCauley and Helen Toner.
The non-profit's board looked a little like… a non-profit's board. Some of those board members appeared to have very little other experience as board members.
"Most companies of OpenAI's size and consequence have boards of 8-15 directors, most of whom are independent and all of whom have more board experience," former Yahoo chief executive Marissa Mayer posted on X.
With decisions made by a majority, it meant that just four people could enact sweeping changes at OpenAI - including getting rid of its chief executive. And on Friday, that's exactly what they did. Sam Altman was fired - and Greg Brockman quit.