The greater divide at the heart of OpenAI

This piece is cross-posted from my Medium account. Check it out for more posts like these.


OpenAI was supposed to be different. Its non-profit structure was meant to protect its commitment to “building safe and beneficial artificial general intelligence for the benefit of humanity”. But once again, Silicon Valley proves that greed is king.

For the unaware, OpenAI CEO Sam Altman has joined a long list of Silicon Valley founders to be ousted from their firms. This storied group includes the likes of Steve Jobs and Jack Dorsey. But perhaps what separates Altman is how swift and shocking his dismissal was. Ever since ChatGPT was released in late 2022, he has become the public face of generative AI, presenting himself as a wise sage benevolently pushing AI forward. As recently as last Thursday, he had been boasting about the wonders of AI at the Asia-Pacific Economic Cooperation Summit. On Friday he was out. The only explanation OpenAI’s board provided was that Altman was “not consistently candid in his communications” with them.

The news was shocking not just to the general public, but to investors and employees too. OpenAI’s powerful shareholders — such as Microsoft, Thrive Capital and Sequoia Capital — were not informed about Altman’s firing until a minute before it was announced publicly. Unsurprisingly, Microsoft CEO Satya Nadella was “blindsided” and furious.

Business Insider similarly reported that employees were “completely shocked”. This makes sense. From an operational perspective, Altman was firing on all cylinders. He had just completed a world tour touting the wonders of AI, courting global leaders from Narendra Modi to Rishi Sunak. Less than two weeks ago he held OpenAI’s first developer conference, launching a suite of innovative new tools. An internal memo from COO Brad Lightcap admitted that Altman’s firing “took us all by surprise”, and was purely due to “a breakdown in communication between Sam and the board.”

Initially, there were strong rumours that Altman would pull a Jobs/Dorsey and return to OpenAI. Indeed, desperate investors, especially Microsoft, were reportedly plotting Altman’s return. Numerous employees threatened to quit if the board did not relent.

These rumours have proven false. The board remains firm. Altman, along with former OpenAI board chairman and fellow co-founder Greg Brockman, is moving to Microsoft to head a cutting-edge AI research team. Meanwhile, OpenAI has hired Twitch co-founder Emmett Shear as its new CEO.

Image by TechCrunch

The leading theory for Altman’s firing, spearheaded by New York Magazine’s Kara Swisher, is a disagreement over how OpenAI should balance its research and commercial goals. Altman’s leadership was shifting the company toward a focus on profit and speedy development. Many board members, emphasising safety and caution, saw that as too risky and fired him.

In other companies, so easily ousting a founder wouldn’t have been possible. But OpenAI’s unusual corporate governance structure allowed it. OpenAI started in 2015 as a nonprofit, with the goal of “building safe and beneficial artificial general intelligence for the benefit of humanity”. It soon became clear that relying on donations alone would not be feasible. As such, Altman led efforts to create a “capped profit” subsidiary under the nonprofit. This mechanism meant that investors’ returns would be limited to a certain percentage of their principal.

But OpenAI retained the nonprofit’s mission. The nonprofit’s board had the power to govern the capped-profit entity. These powers, of course, included the ability to fire the chief executive. Also, Altman doesn’t directly own any shares in OpenAI, further diminishing his power.

What this means is that OpenAI’s board does not act to represent the interests of its shareholders. Rather, it acts to further OpenAI’s goal of broadly beneficial artificial general intelligence. Hence, even though Altman’s leadership led to rapid growth for the company, the board could still fire him on the grounds that he was prioritising profits over the company’s ultimate goal.

If this is true, it would be the most striking manifestation of an ongoing dispute over the identity of OpenAI, and of the greater AI industry. It represents a fiery conflict over the risks of AI, and over how fast AI tools should be developed and released. One fundamental part of OpenAI’s culture is fear of AI’s dangers. Its founders believed that their understanding of these risks made them the right people to build the technology.

But this culture masked two camps with opposing views on how quickly AI should be developed. Simplistically, these are the “move faster” and “move slower” groups.

Altman has always been part of the “move faster” club. Under his leadership, OpenAI beat rival AI products to market by launching ChatGPT in late 2022, transforming the company into an international phenomenon. At times he seemed invincible. In late September he was in talks to secure $1 billion in funding from SoftBank to build the “iPhone of artificial intelligence”. In October he was exploring options to make AI chips in-house.

Meanwhile, much of the board has been part of the “move slower” club. This group is more conservative, focusing more on AI’s dangers than its benefits. Look no further than chief scientist Ilya Sutskever, supposedly the mastermind behind Altman’s removal. In 2015, he left a cushy Google job to help found OpenAI. He was keen that it be a nonprofit, not driven by commercial incentives. He has long been wary of AI’s dangers, particularly the danger of an AI superintelligence going rogue.

Other members of the board share similar conservative stances on AI. Two such members, Tasha McCauley and Helen Toner, are tied to the Rationalist and Effective Altruist movements, which strongly express concerns about AI one day destroying humanity.


As a result of these conflicting beliefs, board members led by Sutskever reportedly disagreed with Altman over the speed of commercialising generative AI tools, and over how to mitigate their potential dangers. With resentment growing between the “move faster” and “move slower” groups, Altman was fired.

What has this caused? In the short term, chaos. Three senior researchers quit on the day of Altman’s firing. Sutskever’s persistence over the weekend has allegedly driven dozens more employees to follow. AI’s current frontrunner is haemorrhaging its workforce.

Longer-term, the debacle is likely to have disturbing ramifications. The most obvious is the decline of OpenAI. Today, Altman is largely considered the face of AI, and enjoys huge support from OpenAI’s workforce. Other key employees, like interim CEO Mira Murati and COO Brad Lightcap, are Altman allies who have vocally expressed their support for him on X. It’s fair to say that over the coming weeks, a steady stream of employees will leave, either to follow Altman or simply to find more stable employment elsewhere in the AI industry.

This is worrying. OpenAI is the only major AI player with a non-profit structure. Others, such as Microsoft and Google, are predominantly pursuing AI for financial gain. Altman’s move to Microsoft frees him to pursue profit-related initiatives without the cautious restraints of OpenAI’s nonprofit structure. That means rapid and somewhat reckless plans, like Altman’s aforementioned dreams of making in-house AI chips and building the “iPhone of artificial intelligence”. With both investors and employees seemingly more loyal to Altman than OpenAI, it is highly likely that Microsoft has just become the biggest winner of the OpenAI chaos.

Altman’s ambitions have an aura of wonder around them. But as for-profit companies move closer to the forefront of AI, how they will utilise these technologies is unclear and concerning. This is not blanket criticism of tech giants as evil overlords in the making, nor is it blanket praise of OpenAI as humanity’s AI saviour. It is merely to note that OpenAI’s nonprofit structure gives it the obligation — at least in principle — to defend humanity’s interests. And that for-profit companies’ primary obligation is to shareholders, not to some nebulous concept of humanity. In any case, tech giants will release AI technology with more ambition and recklessness than OpenAI has done. And if OpenAI lags behind its for-profit rivals, the risk AI poses to humanity will only become more apparent.

Image by Mariia Shalabaieva 

The story is oddly ironic. OpenAI’s well-intentioned structure could very well be its undoing. Its leaders nobly sought to marry the upright incentives of a nonprofit with the capital-raising abilities of a profit-focused company. They came up with an ingenious solution of launching a “capped-profit” subsidiary, while simultaneously building safeguards to ensure that the nonprofit board always had the final say, even to fire the CEO. And when the board did exercise this say, everyone revolted.

What lesson does this story teach us? Merely the all-encompassing persistence of greed. OpenAI was originally established to be a “counterweight” to for-profit companies blindly exploring AI technologies with little regard for their dangers. Altman’s firing now frees him to become exactly what OpenAI was founded to counter. Unbound by the pesky restraints of OpenAI’s nonprofit mission, he can pursue commercial-minded ventures more ambitiously, and more recklessly. And he will (probably) bring much of the OpenAI workforce with him. Caution for AI be damned.

This is not to say that Altman should be the poster child of capitalistic greed. Certainly, this saga is still in its infancy, and it is too early to make such claims. There are undeniably many more nuances to Altman’s firing that are currently unknown, including ones unrelated to the debate between the “move slower” and “move faster” groups. Altman’s move to Microsoft speaks less about him than about the sheer power and ruthlessness of tech giants like Microsoft in achieving their goals.

Rather, what the past weekend has proven, above all, is that money talks. Even the most well-intentioned and plucky of nonprofits could not overcome this fact. Further, its stubborn adherence to its core values may well be the catalyst for its decline. And for that, we are all worse off.

