OpenAI Shows How Frail Corporate Wizardry Can Be

This piece is cross-posted from my Medium account. Check it out for more posts like these.


And in a finale (hopefully) to the Game-of-Thrones-esque OpenAI saga, Sam Altman is back. The previous board — barring one member, Adam D’Angelo — is gone. A new board, currently including ex-Salesforce Co-CEO Bret Taylor and ex-Treasury Secretary Larry Summers, is being formed. Altman announced his move to Microsoft on Monday, but confirmed his return to OpenAI by Wednesday. At the heart of the reversal was reportedly the board’s capitulation to OpenAI’s employees, roughly 95% of whom signed a letter threatening to leave if Altman was not reinstated.

For all observers, this has been a thrilling spectacle to witness. Even Kara Swisher, a renowned tech journalist who has become the de facto chief reporter on the issue, admits her shock:

It was clearly a crisis. I mean, let’s be honest. I mean, this was unprecedented. I’ve covered Silicon Valley, I’ve known you for decades. I’ve never seen anything like this.

As many may now know, OpenAI’s unique corporate governance structure had a major part to play in the debacle. In 2015, OpenAI started as a nonprofit with the stated goal of “building safe and beneficial artificial general intelligence for the benefit of humanity”.

However, due to the highly capital-intensive nature of OpenAI’s research, it soon became clear that it would be infeasible to rely on donations alone. Hence, Altman spearheaded efforts to create a “capped profit” subsidiary under the nonprofit, which limited investors’ returns to a 100x multiple of their original principal.
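Mechanically, the cap works like a simple minimum: an investor receives whatever their stake would gross, up to the cap, and anything beyond that flows back to the nonprofit. A rough sketch (all figures hypothetical; the actual cap reportedly applied to early investors and could be adjusted for later funding rounds):

```python
def capped_return(principal: float, gross_return: float,
                  cap_multiple: float = 100.0) -> float:
    """Payout to an investor under a profit cap.

    Value generated beyond cap_multiple * principal accrues to the
    nonprofit rather than to the investor.
    """
    return min(gross_return, cap_multiple * principal)

# A hypothetical $10M investment that would gross $2B uncapped is
# limited to $1B; the excess $1B flows back to the nonprofit.
payout = capped_return(10e6, 2e9)
```

Below the cap, the investment behaves like ordinary equity; the cap only binds in extreme-upside scenarios, which is exactly the regime OpenAI expected a successful AGI effort to reach.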

But key to this structure was that OpenAI retained the nonprofit’s mission. The nonprofit’s board had the ultimate power over governing the capped-profit entity, including the ability to fire the chief executive. 

As such, OpenAI’s board did not represent the interests of its investors like a conventional board. Rather, it acted to pursue OpenAI’s goal of making artificial general intelligence benefit humanity. Hence, despite Altman’s competence in rapidly growing the company, the board could still fire him for reasons unrelated to operational performance.

Roughly, this is what happened. The OpenAI board fired Altman on the grounds that he was “not consistently candid in his communications” with them. This vague explanation was the impetus for much of the chaos surrounding Altman’s firing.

The original leading theory for Altman’s firing, spearheaded by Kara Swisher, was a disagreement over how OpenAI should balance its research and commercial goals. Altman’s leadership shifted the company in a more profit-focused direction, preferring swift development and release of AI tools for commercial gain. Meanwhile, other board members, led by Chief Scientist Ilya Sutskever, believed that Altman was turning a deaf ear to AI’s risks and hence fired him.

Yet this doesn’t align with the reasoning provided by Sutskever later on. According to Business Insider, Sutskever held a meeting with employees on Sunday night to announce that Twitch co-founder Emmett Shear had been hired as CEO. During this meeting, he reportedly provided two explanations for Altman’s firing. 1) Altman assigned two OpenAI employees to work on the same project; and 2) Altman gave two board members different opinions about an employee.

It’s unsurprising that employees neither believed these explanations nor took them well. The consensus among them was that this was a “coup” by the board.

With employees increasingly allying with Altman over the board, it’s no surprise that they almost unanimously threatened to quit if Altman did not return as CEO.

It’s undeniable that the board handled Altman’s firing unprofessionally. Even if they had legitimate grounds for firing him, they did not convey them convincingly. They also acted seemingly without consulting anyone else. Even senior executives like COO Brad Lightcap were unaware, admitting in an internal memo that Altman’s firing “took us all by surprise” and was purely due to “a breakdown in communication between Sam and the board.”

Perhaps more importantly, OpenAI’s investors, including the likes of Microsoft, Thrive Capital and Sequoia Capital, were not informed of Altman’s firing until a minute before it was announced publicly, leaving Microsoft CEO Satya Nadella “blindsided” and furious.

A Difficult Corporate Structure

Image by Growtika

The reasons behind Altman’s initial firing are still hazy and unconfirmed. But if it was truly driven by legitimate safety concerns, the board’s unprofessional handling of the situation would be even more tragic. In theory, its actions were a feature, not a bug, of OpenAI’s well-intentioned structure. OpenAI was set up to be immune to shareholder pressure and singularly devoted to building AI for the benefit of all. If Altman truly was being too hasty and risky with OpenAI’s technologies, the board’s firing of him should have been praiseworthy.

It’s also difficult to portray the board members as evil, power-hungry maniacs seeking to overthrow Altman for malicious reasons. Take Sutskever, for instance. He left a cushy Google job in 2015 to help found OpenAI, and was especially keen that it be a nonprofit not driven by commercial incentives. He has long been wary of AI’s dangers, particularly the risk of an AI superintelligence going rogue, and installed himself as head of a team dedicated to ensuring that OpenAI’s systems were safe for human use. In fact, today’s deep-learning boom was kickstarted by a formative paper that Sutskever co-authored in 2012. By all accounts, he is a highly regarded AI expert well-suited to exploring how AI could benefit humanity, not to launching aggressive boardroom coups.

Meanwhile, it’s also hard to believe that other board members fired Altman for malicious and selfish reasons, rather than safety concerns. Two members, Tasha McCauley and Helen Toner, are tied to the Rationalist and Effective Altruist movements, which strongly express concerns about AI one day destroying humanity. In fact, one major reason for the breakdown in communication between Altman and the board was reportedly a paper Toner published that argued for stronger governmental regulation on AI. Hardly the stuff of hostile power-hungry lunatics.

However, even if the board’s intentions were noble, it’s still clear that they handled Altman’s firing terribly. But this is not entirely their fault. OpenAI’s structure was flawed from the beginning. The very nature of the AI development industry meant that the board would never be immune to the demands of two key groups: employees and investors.

First, employees’ sheer talent allowed them to undermine the board’s authority. There are many “AI professionals” in the world, but very few work at the cutting edge the way OpenAI’s do. Since ChatGPT’s release last year, the ensuing AI wars have seen demand for these bona fide AI experts explode. OpenAI’s staff vigorously defended Altman, flooding X with posts declaring that “OpenAI is nothing without its people”. They were right. With tech giants like Microsoft and Salesforce eagerly trying to snap them up, their bargaining power soared. The board knew that if its employees disappeared, so would OpenAI’s lead in the AI wars. Hence, it had to relent.

Second, OpenAI’s investors, particularly Microsoft, wielded outsized influence in the debacle. On paper, investors only had stakes in the capped-profit subsidiary, and invested with the understanding that practically all of this subsidiary’s actions would be controlled by the parent nonprofit organisation.

But this weekend proved that assumption wrong. OpenAI’s investors reportedly had substantial say in the efforts to get Altman reinstated. But perhaps the most callous disregard for OpenAI’s structure came from its largest investor, Microsoft. With its own thriving AI departments and technologies, Microsoft clearly had a substantial conflict of interest with the OpenAI nonprofit. By offering Altman the chance to lead further AI research at Microsoft, it proved that it is not just an investor in OpenAI, but also a competitor. Had the board not relented, many employees could very well have defected to Microsoft as well, allowing Microsoft to effectively buy OpenAI for free.

Further, Microsoft’s eager support of Altman at a troubling time likely won’t be forgotten anytime soon. Microsoft has strengthened its ties with Altman himself, and with him now back at OpenAI’s helm, that leverage will likely come in very handy in the future.

All this lays bare the governance failure of OpenAI’s board. Its actions, well-intentioned or not, have completely backfired, making OpenAI’s independence as a nonprofit more questionable than ever before. With a new board that’s shaping up to be less diverse and more approving of Altman, he will have freer rein to swiftly develop and release AI tools for commercial gain. Precisely what the old board hoped to avoid.

Better Corporate Governance Solutions

Image by Benjamin Child

If OpenAI’s wacky capped-profit-subsidiary-in-a-nonprofit structure has obviously not worked, what else will? As elaborated in one extremely insightful Medium article, one solution is the Public Benefit Corporation. Such companies are better suited to pursuing the twin goals of profit and social benefit. With a board comprising a greater diversity of stakeholders (including investors), they can balance these goals more smoothly. Anthropic, a rival firm founded by ex-OpenAI employees disillusioned with its focus on profit, adopted this approach, combining it with a Long-Term Benefit Trust that further aligns Anthropic’s corporate governance with its goal of creating AI that benefits humanity.

Such a solution seems promising, but it is still untested. It remains to be seen whether Anthropic will fall victim to the same unprofessionalism and vulnerability to stakeholders that plagued the old OpenAI board. Ultimately, corporate wizardry can only go so far. In trusting supposedly noble AI firms, we put too much faith in two main assumptions: first, that they are noble in the first place; second, that even if they are truly noble (as perhaps OpenAI’s board was), they are competent enough to manage their firms soundly. The OpenAI debacle has proven both assumptions shaky.

To truly represent humanity’s interests, elected governments will have to be substantially involved in the debate over AI. As the industry rapidly evolves and embeds itself in ever more aspects of our lives, governmental intervention through legislation and policy becomes ever more necessary. Given the magnitude of AI’s potential impact on society, including employment, privacy, ethics, and even national security, clear and comprehensive regulations are imperative. Such rules would set standards for the responsible development and deployment of AI technologies, and hold leading AI firms ethically accountable. They are crucial to harnessing AI’s transformative power while mitigating its risks, fostering public trust, and promoting equitable access to its benefits.

True, governments are imperfect and oftentimes slow in regulating new technologies. But it is untenable to let private firms control the industry without governmental say, regardless of how noble their corporate structure seems. AI is too important and risky an issue to be left up to corporate wizardry alone.

I’m MCC

Welcome to Perceptalk, my small corner of the internet dedicated to sharing fresh stories on business and finance. Here, I invite you to join me on a vibrant journey of original commentary and discussion.
