"CEOs can't wait to read Sunny Bindra's articles every week."

What this CEO’s chaotic sacking revealed

Picture this scenario:

The board of directors of an iconic organisation decides to fire its well-known CEO, without warning, issuing a cryptic announcement to that effect and appointing an interim CEO from within the organisation to hold the fort. That sort of thing happens often, so nothing too extraordinary thus far.

All hell breaks loose, however. It turns out that leading investors, including one that has a huge stake in the business, only found out about the exit shortly before the announcement. Staff members are stunned and unhappy. The media and social platforms are ablaze with speculation about what the CEO might have done to warrant his defenestration.

Just a day later, rumours abound that the ousted CEO may be making a dramatic return, backed by the key investors. Even the interim CEO canvasses for the return of her former boss. After a flurry of meetings, the board seems to hold firm and announces a new interim CEO, an outsider this time. The new CEO indicates that he had been on family leave, but that this opportunity was too big to ignore.

The key investor announces that the ejected leader will be joining them in a senior position. Employees write a joint letter demanding the restoration of their former leader, failing which they plan to resign en masse. The investor offers them all jobs too, at their current salaries. The board member thought to have instigated the firing writes a post apologising for his role.

Three days after the sacking, the company announces that the original CEO will be returning to his position, and that the board will be reconstituted under a new chair. The returning leader now becomes the fourth boss in five days, replacing himself and two interims. The first interim resumes duties under her new/old boss; the second interim returns to his family.

Too crazy to be true, right? Nope. You were following this fiasco on the news, I’m sure. The company is OpenAI, originator of ChatGPT, the artificial intelligence technology that took the world by storm in November 2022. The ousted/reinstated leader was Sam Altman; the key investor orchestrating his rescue was Microsoft.

At one level this is sheer farce. Sam Altman is the Steve Jobs of the current era, the charismatic young leader fronting a breakthrough technology. Steve was himself famously exiled from Apple, the company he founded, for twelve years. Some witty folks suggested that Sam’s was a modern-day exile, speeded up for the TikTok era!

So many mistakes were made here that I would need many more Sunday columns to do justice to them. However, that board could have just spared itself the agony and asked its own AI bot, ChatGPT, what a board should and shouldn’t do when firing a CEO unexpectedly. Here’s some of the advice the AI would have given. Make sure the reasons for firing are specific, factual, and defensible. Convey the decision respectfully, ideally in a face-to-face meeting. Have a clear transition plan before doing the deed. Provide clear statements for employees and stakeholders.

ChatGPT also offers some no-nos. Do not make the decision impulsively. Do not leave yourselves open to legal action. Do not fail to consider the reactions of employees and investors. Do not be unprepared to handle the media in the aftermath. 

Clearly, the board did not use its own technology! 

And yet. Before we settle on a convenient scapegoat, let’s try to dig a little deeper. What’s really at stake here? There is a huge philosophical debate raging about AI, the technology that has the potential to change the world irrevocably. One side, with dollar signs flashing in its eyes, feels we should be going full steam ahead so that this tech can be deployed for everyone’s benefit. Another school of thought recommends great caution. If we accelerate into Artificial General Intelligence (AGI) and create an entity smarter than humans, we may be the architects of our own demise.

This boardroom battle seemed to pit the “doomer” faction (most of the OpenAI board) against the more gung-ho (“boomer”) camp led by Sam Altman. It must also be remembered that the board in question was that of the non-profit entity, not the commercial organisation it owns. As such it has a duty different from that of normal corporate boards: to manage AI in the interests of humanity, not just a fiduciary duty to shareholders. If it acted to prevent what it saw as worrying acceleration by its CEO, pursued without due regard to safety, it was within its rights to do so. Yet it has failed to explain its reasons, and reversed its decision.

This episode demonstrated all the failings of humans: how emotional we are in our decisions; how greedy when big money is there to be made; how suspicious and easily divided in adversity; and how comically capable of losing control, even when we have benefited from advanced education and long experience. More will emerge in this story, but my key reflection for now is that if we end up in a battle against a super-rational AGI, able to replicate and improve itself without external intervention, it would be game over for humans very quickly.

(Sunday Nation, 3 December 2023)

