Preston Gralla
Contributing Editor

Is OpenAI’s Sam Altman becoming a liability for Microsoft?

Opinion
May 30, 2024 | 8 mins
Generative AI | Microsoft

Altman’s recent actions have struck many as unethical — and that could spell trouble for Microsoft, OpenAI’s closest partner.

Credit: © Microsoft

Big tech companies are often not built on technology alone. Frequently, they also gain prominence and success thanks to the outsized exploits or personalities of their founders or leaders. Bill Gates and Steve Jobs are two of the earliest and best-known examples.

In some cases, founders or CEOs have the opposite effect: their personas do serious damage to their companies. The most prominent current example is Elon Musk, whose embrace of right-wing conspiracy theories is doing Tesla great harm, because so many of the EV maker's potential customers are liberals or progressives who have vowed never to buy its cars.

Microsoft CEO Satya Nadella is nobody’s idea of an outsized tech personality. The no-drama technocrat tends to stay out of the public eye, and when he’s in it, he’s not exactly mesmerizing. His rescue of Microsoft from irrelevancy had to do with smarts, vision, and excellent managerial skills, not a compelling persona.

A very different figure connected with Microsoft, one who helped turn it into the world's most influential, powerful, and wealthy AI company, is Sam Altman, co-founder and CEO of OpenAI. Microsoft has invested $13 billion in OpenAI and is its closest partner; OpenAI created the technology underlying Microsoft's generative AI tool Copilot.

Altman has become Mr. AI, ubiquitous in the news media, in the halls of Congress, and beyond. As the public face of genAI and the technology’s most well-known booster, he’s one of the reasons AI has taken off like it has.

So far, that's been great for Microsoft: the more he pushes AI, the more Microsoft gains. But there are signs that may be ending. Altman's reputation has recently been tarnished by claims that he used a voice mimicking the actress Scarlett Johansson as the audio interface for a personal AI assistant, without her permission. Beyond that, he appears to have abandoned his promise to make sure AI doesn't turn destructive, and he's taking hits for it.

If Altman becomes toxic, what will Microsoft do? Will the company double down on support for him, or drop him as fast as it can? To understand that, we’ll look at his recent controversies, then at Microsoft’s likely response.

The mighty fall fast

Until recently, Altman was the friendly face of AI, the go-to expert for members of Congress who wanted to understand how it worked and what they should do about it, an I-feel-your-pain-and-worry-about-AI kind of guy who admitted that yes, perhaps AI might destroy humankind as we know it, but there's plenty we can do about it, so let's start now.

A blip of controversy last November, in which OpenAI’s board of directors fired Altman because “he was not consistently candid in his communications with the board,” quickly blew over. Microsoft staunchly defended its golden boy, offering to hire him to head its advanced AI research team, and Altman was reinstated as CEO of OpenAI after an outcry from investors and employees.

But lately, things have begun unraveling for him. It began with Johansson claiming that OpenAI illegally copied her voice for Sky, the company's personal AI assistant, after she refused to license it to Altman. He wanted her voice, she says, because she voiced the AI assistant at the center of the movie Her.

She says she turned him down once, and he tried again. That second time, she says, before he even heard back from her, he used a copy of her voice in a public demo of the technology. To back up her claim, she notes that right before the demo was released, Altman tweeted a single word: “Her.”

“When I heard the released demo, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” she said.

Altman denies Johansson's claims. But OpenAI pulled the Sky voice, without any explanation.

Some people were shocked by Altman's actions. But that's only because they haven't been paying attention. ChatGPT and Copilot are built on the theft of intellectual property. Like all large language models (LLMs), they need vast amounts of text for training. Under Altman, OpenAI hoovers up everything it can find, whether it's copyrighted or not, and whether the company has licensed the rights to it or not. That's led to a tsunami of lawsuits charging OpenAI with intellectual property theft, including from the New York Times, the Chicago Tribune, comedian Sarah Silverman, novelists Jodi Picoult and George R.R. Martin, and many others.

Altman’s response to the suits: Lawyer up and keep on hoovering.

That was just the start. Altman has pitched himself as the ethical face of AI. He’s frequently warned that AI could represent an existential threat to humankind if allowed to be developed unchecked, and urged governments and big tech to put up serious guardrails around it to make sure that won’t happen. That’s one reason Congresspeople and others flock to him; they believe he’s focused on the good AI can do for humankind rather than being a mere money grubber.

Altman co-founded OpenAI in 2015 as a nonprofit. He and the other founders said their primary goal was to make sure AI would be "used in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

That warning, though, seems to be just for public consumption. Behind the scenes, Altman is full speed ahead on AI, no matter the consequences. He recently disbanded the team in charge of addressing the long-term risks AI poses.

After that, top researchers at the company resigned, some openly criticizing him for going back on his promise to develop AI ethically and safely. Jan Leike, co-leader of the disbanded team, quit and warned on X that "safety culture and processes have taken a backseat to shiny products" at OpenAI. OpenAI co-founder, board member, and chief scientist Ilya Sutskever recently resigned as well. He didn't go public with his reasons, but tellingly, he was one of the board members who voted months earlier to oust Altman.

What will Microsoft do?

Given Microsoft's close ties with OpenAI, all this could harm the company. Microsoft may be on top of the AI heap for now, but things change fast in tech. Many enterprises and individuals worry about AI's consequences. They want to buy AI from a company they trust to make sure its products are safe and ethical.

In the wake of the recent high-profile departures, OpenAI did establish a new committee to oversee the safety and security of its AI models, which may go some way toward assuaging customers’ doubts. But if Altman is seen as untrustworthy, they may want to take their business to a Microsoft competitor.

If Altman's public persona sours even more, expect Microsoft to take action. As I've written before, Microsoft is distancing itself from OpenAI and preparing to go it alone. It's building its own internal AI team, headed by Mustafa Suleyman, co-founder of the AI startup DeepMind, which Google bought in 2014. Nadella has made it clear that Microsoft ultimately has no need for OpenAI, saying that if "OpenAI disappeared tomorrow… we have all the IP rights and all the capability. We have the people, we have the compute, we have the data, we have everything. We are below them, above them, around them."

If necessary, Nadella will throw Altman under the bus. Altman is arrogant enough to believe he can take on Microsoft. But he’s wrong. Microsoft is the world’s most powerful and wealthy AI company. In any showdown between Microsoft and OpenAI, Microsoft comes out on top.