
Inside OpenAI’s crisis over the future of artificial intelligence

ASSOCIATED PRESS

The OpenAI logo is seen displayed on a cell phone with an image on a computer screen generated by ChatGPT’s Dall-E text-to-image model, Friday, Dec. 8, in Boston.

SAN FRANCISCO >> About noon on Nov. 17, Sam Altman, CEO of OpenAI, logged into a video call from a luxury hotel in Las Vegas. He was in the city for its inaugural Formula One race, which had drawn 315,000 visitors, including Rihanna and Kylie Minogue.

Altman, who had parlayed the success of OpenAI’s ChatGPT chatbot into personal stardom beyond the tech world, had a meeting lined up that day with Ilya Sutskever, chief scientist of the artificial intelligence startup. But when the call started, Altman saw that Sutskever was not alone — he was virtually flanked by OpenAI’s three independent board members.

Instantly, Altman knew something was wrong.

Unbeknownst to Altman, Sutskever and the three board members had been whispering behind his back for months. They believed Altman had been dishonest and should no longer lead a company that was driving the AI race. On a hush-hush 15-minute video call the previous afternoon, the board members had voted one by one to push Altman out of OpenAI.

Now they were delivering the news. Shocked that he was being fired from a startup he had helped found, Altman widened his eyes and then asked, “How can I help?” The board members urged him to support an interim CEO. He assured them that he would.

Within hours, Altman changed his mind and declared war on OpenAI’s board.

His ouster was the culmination of years of simmering tensions at OpenAI that pitted those alarmed by AI’s power against others who saw the technology as a once-in-a-lifetime profit and prestige bonanza. As divisions deepened, the organization’s leaders sniped and turned on one another. That led to a boardroom brawl that ultimately showed who has the upper hand in AI’s future development: Silicon Valley’s tech elite and deep-pocketed corporate interests.

The drama embroiled Microsoft, which had committed $13 billion to OpenAI and weighed in to protect its investment. Many top Silicon Valley executives and investors, including the CEO of Airbnb, also mobilized to support Altman.

Some fought back from Altman’s $27 million mansion in San Francisco’s Russian Hill neighborhood, lobbying through social media and voicing their displeasure in private text threads, according to interviews with more than 25 people with knowledge of the events. Many of their conversations and the details of their confrontations have not been previously reported.

At the center of the storm was Altman, a 38-year-old multimillionaire. A vegetarian who raises cattle and a tech leader with little engineering training, he is driven by a hunger for power more than by money, a longtime mentor said. And even as Altman became AI’s public face, charming heads of state with predictions of the technology’s positive effects, he privately angered those who believed he ignored its potential dangers.

OpenAI’s chaos has raised new questions about the people and companies behind the AI revolution. If the world’s premier AI startup can so easily plunge into crisis over backbiting behavior and slippery ideas of wrongdoing, can it be trusted to advance a technology that may have untold effects on billions of people?

“OpenAI’s aura of invulnerability has been shaken,” said Andrew Ng, a Stanford professor who helped found the AI labs at Google and Chinese tech giant Baidu.

An Incendiary Mix

From the moment it was created in 2015, OpenAI was primed to combust.

The San Francisco lab was founded by Elon Musk, Altman, Sutskever and nine others. Its goal was to build AI systems to benefit all of humanity. Unlike most tech startups, it was established as a nonprofit with a board that was responsible for making sure it fulfilled that mission.

The board was stacked with people who had competing AI philosophies. On one side were those who worried about AI’s dangers, including Musk, who left OpenAI in a huff in 2018. On the other were Altman and those focused more on the technology’s potential benefits.

In 2019, Altman — who had extensive contacts in Silicon Valley as president of startup incubator Y Combinator — became OpenAI’s CEO. He would own just a tiny stake in the startup.

“Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does,” said Paul Graham, a founder of Y Combinator and Altman’s mentor. “The other is that he likes power.”

Altman quickly changed OpenAI’s direction by creating a for-profit subsidiary and raising $1 billion from Microsoft, prompting questions about how those moves squared with the board’s mission of building AI safely.

Earlier this year, departures shrank OpenAI’s board to six people from nine. Three — Altman, Sutskever and Greg Brockman, OpenAI’s president — were founders of the lab. The others were independent members.

Helen Toner, a director of strategy at Georgetown University’s Center for Security and Emerging Technology, was part of the effective altruist community that believes AI could one day destroy humanity. Adam D’Angelo had long worked with AI as CEO of the question-and-answer website Quora. Tasha McCauley, an adjunct scientist at the Rand Corp., had worked on tech and AI policy and governance issues and taught at Singularity University, which was named for the moment when machines can no longer be controlled by their creators.

They were united by a concern that AI could become more intelligent than humans.

Tensions Mount

After OpenAI introduced ChatGPT last year, the board became jumpier.

As millions of people used the chatbot to write love letters and brainstorm college essays, Altman embraced the spotlight. He appeared with Satya Nadella, Microsoft’s CEO, at tech events. He met President Joe Biden and embarked on a 21-city global tour, hobnobbing with leaders such as Indian Prime Minister Narendra Modi.

Yet, as Altman raised OpenAI’s profile, some board members worried that ChatGPT’s success was antithetical to creating safe AI, two people familiar with their thinking said.

Their concerns were compounded when they clashed with Altman in recent months over who should fill the board’s three open seats.

In September, Altman met investors in the Middle East to discuss an AI chip project. The board was concerned that he wasn’t sharing all his plans with it, three people familiar with the matter said.

Sutskever, 37, who helped pioneer modern AI, was especially disgruntled. He had become fearful that the technology could wipe out humanity. He also believed that Altman was bad-mouthing the board to OpenAI executives, two people with knowledge of the situation said. Other employees had also complained to the board about Altman’s behavior.

In October, Altman promoted another OpenAI researcher to the same level as Sutskever, who saw it as a slight. Sutskever told several board members that he might quit, two people with knowledge of the matter said. The board interpreted the move as an ultimatum to choose between him and Altman, the people said.

Sutskever’s lawyer said it was “categorically false” that he had threatened to quit.

Another conflict erupted in October when Toner published a paper, “Decoding Intentions: Artificial Intelligence and Costly Signals,” at her Georgetown think tank. In it, she and her co-authors praised Anthropic, an OpenAI rival, for delaying a product release and avoiding the “frantic corner-cutting that the release of ChatGPT appeared to spur.”

Altman was displeased, especially since the Federal Trade Commission had begun investigating OpenAI’s data collection. He called Toner, saying her paper “could cause problems.”

The paper was merely academic, Toner said, offering to write an apology to OpenAI’s board. Altman accepted. He later emailed OpenAI’s executives, telling them that he had reprimanded Toner.

“I did not feel we’re on the same page on the damage of all this,” he wrote.

Altman called other board members and said McCauley wanted Toner removed from the board, people with knowledge of the conversations said. When board members later asked McCauley if that was true, she said that was “absolutely false.”

“This significantly differs from Sam’s recollection of these conversations,” an OpenAI spokesperson said, adding that the company was looking forward to an independent review of what transpired.

Some board members believed that Altman was trying to pit them against each other. Last month, they decided to act.

Dialing in from Washington, Los Angeles and the San Francisco Bay Area, they voted on Nov. 16 to dismiss Altman. OpenAI’s outside lawyer advised them to limit what they said publicly about the removal.

Fearing that if Altman got wind of their plan he would marshal his network against them, they acted quickly and secretly.

What Did Sam Do?

When news broke of Altman’s firing on Nov. 17, a text landed in a private WhatsApp group of more than 100 CEOs of Silicon Valley companies, including Meta’s Mark Zuckerberg and Dropbox’s Drew Houston.

“Sam is out,” the text said.

The thread immediately blew up with questions: What did Sam do?

That same query was being asked at Microsoft, OpenAI’s biggest investor. As Altman was being fired, Kevin Scott, Microsoft’s chief technology officer, got a call from Mira Murati, OpenAI’s chief technology officer. She told him that in a matter of minutes, OpenAI’s board would announce that it had canned Altman and that she was the interim chief.

Scott immediately asked someone at Microsoft’s headquarters in Redmond, Washington, to get Nadella out of a meeting he was having with top lieutenants. Shocked, Nadella called Murati to ask about the OpenAI board’s reasoning, three people with knowledge of the call said. In a statement, OpenAI’s board had said only that Altman “was not consistently candid in his communications” with the board. Murati didn’t have answers.

Nadella then phoned D’Angelo, OpenAI’s lead independent director. What could Altman have done, Nadella asked, to cause the board to act so abruptly? Was there anything nefarious?

“No,” D’Angelo replied, speaking in generalities. Nadella remained confused.

Turning the Tables

Shortly after Altman’s removal from OpenAI, a friend reached out to him. It was Brian Chesky, Airbnb’s CEO.

Chesky asked Altman what he could do to help. Altman, who was still in Las Vegas, said he wanted to talk.

The two men had met in 2009 at Y Combinator. When they spoke on Nov. 17, Chesky peppered Altman with questions about why OpenAI’s board had terminated him. Altman said he was as uncertain as everyone else.

At the same time, OpenAI’s employees were demanding details. The board dialed into a call that afternoon to talk to about 15 OpenAI executives, who crowded into a conference room at the company’s offices in a former mayonnaise factory in San Francisco’s Mission neighborhood.

The board members said that Altman had lied to the board but that they couldn’t elaborate for legal reasons.

“This is a coup,” one employee shouted.

Jason Kwon, OpenAI’s chief strategy officer, accused the board of violating its fiduciary responsibilities. “It cannot be your duty to allow the company to die,” he said, according to two people with knowledge of the meeting.

Toner replied, “The destruction of the company could be consistent with the board’s mission.”

OpenAI’s executives insisted that the board resign that night or they would all leave. Brockman, 35, OpenAI’s president, had already quit.

The support gave Altman ammunition. He flirted with creating a new startup, but Chesky and Ron Conway, a Silicon Valley investor and friend, urged Altman to reconsider.

“You should be willing to fight back at least a little more,” Chesky told him.

Altman decided to take back what he felt was his.

Pressuring the Board

After flying back from Las Vegas, Altman awoke on Nov. 18 in his San Francisco home, with sweeping views of Alcatraz Island. Just before 8 a.m., his phone rang. It was D’Angelo and McCauley.

The board members were rattled by the meeting with OpenAI executives the day before. Customers were considering shifting to rival platforms. Google was already trying to poach top talent, two people with knowledge of the efforts said.

D’Angelo and McCauley asked Altman to help stabilize the company.

That day, more than two dozen supporters showed up at Altman’s house to lobby OpenAI’s board to reinstate him. They set up laptops on his kitchen’s white marble countertops and spread out across his living room. Murati joined them and told the board that she could no longer be interim CEO.

To capitalize on the board’s vulnerability, Altman posted on X: “i love openai employees so much.” Murati and dozens of employees replied with emojis of colored hearts.

Even as the board considered bringing Altman back, it wanted concessions. That included bringing on new members who could control Altman. The board encouraged the addition of Bret Taylor, Twitter’s former chair, who quickly won everyone’s approval and agreed to help the parties negotiate. As insurance, the board also sought another interim CEO in case talks with Altman broke down.

By then, Altman had gathered more allies. Nadella, now confident that Altman was not guilty of malfeasance, threw Microsoft’s weight behind him.

In a call with Altman that day, Nadella proposed another idea. What if Altman joined Microsoft? The $2.8 trillion company had the computing power for anything that he wanted to build.

Altman now had two options: negotiating a return to OpenAI on his terms or taking OpenAI’s talent with him to Microsoft.

The Board Stands Firm

By Nov. 19, Altman was so confident that he would be reappointed CEO that he and his allies gave the board a deadline: Resign by 10 a.m. or everyone would leave.

Altman went to OpenAI’s office so he could be there when his return was announced. Brockman also showed up with his wife, Anna. (The couple had married at OpenAI’s office in a 2019 ceremony officiated by Sutskever. The ring bearer was a robotic hand.)

To reach a deal, Toner, McCauley and D’Angelo logged into a day of meetings from their homes. They said they were open to Altman’s return if they could agree on new board members.

Altman and his camp suggested Penny Pritzker, a secretary of commerce under President Barack Obama; Diane Greene, who founded the software company VMware; and others. But Altman and the board could not agree, and they bickered over whether he should rejoin OpenAI’s board and whether a law firm should conduct a review of his leadership.

With no compromise in sight, board members told Murati that evening that they were naming Emmett Shear, a founder of Twitch, a video-streaming service owned by Amazon, as interim CEO. Shear was outspoken about developing AI slowly and safely.

Altman left OpenAI’s office in disbelief. “I’m going to Microsoft,” he told Chesky and others.

That night, Shear visited OpenAI’s offices and convened an employee meeting. The company’s Slack channel lit up with emojis of a middle finger.

Only about a dozen workers showed up, including Sutskever. In the lobby, Anna Brockman approached him in tears. She tugged his arm and urged him to reconsider Altman’s removal. He stood stone-faced.

Breaking the Logjam

At 4:30 a.m. on Nov. 20, D’Angelo was awakened by a phone call from a frightened OpenAI employee. If D’Angelo didn’t step down from the board in the next 30 minutes, the employee said, the company would collapse.

D’Angelo hung up. Over the past few hours, he realized, things had worsened.

Just before midnight, Nadella had posted on X that he was hiring Altman and Brockman to lead a lab at Microsoft. He had invited other OpenAI employees to join.

That morning, more than 700 of OpenAI’s 770 employees had also signed a letter saying they might follow Altman to Microsoft unless the board resigned.

One name on the letter stood out: Sutskever, who had changed sides. “I deeply regret my participation in the board’s actions,” he wrote on X that morning.

OpenAI’s viability was in question. The board members had little choice but to negotiate.

To break the impasse, D’Angelo and Altman talked the next day. D’Angelo suggested former Treasury Secretary Lawrence Summers, a professor at Harvard, for the board. Altman liked the idea.

Summers, from his Boston-area home, spoke with D’Angelo, Altman, Nadella and others. Each probed him for his views on AI and management, while he asked about OpenAI’s tumult. He said he wanted to be sure that he could play the role of a broker.

Summers’ addition pushed Altman to abandon his demand for a board seat and agree to an independent investigation of his leadership and dismissal.

By late Nov. 21, they had a deal. Altman would return as CEO, but not to the board. Summers, D’Angelo and Taylor would be board members, with Microsoft eventually joining as a nonvoting observer. Toner, McCauley and Sutskever would leave the board.

This week, Altman and some of his advisers were still fuming. They wanted his name cleared.

“Do u have a plan B to stop the postulation about u being fired its not healthy and its not true!!!,” Conway texted Altman.

Altman said he was working with OpenAI’s board: “They really want silence but i think important to address soon.”


This article originally appeared in The New York Times.

