OpenAI won the battle, but it may have lost the war

Sam Altman, the poster boy for AI, was ousted from his company OpenAI.

The seismic shake-up at OpenAI has come as a shock to almost everyone. But the truth is, the company was probably always going to break. It was built on a fault line so deep and unstable that eventually, stability would give way to chaos.

That fault line was OpenAI’s dual mission: to build AI that’s smarter than humanity, while also making sure that AI would be safe and beneficial to humanity. There’s an inherent tension between those goals, because advanced AI could harm humans in a variety of ways, from entrenching bias to enabling bioterrorism. Now, the tension in OpenAI’s mandate appears to have helped precipitate the tech industry’s biggest earthquake in decades.

On Friday, OpenAI CEO Sam Altman was fired by the board over an alleged lack of transparency, and company president Greg Brockman then quit in protest. On Saturday, the pair tried to get the board to reinstate them, but negotiations didn’t go their way. By Sunday, both had accepted jobs with major OpenAI investor Microsoft, where they can continue their work on cutting-edge AI. By Monday, 95 percent of OpenAI employees were threatening to leave for Microsoft, too. By Tuesday, new reports indicated Altman and Brockman were still in talks about a possible return to OpenAI.

As chaotic as all this was, the aftershocks for the AI ecosystem might be scarier. A flow of talent from OpenAI to Microsoft means a flow from a company that had been founded on worries about AI safety to a company that can barely be bothered to pay lip service to the concept.

Which raises the big question: Did OpenAI’s board make the right decision when it fired Altman? Or, given that companies like Microsoft will readily hoover up OpenAI’s talented employees, who can then rush ahead on building AI with less concern for safety, did the board actually make the world a more dangerous place?

The answer may well be “yes” to both.

OpenAI’s board did exactly what it was supposed to do: Protect the company’s integrity

OpenAI is not a typical tech company. It has a highly unusual structure, and that structure is key to understanding the current shake-up.

The company was founded in 2015 as a nonprofit focused on AI research. But in 2019, hungry for the resources it would need to create AGI — artificial general intelligence, a hypothetical system that can match or exceed human abilities — OpenAI created a for-profit entity. That allowed investors to pour money into OpenAI and potentially earn a return on it, though under the rules of the new setup their profits would be capped, and anything above the cap would revert to the nonprofit. Crucially, the nonprofit board retained the power to govern the for-profit entity. That included hiring and firing power.
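The cap mechanics can be sketched in a few lines of Python. A 100x cap for early-round investors has been widely reported, but the exact terms vary by round and investor, so the numbers here are illustrative assumptions, not OpenAI’s actual deal terms.

```python
def split_proceeds(invested: float, proceeds: float,
                   cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split returns under a simple capped-profit rule.

    Assumes a single cap: the investor keeps returns up to
    cap_multiple times the original investment, and anything
    above the cap reverts to the nonprofit. Real-world terms
    are more complex; this is an illustration only.
    """
    cap = invested * cap_multiple               # most the investor can receive
    investor_share = min(proceeds, cap)         # paid out up to the cap
    nonprofit_share = max(proceeds - cap, 0.0)  # excess reverts to the nonprofit
    return investor_share, nonprofit_share

# Hypothetical example: a $10M investment, $2B in attributable proceeds
investor, nonprofit = split_proceeds(10e6, 2e9)
print(f"investor: ${investor:,.0f}, nonprofit: ${nonprofit:,.0f}")
# investor: $1,000,000,000, nonprofit: $1,000,000,000
```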

The board’s job was to make sure OpenAI stuck to its mission, as expressed in its charter, which states clearly, “Our primary fiduciary duty is to humanity.” Not to investors. Not to employees. To humanity.

The charter also states, “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.” But it also paradoxically states, “To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities.”

This reads a lot like: We’re worried about a race where everyone’s pushing to be at the front of the pack. But we’ve got to be at the front of the pack.

Each of those two impulses found an avatar in one of OpenAI’s leaders. Ilya Sutskever, an OpenAI co-founder and top AI researcher, reportedly worried that the company was moving too fast, trying to make a splash and a profit at the expense of safety. Since July, he’s co-led OpenAI’s “Superalignment” team, which aims to figure out how to manage the risk of superintelligent AI.

Altman, meanwhile, was moving full steam ahead. Under his tenure, OpenAI did more than any other company to catalyze an arms race dynamic, most notably with the launch of ChatGPT last November. More recently, Altman was reportedly fundraising with autocratic regimes in the Middle East like Saudi Arabia so he could spin up a new AI chip-making company. That in itself could raise safety concerns, since such regimes might use AI to supercharge digital surveillance or human rights abuses.

We still don’t know exactly why the OpenAI board fired Altman. The board has said that he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Sutskever, who spearheaded Altman’s ouster, initially defended the move in similar terms: “This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity,” he said. (Sutskever later flipped sides, however, and said he regretted participating in the ouster.)

“Sam Altman and Greg Brockman seem to be of the view that accelerating AI can achieve the most good for humanity. The plurality of the board, however, appears to be of a different view that the pace of advancement is too fast and could compromise safety and trust,” said Sarah Kreps, director of the Tech Policy Institute at Cornell University.

“I think that the board made the only decision they felt like they could make. They stuck to it even against enormous risk and resistance,” AI expert Gary Marcus told me. “I think they saw something from Sam that they thought they could not live with and stay true to their mission. So in their eyes, they made the right choice. What the fallout of that choice is going to be, we don’t know.”

“The problem is that the board may have won the battle but lost the war,” Kreps said.

In other words, if the board fired Altman in part over concerns that his accelerationist impulse was jeopardizing the safety part of OpenAI’s mission, it won the battle, in that it kept the company true to the mission.

But unfortunately, it may have lost the larger war — the effort to keep AI safe for humankind — because the coup may push some of OpenAI’s top talent straight into the arms of Microsoft. Which brings us to …

The AI risk landscape is probably worse now than it was before Altman’s dismissal

The coup has caused an unbelievable amount of chaos. According to futurist Amy Webb, the CEO of the Future Today Institute, OpenAI’s board failed to practice “strategic foresight” — to understand how its sudden dismissal of Altman might cause the company to implode and might reverberate across the larger AI ecosystem. “You have to think through the next-order implications of your actions,” she told me.

Altman, Brockman, and several others have already joined Microsoft. That, in itself, should raise questions about how committed these individuals really are to safety, Marcus said. And it may not bode well for the AI risk landscape.

After all, Microsoft laid off its entire AI ethics team earlier this year. When Microsoft CEO Satya Nadella teamed up with OpenAI to embed its GPT-4 into Bing search in February, he taunted competitor Google: “We made them dance.” And upon hiring Altman, Nadella tweeted that he was excited for the ousted leader to set “a new pace for innovation.”

Firing Altman means that “OpenAI can wash its hands of any responsibility for any possible future missteps on AI development but can’t stop it from happening, and will now be in a compromised position to influence that development,” Kreps said, because it has damaged trust and potentially pushed its top talent elsewhere. “The developments show just how dynamic and high-stakes the AI space has become, and that it’s impossible either to stop or contain the progress.”

Impossible may be too strong a word. But containing the progress would require changing the underlying incentive structure in the AI industry, and that has proven extremely difficult in the context of hyper-capitalist, hyper-competitive, move-fast-and-break-things Silicon Valley. Being at the cutting edge of tech development is what earns profit and prestige, but that does not lend itself to slowing down, even when slowing down is strongly warranted.

Under Altman, OpenAI tried to square this circle by arguing that researchers need to play with advanced AI to figure out how to make advanced AI safe — so accelerating development is actually helpful. That was tenuous logic even a decade ago, but it doesn’t hold up today, when we’ve got AI systems so advanced and so opaque (think: GPT-4) that many experts say we need to figure out how they work before we build black boxes that are even harder to explain.

OpenAI had also run into a more prosaic problem that made it susceptible to taking a profit-seeking path: It needed money. To run large-scale AI experiments these days, you need a ton of computing power — more than 300,000 times what you needed a decade ago — and that’s incredibly expensive. So to stay at the cutting edge, it had to create a for-profit arm and partner with Microsoft. OpenAI wasn’t alone in this: The rival company Anthropic, which former OpenAI employees spun up because they wanted to focus more on safety, started out by arguing that we need to change the underlying incentive structure in the industry, but it ended up joining forces with Amazon.
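For a sense of what that growth rate means, the implied doubling time can be back-calculated. The 300,000x figure traces to OpenAI’s 2018 “AI and Compute” analysis; the six-year window below (2012–2018) is taken from that analysis rather than from this article, so treat the result as a rough sketch.

```python
import math

def doubling_time_months(growth_factor: float, years: float) -> float:
    """Months per doubling implied by exponential growth of
    growth_factor over the given number of years."""
    doublings = math.log2(growth_factor)  # doublings packed into the period
    return years * 12 / doublings

# ~300,000x growth over roughly six years implies a doubling time of
# about four months; OpenAI's fitted trend line put it at ~3.4 months.
print(f"{doubling_time_months(300_000, 6):.1f} months per doubling")
```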

Given all this, is it even possible to build an AI company that advances the state of the art while also truly prioritizing ethics and safety?

“It’s looking like maybe not,” Marcus said.

Webb was even more direct, saying, “I don’t think it’s possible.” Instead, she emphasized that the government needs to change the underlying incentive structure within which all these companies operate. That would include a mix of carrots and sticks: positive incentives, like tax breaks for companies that prove they’re upholding the highest safety standards; and negative incentives, like regulation.

In the meantime, the AI industry is a Wild West, where each company plays by its own rules.

The OpenAI board seems to prioritize the company’s original mission: looking out for humanity’s interests above all else. The wider AI industry? Not so much. Unfortunately, that’s where OpenAI’s top talent might now find itself.
