Current and former OpenAI employees warn of AI's 'serious risk' and lack of oversight

OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024. Jason Redmond | AFP | Getty Images

A group of current and former OpenAI employees published an open letter Tuesday describing concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

"AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this," the employees wrote in the open letter.

OpenAI, Google, Microsoft, Meta and other companies are at the helm of a generative AI arms race — a market that is predicted to top $1 trillion in revenue within a decade — as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.

The current and former employees wrote that AI companies have "substantial non-public information" about what their technology can do, the extent of the safety measures they've put in place and the risk levels that technology poses for different types of harm.

"We also understand the serious risks posed by these technologies," they wrote, adding that the companies "currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."

The letter also details the current and former employees' concerns about insufficient whistleblower protections for the AI industry, stating that without effective government oversight, employees are in a relatively unique position to hold companies accountable.

"Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues," the signatories wrote. "Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated."

The letter asks AI companies to commit to not entering or enforcing non-disparagement agreements; to create anonymous processes for current and former employees to voice concerns to a company's board, regulators and others; to support a culture of open criticism; and to not retaliate against public whistleblowing if internal reporting processes fail.

Four anonymous OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler, signed the letter. Signatories also included Ramana Kumar, who formerly worked at Google DeepMind, and Neel Nanda, who currently works at Google DeepMind and formerly worked at Anthropic. Three famed computer scientists known for advancing the artificial intelligence field also endorsed the letter: Geoffrey Hinton, Yoshua Bengio and Stuart Russell.

"We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world," an OpenAI spokesperson told CNBC, adding that the company has an anonymous integrity hotline, as well as a Safety and Security Committee led by members of the board and OpenAI leaders.

Microsoft declined to comment.

Mounting controversy for OpenAI

Last month, OpenAI backtracked on a controversial decision to make former employees choose between signing a non-disparagement agreement that would never expire, or keeping their vested equity in the company. The internal memo, viewed by CNBC, was sent to former employees and shared with current ones.

The memo, addressed to each former employee, said that at the time of the person's departure from OpenAI, "you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity]."

"We're incredibly sorry that we're only changing this language now; it doesn't reflect our values or the company we want to be," an OpenAI spokesperson told CNBC at the time.

Tuesday's open letter also follows OpenAI's decision last month to disband its team focused on the long-term risks of AI just one year after the Microsoft-backed startup announced the group, a person familiar with the situation confirmed to CNBC at the time.

The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.

The team's disbandment followed announcements last month by its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, that they were departing the startup. Leike wrote in a post on X that OpenAI's "safety culture and processes have taken a backseat to shiny products."

Ilya Sutskever, Russian Israeli-Canadian computer scientist and co-founder and chief scientist of OpenAI, speaks at Tel Aviv University in Tel Aviv on June 5, 2023. Jack Guez | AFP | Getty Images

CEO Sam Altman said on X he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement attributed to himself and Altman on X, asserting that the company has "raised awareness of the risks and opportunities of AGI so that the world can better prepare for it."

"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Leike wrote he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," he wrote. "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done."

Leike added that OpenAI must become a "safety-first AGI company."

"Building smarter-than-human machines is an inherently dangerous endeavor," he wrote. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products."

The high-profile departures come months after OpenAI went through a leadership crisis involving Altman.

In November, OpenAI's board ousted Altman, saying in a statement that Altman had not been "consistently candid in his communications with the board."

The issue seemed to grow more complex each day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.

Altman's ouster prompted resignations or threats of resignations, including an open letter signed by virtually all of OpenAI's employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever stayed on staff at the time but no longer in his capacity as a board member. Adam D'Angelo, who had also voted to oust Altman, remained on the board.

American actress Scarlett Johansson at a photocall for the film "Asteroid City" at the Cannes Film Festival in Cannes, France, on May 24, 2023. Mondadori Portfolio | Getty Images

Meanwhile, last month, OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface and audio capabilities, the company's latest effort to expand the use of its popular chatbot. One week after OpenAI debuted the range of audio voices, the company announced it would pull one of the viral chatbot's voices named "Sky."

"Sky" drew controversy for resembling the voice of actress Scarlett Johansson in "Her," a movie about artificial intelligence. The Hollywood star has alleged that OpenAI copied her voice even though she had declined to let the company use it.
