
A coalition of former OpenAI employees, top academics, and Nobel laureates is calling on state officials to stop the ChatGPT maker from converting to a for-profit model. They warn that doing so could abandon the company’s original mission to protect humanity from dangerous AI.
The pushback comes in the form of an open letter sent this week to the attorneys general of California and Delaware. The group argues that OpenAI’s planned restructuring, which would hand control to a for-profit entity, risks sidelining critical safety measures in favor of corporate interests.
In the letter, the signatories — 10 ex-OpenAI workers, three Nobel Prize winners, and several AI pioneers — asked the states’ top law enforcement officers to block the proposed restructuring.
Who controls AI when it outsmarts us?
Founded in 2015 with a mission to safely develop artificial general intelligence (AGI) for public benefit, OpenAI now faces accusations of straying from those principles.
“Ultimately, I’m worried about who owns and controls this technology once it’s created,” said Page Hedley, a former OpenAI policy and ethics adviser, in an interview with The Associated Press. Hedley, one of the letter’s signatories, fears profit motives could override safeguards as AI grows more powerful.
OpenAI insists the restructuring will strengthen its nonprofit arm while allowing it to compete with rivals like Anthropic and Elon Musk’s xAI. “Any changes to our existing structure would be in service of ensuring the broader public can benefit from AI,” the company said in a statement.
But critics aren’t convinced. Under the proposed model, OpenAI’s for-profit wing — a public benefit corporation (PBC) — would take full operational control, leaving the nonprofit to manage charitable projects. Signatories warn that this would dismantle key protections, like an independent board and capped investor returns.
Duty to humanity vs. investor demands
The letter highlights concerns that OpenAI is rushing products to outpace competitors, cutting corners on safety. “The costs of those decisions will continue to go up as the technology becomes more powerful,” Hedley told AP.
Former engineer Anish Tondwalkar pointed to OpenAI’s “stop-and-assist clause,” which requires the company to help rivals if they near AGI breakthroughs. “If OpenAI is allowed to become a for-profit, these safeguards can vanish overnight,” he said in a statement as reported by AP.
Nisan Stiennon, an ex-OpenAI engineer, put it bluntly: “OpenAI may one day build technology that could get us all killed. It is to OpenAI’s credit that it’s controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control.”
“Do not allow the restructuring to proceed as planned,” the letter urges California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings. “We urge you to protect OpenAI’s charitable purpose by preserving the core governance safeguards that OpenAI and Mr. Altman have repeatedly claimed are important to its mission.”
Regulators under pressure
The appeal puts California AG Rob Bonta and Delaware AG Kathy Jennings in a tight spot. Jennings previously said she’d review OpenAI’s plans to “ensure the public’s interests are protected,” while Bonta’s office has declined to comment, citing an ongoing investigation.
The backlash isn’t new — Elon Musk, a co-founder who left in 2018, is suing OpenAI for allegedly betraying its mission. But this challenge comes from within, with former staff and experts arguing that profit-driven AI could have irreversible consequences.
As OpenAI races toward a 2025 restructuring deadline — reportedly tied to billions in funding — the question remains: Can it balance competition with its pledge to keep AI safe? For now, regulators hold the cards.