OpenAI Legal Battle: Former Employee Testifies For-Profit Shift Undermined AI Safety Mission
By admin | May 07, 2026
Elon Musk’s legal challenge to dismantle OpenAI may ultimately depend on whether its for-profit subsidiary helps or hinders the lab’s original mission: ensuring that artificial general intelligence benefits all of humanity. On Thursday, a federal court in Oakland heard testimony from a former employee and a former board member who argued that the company’s push to commercialize AI products weakened its dedication to safety. Rosie Campbell joined OpenAI’s AGI readiness team in 2021 and left in 2024 after her team was dissolved. Another safety-focused group, the Superalignment team, was shut down around the same time. “When I joined, it was very research-oriented, and people frequently discussed AGI and safety issues,” she testified. “Over time, it became more like a product-driven organization.”
During cross-examination, Campbell acknowledged that building AGI likely required substantial funding, but she maintained that creating a super-intelligent computer model without proper safety measures would contradict the mission of the organization she initially joined. She pointed to an incident where Microsoft deployed a version of OpenAI’s GPT-4 model in India via its Bing search engine before the model had been reviewed by the company’s Deployment Safety Board (DSB). While the model itself posed no major risk, she said, the company needed “to set strong precedents as the technology becomes more powerful. We need solid safety processes that we know are being followed reliably.”
OpenAI’s attorneys also got Campbell to admit that, in her “speculative opinion,” OpenAI’s safety approach is better than that of xAI, the AI company Musk founded, which was acquired by SpaceX earlier this year. OpenAI publishes evaluations of its models and shares a safety framework publicly, but the company declined to comment on its current approach to AGI alignment. Dylan Scandinaro, now OpenAI’s head of Preparedness, was hired from Anthropic in February. OpenAI CEO Sam Altman said the hire would help him “sleep better tonight.”
The deployment of GPT-4 in India, however, was one of the warning signs that led OpenAI’s nonprofit board to briefly fire Altman in 2023. That incident occurred after employees, including then-chief scientist Ilya Sutskever and then-CTO Mira Murati, complained about Altman’s conflict-averse management style. Tasha McCauley, a board member at the time, testified that Altman was not forthcoming enough with the board for its unusual structure to work effectively, and described a well-documented pattern of him misleading it. Notably, Altman falsely told another board member that McCauley wanted to remove Helen Toner, a third board member who had published a white paper containing implied criticism of OpenAI’s safety policy. Altman also failed to inform the board of the decision to launch ChatGPT publicly, and members worried about his lack of disclosure regarding potential conflicts of interest. “We are a nonprofit board, and our mandate was to oversee the for-profit beneath us,” McCauley told the court. “Our primary way of doing that was being called into question. We had very little confidence that the information we were receiving allowed us to make informed decisions.”
However, the decision to oust Altman coincided with a tender offer to company employees. McCauley said that when OpenAI’s staff began siding with Altman and Microsoft worked to restore the status quo, the board ultimately reversed course, with the members opposed to Altman stepping down. The nonprofit board’s apparent inability to influence the for-profit organization lends direct support to Musk’s argument that OpenAI’s transformation from a research lab into one of the world’s largest private companies broke the founders’ implicit agreement. David Schizer, a former dean of Columbia Law School hired by Musk’s team as an expert witness, echoed McCauley’s concerns. “OpenAI has emphasized that a key part of its mission is safety and that it will prioritize safety over profits,” Schizer said. “Part of that is taking safety rules seriously—if something needs to undergo safety review, it must happen. The process issue matters.”
With AI already deeply integrated into for-profit companies, the issue extends far beyond a single lab. McCauley said the failures of internal governance at OpenAI should encourage stronger government regulation of advanced AI. “[If] it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal.”