Elon Musk’s AI Safety Lawsuit: Expert Witness Testifies OpenAI Lost Its Way



By admin | May 04, 2026 | 3 min read

When should we start taking AI alarmists seriously? That question lies at the heart of Elon Musk’s legal effort to block OpenAI from transitioning into a for-profit enterprise. His legal team contends that OpenAI was originally established as a charitable organization dedicated to AI safety, but that it has since strayed from that mission in pursuit of financial gain. To support this claim, they point to old emails and statements from the organization’s founders, which emphasized the need for a public-minded alternative to Google DeepMind.

On the stand today, Musk's legal team called its sole expert witness: Peter Russell, a University of California, Berkeley computer science professor with decades of AI research experience. His role was to provide background on AI and to establish that the technology poses risks serious enough to warrant concern. Russell co-signed the March 2023 open letter calling for a six-month pause on training the most powerful AI systems. Notably, Musk signed that same letter even as he was preparing to launch xAI, his own for-profit AI lab.

During his testimony before Judge Yvonne Gonzalez Rogers and the jury, Russell outlined a range of risks linked to AI development, from cybersecurity threats to issues of misalignment and the winner-take-all dynamics surrounding the creation of Artificial General Intelligence (AGI). He ultimately argued that there is a fundamental tension between the pursuit of AGI and ensuring safety. However, his broader concerns about the existential dangers of unconstrained AI were not fully aired in open court, as objections from OpenAI's attorneys led the judge to limit his testimony.

Russell has long criticized the arms-race mentality fostered by frontier labs worldwide competing to reach AGI first, and he has called for tighter government regulation of the field. On cross-examination, OpenAI's attorneys worked to show that Russell had not directly assessed the organization's corporate structure or its specific safety protocols. Still, the question this reporter, and likely the judge and jurors as well, will be weighing is how much weight to give the claimed link between corporate greed and AI safety risk.

Virtually every one of OpenAI's founders has issued stark warnings about the risks of AI, even as they have also highlighted its potential benefits, pushed to build AI as rapidly as possible, and devised plans for for-profit AI ventures they would control. From an outside perspective, a key issue here is the realization, soon after OpenAI's founding, that the organization simply needed more computing power to succeed, and that funding on that scale could only come from for-profit investors. The founding team feared AGI falling into the hands of a single organization; that fear drove them to seek the capital that ultimately fractured the team, fueled the arms race we witness today, and led to this lawsuit.

The same dynamic is now playing out on a national scale: Senator Bernie Sanders' push for legislation imposing a moratorium on data center construction draws on AI fears voiced by Musk, Sam Altman, Geoffrey Hinton, and others. Both sides of this case are asking the court to take parts of Altman's and Musk's arguments seriously while discounting the parts that are less convenient for their legal positions.
