The federal courthouse in Oakland, California, recently became the epicenter of a high-stakes legal drama as Elon Musk faced off against OpenAI, the AI powerhouse he co-founded. In the trial’s opening week, Musk, impeccably dressed in a black suit and tie, testified that OpenAI CEO Sam Altman and President Greg Brockman had allegedly manipulated him into funding their venture. His testimony was a whirlwind of dramatic accusations, dire warnings about AI’s future, and surprising admissions.
Musk warned the jury that artificial intelligence has the potential to “destroy us all,” painting a grim picture of a “Terminator situation.” Yet, he also confessed that his own AI company, xAI, creator of the Grok chatbot, relies on OpenAI’s models for training. The courtroom buzzed with activity, packed with legal teams, journalists, and a handful of concerned OpenAI employees, while protesters outside urged boycotts of ChatGPT and Tesla.
Musk’s Grievances and the Core Accusation
Musk, appearing calm and even sharing occasional quips, expressed deep remorse, stating, “I was a fool who provided them free funding to create a startup.” He claimed that when he co-founded OpenAI in 2015 with Altman and Brockman, his intention was to support a nonprofit dedicated to developing AI for humanity’s benefit, not to enrich executives. He asserted that his $38 million in “essentially free funding” laid the groundwork for what has become an $800 billion company.
At the heart of Musk’s lawsuit is a demand for the court to remove Altman and Brockman from their leadership roles and to dismantle the corporate restructuring that enabled OpenAI’s for-profit subsidiary. The trial’s outcome could significantly impact OpenAI’s anticipated IPO, which is eyeing a staggering $1 trillion valuation. Meanwhile, Musk’s xAI is reportedly preparing to go public as part of SpaceX, with a target valuation of $1.75 trillion, potentially as early as June.
The central question of the trial revolves around Musk’s motivations for suing OpenAI. He maintains he is striving to uphold OpenAI’s original mission of safe AI development by reinstating its nonprofit structure. However, OpenAI’s lawyer, William Savitt, who once represented Musk and Tesla, argued that Musk was “never committed to OpenAI being a nonprofit” and is instead seeking to undermine a competitor.
The Battle Over AI Safety and Intent
During his direct examination, Musk portrayed himself as a long-standing champion of AI safety, explaining he co-founded OpenAI to act as a “counterbalance to Google.” He recounted a conversation with Google co-founder Larry Page, who, when asked about AI wiping out humanity, allegedly responded, “That will be fine as long as artificial intelligence survives.” This exchange, Musk suggested, solidified his conviction that a truly safety-focused AI entity was vital.
Savitt, however, challenged Musk’s portrayal as a “paladin of safety and regulation,” citing xAI’s lawsuit against the state of Colorado over an AI law designed to prevent algorithmic discrimination. This sparked a heated exchange between the legal teams, each vying to establish their client as the true guardian of AI safety. Judge Yvonne Gonzalez Rogers intervened, sternly noting that despite his warnings, Musk was creating a company “in the exact space,” implying concerns about placing “the future of humanity in Mr. Musk’s hands.”
Savitt further pressed Musk on his commitment to OpenAI remaining a nonprofit, suggesting he waited too long to file the lawsuit, past the statute of limitations. Musk responded by outlining “three phases” of his evolving views on OpenAI: initial enthusiastic support, followed by a period of losing confidence in their truthfulness, and finally, a conviction that they were “looting the nonprofit.”
Discussions in 2017 among OpenAI co-founders did include creating a for-profit subsidiary to secure capital for developing artificial general intelligence (AGI). Musk himself sought a majority interest in this subsidiary, control over the board, and even proposed Tesla acquiring OpenAI before he ultimately left in 2018. He clarified his stance, stating, “I was not opposed to there being a small for-profit that provides funding to the nonprofit, as long as the tail didn’t wag the dog.”
The “Bait and Switch” and Distillation Revelation
Musk testified that his trust in Altman finally eroded in late 2022, a turning point he attributed to learning about Microsoft’s substantial $10 billion investment in OpenAI. He texted Altman, calling it a “bait and switch,” arguing that such a massive investment signaled Microsoft’s expectation of “a very big financial return,” which he felt contradicted OpenAI’s foundational nonprofit mission.
Savitt, in his cross-examination, argued that Musk’s lawsuit was ultimately an attempt to hobble a competitor to his sprawling tech empire, which includes Tesla, Neuralink, and xAI, founded in 2023. Savitt presented emails, including one from 2017 where Musk, after hiring OpenAI co-founder Andrej Karpathy for Tesla, wrote, “The OpenAI guys are gonna want to kill me. But it had to be done.” Musk grew flustered, claiming Karpathy had already decided to leave, asserting, “I believe it’s a free world.”
Another email from 2017, where Musk suggested Neuralink could “hire independently or directly from OpenAI,” further underscored Savitt’s point. Musk again retorted, “It’s a free country. I can’t restrict their ability to hire people from other companies.” Savitt highlighted that Musk’s own socially beneficial companies, like Tesla and SpaceX, are for-profit, as is xAI, which operates with a closed-source model.
However, the most significant revelation came when Musk, under relentless questioning, admitted that xAI “partly” distills OpenAI’s models. This admission, met with audible gasps in the courtroom, refers to a technique where a smaller AI model learns to mimic the behavior of larger, more powerful models. While Musk defended it as “standard practice to use other AIs to validate your AI,” this practice is a contentious issue in the industry, with OpenAI and others actively pushing back against competitors using their models in this manner.
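For readers unfamiliar with the technique, distillation in its generic form works by training a small “student” model to reproduce the softened output probabilities of a larger “teacher” model, rather than learning only from hard labels. The toy NumPy sketch below illustrates that idea in the abstract; it says nothing about how xAI or OpenAI actually operate, and all models, shapes, and hyperparameters here are invented for illustration.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens the distribution."""
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # toy inputs (hypothetical features)
W_teacher = rng.normal(size=(8, 4))  # stand-in for a large, already-trained model

# Teacher produces "soft targets": full probability distributions, not just labels
teacher_probs = softmax(X @ W_teacher, temperature=2.0)

# Train a linear student by gradient descent to match the teacher's distribution
W_student = np.zeros((8, 4))
for _ in range(500):
    student_probs = softmax(X @ W_student, temperature=2.0)
    # Gradient of cross-entropy between teacher and student distributions
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= 0.5 * grad

# KL divergence from teacher to student shrinks as the student learns to mimic it
student_probs = softmax(X @ W_student, temperature=2.0)
kl = np.mean(np.sum(teacher_probs * np.log(teacher_probs / student_probs), axis=-1))
print(f"mean KL(teacher || student) after training: {kl:.4f}")
```

The key design point is that the student never sees ground-truth labels: its only training signal is the teacher's output distribution, which is why providers treat large-scale querying of their models by competitors as a distillation concern.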
The trial is set to continue with further testimony, including that of computer scientist Stuart Russell on AI safety, and Greg Brockman, who has been diligently taking notes throughout Musk’s testimony. This landmark case continues to unfold, promising further insights into the complex world of AI development, corporate governance, and the clashing visions of its most prominent figures.
Source: MIT Tech Review – AI