
A nine-person jury was selected in late April for the trial pitting Elon Musk against OpenAI chief executive Sam Altman and Microsoft. The stage has been set for the tech titans to do battle over OpenAI’s transition from a supposed non-profit to a behemoth worth $US852 million.

In the matter of Elon Musk, et al. v. Samuel Altman, et al., Musk is essentially demanding that Altman and his company compensate him for funds he invested in the company in 2015. The trial is expected to reveal the intentions of the major AI architects behind hardware and software that is becoming integral to international business and daily personal life. It also raises questions about whether investors can retrospectively claim a share of profits on the basis that the company they backed was initially a non-profit.

Musk is asking for $US150 billion in damages, plus the removal of Altman and co-founder Greg Brockman from leadership roles. He is also seeking the return of OpenAI to a non-profit structure. On 24 April, the judge dismissed Musk’s fraud claims against Altman and OpenAI but allowed his accusations of breach of charitable trust and unjust enrichment to proceed to trial.

The result of the trial, initiated in 2024, may have significant consequences for investors and other AI companies, not least because of OpenAI’s impending initial public offering (IPO). Valued at approximately $US1 trillion, OpenAI is seeking to go public on the US stock market later in 2026. Any change to the company’s corporate structure or leadership would threaten that IPO and the company’s valuation.

If the nine-person jury finds that OpenAI is liable, Judge Yvonne Gonzalez Rogers will have the final say on any remedies. The trial is expected to last around three weeks.

Thousands of pages of internal documents have been revealed in court since March 2024, when Elon Musk sued OpenAI, its chief executive Sam Altman, and its co-founder and president, Greg Brockman. The duelling tech titans have battled over the nature of the company and the basis for Musk’s original investment in OpenAI.

Judge Yvonne Gonzalez Rogers is overseeing proceedings in a Californian federal court. Musk claims that OpenAI, Altman and Brockman presented OpenAI to investors as a non-profit. He is seeking up to $US134 billion in damages, though he no longer seeks the money personally, instead asking that “all ill-gotten gains” go to the OpenAI charity.

In March 2024, Musk’s lawyers filed a suit claiming Musk was approached in 2015 by Altman and Brockman, resulting in an agreement as co-founders to form a non-profit technology laboratory that would develop artificial general intelligence for the “benefit of humanity.”

In 2018, Musk stepped down from the OpenAI board, and six years later, filed the suit against his former allies. The suit, as reported by CNBC in 2024, alleges that OpenAI has lost its supposed initial aim of benefiting humanity, and instead operates as “a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.”

Microsoft reportedly invested $US13 billion in OpenAI in April 2023; OpenAI is only one of the AI companies that Microsoft has invested in or partnered with.

As a defendant in the lawsuit, Microsoft is – amongst other claims – accused by Musk of aiding and abetting OpenAI, Altman and Brockman’s alleged misconduct.

The filing said: “Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity.”

During the March proceedings, as reported by CNBC at the time, Musk told the court that his motivation in starting OpenAI was to serve as a counterweight to rival tech company Google.

“I could have started it as a for profit and I chose not to,” Musk told the court.

In 2024, Musk’s lead trial lawyer, Steven Molo, proposed the key questions to be resolved by the jury during the trial (as court reporters shared via CNBC):

  • “Did OpenAI have a charitable mission to operate as a non-profit, to develop safe AI, open source, for the good of humanity?”
  • “Did Altman and Brockman violate that mission through what they’ve done with the for-profit business?”
  • “Did Microsoft know about the charitable mission and substantially assist Altman and Brockman in breaching it?”

The same court reporters shared that William Savitt, OpenAI’s lead counsel, countered that Musk “never expressed the view that OpenAI had to remain purely non-profit, or even that he thought it should be.”

Savitt added that Musk only “supported a for-profit, so long as he was in control.”

Lesley Sutton, Partner at Gilbert + Tobin, says, “Whilst this case is ostensibly about the governance and control of OpenAI as an organisation, it speaks to something much broader: how are decisions made that shape the direction of AI development itself? At its core is a tension – how developers balance public interest imperatives like safety, transparency, accountability and fairness against the commercial pressures driving innovation. AI development isn’t just technical—it’s fundamentally shaped by human judgment calls, trade-offs, and values.”


From Sutton’s perspective, Musk argues that he helped fund the early development of OpenAI on the understanding that AI should be developed safely and shared broadly.

“But as OpenAI evolved—especially moving toward capped-profit models and closer partnerships with for-profit companies —Musk became a vocal critic, arguing that it drifted from its original mission and concentrated too much power. OpenAI, on the other hand, has argued that scaling AI safely requires enormous resources, and that partnerships and commercialisation are necessary to compete and to fund alignment research.”

That tension highlights something unavoidable for developers and organisations: there is no “neutral” path in AI development, Sutton opines. “Every major decision—open vs. closed models, speed vs. safety, centralisation vs. decentralisation—comes with consequences.”

She says the key takeaways from this trial so far include:

  • Trade-offs are inevitable, not accidental.
    You can’t maximise openness, safety, speed, and competitiveness all at once. Choosing one direction means compromising another.
  • Intentions don’t lock in outcomes.
    Even if a project starts with a clear mission (e.g., “benefit humanity”), changing technical realities and competitive pressures can force shifts. Developers need to revisit and sometimes redefine their principles over time.
  • Governance matters as much as code.
    This case isn’t about a specific algorithm—it’s about who controls powerful systems and how decisions are made.
  • Personal philosophies shape technical ecosystems.
    Musk tends to emphasise existential risk and decentralisation, while OpenAI emphasises controlled deployment and iterative safety. Adopting either of these positions influences what actually gets built and released.
  • “Doing nothing” is also a decision.
    Slowing down development to reduce risk can mean falling behind actors who don’t share those concerns. Moving fast can increase risk. Developers are always choosing a direction, even when they think they’re being cautious.

Sutton says, “In short, the conflict shows that AI developers and leaders aren’t just engineers—they’re decision-makers shaping how an AI system enters the world.”

Sides trade arguments

The first day of the trial, on 28 April this year, heard dramatic opening statements that echoed the 2024 allegations and accusations. The following day, Musk told the court, as reported by The Guardian, that Altman “stole a charity” and would pose a danger to humanity through the application of AI. Musk was chided by the judge on multiple occasions for his lengthy responses to yes-or-no questions and for accusing OpenAI’s counsel of seeking to trick and mislead him on the stand.

Savitt also suggested Musk was being disingenuous in presenting a strictly non-profit attitude towards AI business. A year after OpenAI released ChatGPT, Musk launched the for-profit xAI, a competing AI company that made the chatbot Grok.

OpenAI’s Altman and Brockman have claimed that what Musk describes as his $US38 million investment in the non-profit was in fact a tax-deductible donation. On that basis, they argue, Musk is entitled to no say over the operation or direction of OpenAI, which they say is still overseen by the original non-profit.

Microsoft, which has a 27.5 per cent stake in OpenAI, will receive a share of OpenAI’s revenue through 2030. A recent corporate restructuring gives OpenAI greater autonomy, diminishes Microsoft’s IP exclusivity over OpenAI products, and opens the door to OpenAI partnering with other tech businesses, such as Amazon and Google.

Barron’s has reported that Nvidia plans to invest $US100 billion in OpenAI. Amazon, too, plans to invest $US50 billion in OpenAI, with $US15 billion up front and $US35 billion expected over the coming months.

Amongst the thousands of internal documents revealed are emails between Musk and his co-founders indicating that OpenAI’s non-profit, charitable status was never fully agreed upon. Musk has also been accused of poaching staff from OpenAI for his electric vehicle company, Tesla.

OpenAI counsel Savitt told the court on 30 April that in his role on OpenAI’s board until the end of February 2018, Musk was legally obligated to act in the best interest of the company. Nonetheless, Musk was allegedly poaching employees for Tesla. As reported in The Guardian, an email from June 2017 from Musk to Jim Keller, the vice-president of autopilot at Tesla, related to the recruitment of a lead engineer from OpenAI to Tesla. Musk told Keller: “the OpenAI guys are going to want to kill me.”

In 2019, a year after Musk’s departure, OpenAI created a for-profit subsidiary. In 2025, the company transitioned into a for-profit public benefit corporation overseen by the OpenAI Foundation.

Musk’s repeated statements regarding the humanity-endangering nature of AI have been dramatic, including likening it to a “Terminator”.

“I have extreme concerns over AI,” Musk told the court.

On 29 April, family members of victims of one of Canada’s deadliest mass shootings, in which nine children died, sued OpenAI and CEO Sam Altman in a Californian federal court. Seven lawsuits allege that OpenAI knew eight months before the attack that the shooter was planning it on ChatGPT but failed to notify police. A further allegation is that the company did not warn authorities because drawing attention to the proliferation of violence-related conversations on ChatGPT would threaten its impending IPO.

At the same time, The Wall Street Journal reported that OpenAI is missing its own revenue targets, and that finance chief Sarah Friar is worried about meeting its future spending contracts if sales don’t improve.

Friar has expressed reservations about OpenAI’s plans to go public, slated for the end of this year, and told company executives and board directors that the group may not be ready to meet the reporting standards demanded of a public company, the paper reported.

On 1 May, Savitt questioned Musk on whether he had read a term sheet that Altman forwarded on 31 August 2017, relating to OpenAI’s shift from a non-profit to a for-profit overseen by a non-profit.

Musk responded, “My testimony is I didn’t read the fine print, just the headline.”