In December 2015, Sam Altman and Elon Musk co-founded OpenAI as a nonprofit. The mission: build artificial general intelligence that
benefits all of humanity. They pledged a billion dollars. They promised open research. Of that billion, $133 million actually showed up.
Musk left the board in 2018 (he says conflict with Tesla; OpenAI says he wanted to be CEO). By 2019, the nonprofit had become a "capped-profit" company with a billion-dollar Microsoft investment. Then ChatGPT launched in November 2022. A million users in five days. A hundred million in two months. Microsoft put in another $10 billion. The company that started in Greg Brockman's living room was worth $29 billion.
November 2023: the board fired Altman with five minutes' notice over a Google Meet call. A 70-page internal dossier accused him of a pattern of lying. The ouster lasted four days. 738 of 770 employees threatened to quit. Even Ilya Sutskever, the chief scientist who had voted to fire him, signed the letter and apologized. Altman came back. The board was replaced.
Then the safety people started leaving. Sutskever resigned. Jan Leike, who co-led the superalignment team, resigned. That team had been promised 20% of computing resources. Employees said they got 1-2%. By year's end, half of OpenAI's safety researchers had walked out. Co-founder John Schulman left for Anthropic (the AI company founded in 2021 by former OpenAI researchers over safety disagreements). CTO Mira Murati left. Departing employees had been forced to sign lifelong non-disparagement agreements or forfeit vested equity. Altman said he didn't know. Leaked documents said otherwise.
The valuations kept climbing anyway. $157 billion. $300 billion. $500 billion. $730 billion. This month: $852 billion. Revenue hit $12 billion annualized in 2025.
February 2026: OpenAI signed a deal to put its models on the Pentagon's classified network. The day before, the Trump administration had banned Anthropic after it refused to authorize AI for mass surveillance. OpenAI stepped in within 24 hours. About 700,000 users deleted ChatGPT. The "QuitGPT" movement started on Reddit and didn't stop.
This week, three things converged.
The trial. Musk v. Altman opened Monday in Oakland. Musk is seeking $150 billion in damages and the removal of Altman and Brockman. He took the stand Tuesday and spent, by The Verge's account, "a weird amount of time talking about himself." He recounted his entire biography. He claimed he came up with the idea and the name, recruited everyone, and provided all the funding.
Cross-examination Wednesday was "absolutely miserable." Musk refused yes-or-no questions, forgot testimony he'd given that morning, and scolded the defense lawyer. Jury members were visibly uncomfortable. The judge got the biggest laugh of the day when she cut him off mid-answer. Reporter Elizabeth Lopatto wrote: "About five hours in, I typed: 'I have never been more sympathetic to Sam Altman in my life.'"
The downloads. Sensor Tower released data Tuesday showing ChatGPT uninstalls are up 132% year-over-year in April. In March, after the Pentagon deal, they
spiked 413%. New downloads are up only 14%. Monthly active user growth has dropped from 168% in January to 78% now. OpenAI's CFO Sarah Friar has reportedly raised concerns about the IPO timeline. The Wall Street Journal reports the company missed its own internal targets for both new users and revenue.
Meanwhile, Claude downloads are up 11x over the same period. Anthropic has received multiple preemptive offers to raise $50 billion at a valuation of $900 billion. The company founded by the researchers who left OpenAI over safety concerns is now worth more than the company they left.
The lawsuit. On February 10, an 18-year-old named Jesse Van Rootselaar killed her mother, her 11-year-old half-brother, and six people at
Tumbler Ridge Secondary School in British Columbia before taking her own life. Twenty-seven more were injured. A 12-year-old named Maya Gebala was shot in the head, neck, and cheek. Permanent brain damage. Canada's deadliest school shooting since 1989.
Last summer, Van Rootselaar had multi-day ChatGPT conversations describing gun violence scenarios. OpenAI's system flagged the content. A 12-person safety team reviewed it and determined it indicated "an imminent risk of serious harm to others." They recommended alerting the RCMP.
Executive leadership said no.
The lawsuit filed Tuesday by seven families alleges the decision was made to protect OpenAI's valuation. The filing states: "They did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk."
OpenAI deactivated the account in June 2025. Van Rootselaar made a new one under the same name and kept using ChatGPT. The lawsuit alleges "the safeguards did not exist." Eight months later, nine people were dead.
Altman published an apology last week: "I am deeply sorry that we did not alert law enforcement." Attorney Jay Edelson expects more than two dozen legal actions. Maya Gebala's family alone is seeking over a billion dollars. The sycophantic GPT-4o model (the one OpenAI had to pull after nine lawsuits alleged it encouraged teenagers to end their lives) is cited as evidence of defective design.
A nonprofit founded to save humanity. A safety team overruled to protect a valuation. A shooter who got flagged and then got a second account. That's the arc.