A former OpenAI worker has publicly accused the company’s boss, Sam Altman, of lying and putting profit before safety. According to Futurism, Todor Markov, who now works at rival AI company Anthropic, filed legal papers criticizing Altman’s leadership and questioning OpenAI’s commitment to developing AI safely.
The Non-Disparagement Agreement Controversy
At the heart of Markov's complaint are what are called "non-disparagement agreements." These are contracts that stop people from saying negative things about a company or person. Think of them as a "promise not to criticize" that companies sometimes make employees sign when they leave.
Markov claims Altman lied about these agreements. Last year, news came out that OpenAI was making departing workers sign lifetime non-disparagement clauses. If workers didn’t sign, they could lose millions of dollars in company shares they had earned.
When this news broke, Altman publicly said on social media platform X that he didn’t know about these agreements. Markov’s legal filing challenges this claim, suggesting Altman was fully aware of these practices.
Serious Allegations Against OpenAI’s Leadership
Markov’s criticisms go beyond just the non-disparagement agreements. His legal filing (called an “amicus brief,” which is a document filed by someone who is not directly involved in a case but wants to offer information) makes several serious claims:
- OpenAI’s charter about safe AI development is just a “smokescreen” to attract talented workers
- The company has moved away from its original mission to develop AI that benefits everyone
- OpenAI’s shift to making money has come at the expense of safety concerns
- The company isn’t taking the risks of powerful AI seriously enough
Markov isn't alone in his concerns. His filing is part of broader criticism from 11 other former OpenAI employees who share similar worries.
OpenAI’s Shift from Nonprofit to For-Profit
A big part of the controversy is how OpenAI has changed its business structure. The company started as a nonprofit organization in 2015, which means it was set up to benefit the public, not to make money for owners or investors.
However, in 2019 OpenAI began shifting to a "for-profit model" – a business structure focused on making money. The company plans to become a "public benefit corporation" by 2025, which is a special type of company that aims to make profits while also doing good for society.
This change has allowed OpenAI to raise huge amounts of money – $6.6 billion in recent funding, with the company now valued at $157 billion (about ₹13 lakh crore). Critics, including Tesla boss Elon Musk and former OpenAI workers, worry that this focus on money is dangerous when developing artificial general intelligence (AGI) – a future type of AI that would be as smart as, or smarter than, humans across all tasks.
Timing of the Non-Disparagement Agreement Revelations
The controversy about OpenAI’s non-disparagement agreements first became public on May 17, 2024, through an article published by Vox reporter Kelsey Piper. The article shared leaked emails showing how OpenAI pressured departing employees to sign these lifetime agreements.
The timing was especially troubling because the revelations came after several prominent safety researchers had quit OpenAI. When the story broke, Altman apologized publicly on X (formerly Twitter) and said he hadn't known about the restrictive clauses.
Broader Concerns About AI Safety
At its core, Markov's criticism concerns how OpenAI is approaching the development of increasingly powerful AI systems. He and other critics worry that the company's drive for profit could lead it to rush the development of artificial general intelligence without proper safety measures.
This case highlights growing tensions in the AI industry over:
- Moving quickly to develop more powerful AI systems
- Taking time to ensure these systems are safe and beneficial
- Being transparent about how AI companies operate
- Allowing former employees to speak freely about concerns
The legal challenge has prompted calls for more oversight of AI companies, especially as they develop technologies that could have significant impacts on society. As AI systems become more powerful, the debate about how to develop them responsibly will likely grow more urgent.
For everyday people, this controversy matters because it raises questions about who controls the development of AI technology that will increasingly affect our lives, and whether these companies are prioritizing safety and public benefit over profit.