Another OpenAI researcher has quit, raising fresh concerns about the company’s commitment to AI safety. Steven Adler, a former AI safety researcher at OpenAI, announced his departure from the company in a post on X, expressing deep concerns over the accelerating race toward Artificial General Intelligence (AGI). Adler described the competition as a “very risky gamble” with significant consequences.
“No lab has a solution to AI alignment today,” he stated. “The faster we race, the less likely anyone finds one in time.” AI alignment refers to ensuring AI systems act in accordance with human values and objectives.
Adler worked on AI safety programs at OpenAI, contributing to research and product safety measures. In his post, he reflected on his time at the company, calling it a “wild ride” but admitted to feeling “pretty terrified” by the rapid pace of AI development.
Concerns about AGI extend beyond Adler. Stuart Russell, a computer science professor at UC Berkeley, compared the AGI race to heading toward “the edge of a cliff.” He warned that even AI industry leaders acknowledge the risks of creating intelligence beyond human control.
Global AI Race Heats Up
Adler’s exit intensifies the debate over the risks of AGI development. His statements come amid growing competition between the U.S. and China in AI development. Reports suggest that Chinese firm DeepSeek has built a model comparable to top U.S. AI systems at a fraction of the cost. The development has unsettled American investors and drawn reactions from industry leaders, including OpenAI CEO Sam Altman.
Altman acknowledged DeepSeek’s progress and indicated OpenAI would accelerate its own releases. He reiterated the company’s goal of advancing AI toward AGI and beyond.
Exodus of Safety Experts from OpenAI
OpenAI has seen a wave of departures among its safety researchers, drawing further scrutiny to the company’s handling of alignment concerns. High-profile exits include Ilya Sutskever and Jan Leike, who led OpenAI’s Superalignment team, the division tasked with controlling AI systems that surpass human intelligence.
Leike criticized OpenAI for deprioritizing safety concerns in favor of rapid product development. In a post on X, he stated that disagreements over the company’s core priorities led to his exit. Sutskever has not openly criticized OpenAI but has consistently highlighted AI safety challenges.
Other former employees, including Daniel Kokotajlo, have voiced similar concerns. Kokotajlo noted that nearly half of OpenAI’s long-term AI risk researchers have left. Many departing employees were alarmed by the company’s aggressive pursuit of AGI without adequate safety measures.
Whistleblowers Raise Alarms
Internal concerns escalated in 2024 when former OpenAI employees filed a complaint with the U.S. Securities and Exchange Commission (SEC). They alleged that OpenAI’s non-disclosure agreements restricted whistleblowing, violating SEC protections.
The complaint emphasized the need for AI employees to have the freedom to report potential risks. It warned that without regulatory oversight, AI development could become increasingly reckless.
AI Safety Debate Intensifies
Prominent AI experts have echoed these fears. Geoffrey Hinton, widely known as a “Godfather of AI,” resigned from Google in 2023 so he could speak freely about the technology’s risks, and he has said he regrets parts of his role in advancing it.
Other prominent figures, including Elon Musk, Bill Gates, and AI researcher Yoshua Bengio, have also warned of AI’s potential dangers. They argue that the rapid pursuit of AGI could lead to catastrophic consequences if safety is not prioritized.
The debate over AI safety continues to grow as companies race to develop more advanced systems. Whether the industry will implement meaningful safeguards remains an open question.