In recent months, a significant number of AI safety researchers have left OpenAI, raising concerns about the company's commitment to managing the risks of artificial general intelligence (AGI), a term for AI systems capable of performing tasks as economically valuable as those done by humans.
According to Daniel Kokotajlo, a former researcher at the company, nearly half of OpenAI's AGI safety staff have left due to a perceived shift in focus from safety to commercialization. The exits include notable names such as Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, Todor Markov, and co-founder John Schulman.
OpenAI has been recognized for its mission to develop AGI in a way that benefits humanity. However, concerns have grown that such systems could potentially escape human control and pose existential threats. To mitigate these risks, OpenAI initially employed a substantial number of researchers dedicated to “AGI safety,” focusing on developing strategies to ensure future AGI systems are safe.
According to Kokotajlo, however, the recent resignations reflect a shift in the company's focus away from AGI safety and toward product development and commercialization. This change has produced what Kokotajlo described as a "chilling effect" within OpenAI, with less emphasis on research aimed at mitigating the risks associated with AGI. The company has also recently hired several executives with backgrounds in finance and product management, including Sarah Friar as CFO, Kevin Weil as chief product officer, and Irina Kofman to lead strategic initiatives, further signaling the change in priorities.
Concerns Over Declining Emphasis on Safety
The departures of key researchers have raised alarms about whether OpenAI is sufficiently prioritizing safety as it pursues AGI development. Kokotajlo suggested that some researchers felt sidelined as the company shifted its focus.
The departures of chief scientist Ilya Sutskever and Jan Leike, who together co-led OpenAI's "superalignment" team, further fueled these concerns. The superalignment team was responsible for developing methods to control "artificial superintelligence," a hypothetical class of AI systems more capable than all humans combined.
In his resignation statement, Leike indicated that the focus on safety had diminished in favor of developing “shiny products.” Kokotajlo echoed this sentiment, noting that the company seemed to be moving away from its original mission to carefully consider AGI’s risks. Instead, there appears to be increased influence from OpenAI’s communications and lobbying divisions, which might impact the publication of research related to AGI risks.
Diverging Views on AI Safety and Regulation
OpenAI's strategic shift toward product development has contributed to the environment that prompted these departures. Yet while the wave of exits raises concerns, not everyone in the AI community shares the same level of worry.
Some AI leaders, including Andrew Ng, Fei-Fei Li, and Yann LeCun, believe that the focus on AI’s potential existential threats is exaggerated. They argue that AGI is still far from being realized and that AI can play a critical role in addressing more immediate existential risks, such as climate change and pandemics.
These critics also express concerns that an overemphasis on AGI risks could lead to regulatory measures that might stifle innovation. For example, California’s SB 1047 bill, which aims to regulate powerful AI models, has sparked considerable debate. Kokotajlo criticized OpenAI’s opposition to the bill, suggesting that it reflects a departure from earlier commitments to evaluate AGI’s risks and support appropriate regulations.
As OpenAI focuses on scaling its commercial products, the loss of so much of its AGI safety staff could compromise long-term risk management. Despite the resignations, some researchers remain at OpenAI and continue to work on AI safety within other teams. Following the dissolution of the superalignment team, OpenAI has established a new safety and security committee to oversee critical safety decisions. Additionally, Carnegie Mellon University professor Zico Kolter, known for his work in AI security, has been appointed to OpenAI's board of directors.