OpenAI, a key player in AI development, is grappling with a series of high-profile departures that have raised alarms about the company's future direction. Recently, co-founder Greg Brockman announced a sabbatical extending through the end of the year, and John Schulman, a researcher known for his work on AI alignment, left to join Anthropic. These exits follow a tumultuous period marked by CEO Sam Altman's controversial firing by OpenAI's board and his swift reinstatement. Jan Leike, who previously led the superalignment team, also departed after clashes over safety protocols, saying the company's focus had shifted too far toward flashy products at the expense of robust safety measures.
Daniel Kokotajlo, a former OpenAI researcher, disclosed that nearly half of the staff focused on long-term AI risks, including a majority of the superalignment team, have resigned in recent months. He attributes this exodus to growing disillusionment with the company's shifting priorities, a troubling trend given OpenAI's mission to develop artificial general intelligence (AGI) safely.
Shift from Research to Commercial Focus
OpenAI, best known for its AI assistant ChatGPT, was founded with the aim of developing AGI that benefits humanity: systems capable of performing a wide range of complex tasks at human-level ability. The departure of key researchers, however, suggests a troubling shift toward commercial objectives at the expense of AGI safety.
Recent hires, such as Sarah Friar as CFO and Kevin Weil as Chief Product Officer, highlight this pivot toward product development and market presence. This shift has alarmed former employees like Kokotajlo, who worry that the company’s profit-driven focus might overshadow critical safety considerations.
Safety Concerns and Diminished Focus
The exits of researchers dedicated to AGI safety, including Jan Hendrik Kirchner, Collin Burns, and Jeffrey Wu, have intensified concerns about OpenAI's commitment to managing the risks of its technology. These researchers played a crucial role in ensuring that AI systems do not produce harmful or dangerous outputs; their departures suggest a weakening of the company's dedication to these safeguards.
An OpenAI spokesperson defended the company’s approach, asserting that it remains committed to developing safe and effective AI systems. They emphasized the importance of ongoing debate about AI risks and reaffirmed OpenAI’s commitment to collaborating with governments and global communities.
Cultural and Strategic Shifts
Kokotajlo, who worked at OpenAI from 2022 until April 2024, observed a noticeable shift in company culture even before the upheaval surrounding Altman's firing. That upheaval ended with the removal of board members focused on AGI safety, a change Kokotajlo reads as a consolidation of power by Altman and Brockman. He believes that those concentrating on AGI safety are being increasingly sidelined as the company focuses on immediate commercial goals.
Critics, including prominent AI experts like Andrew Ng and Fei-Fei Li, argue that the existential threat posed by AGI is overstated. They suggest that AI should instead be leveraged to address pressing global issues like climate change and pandemics. They also caution that a narrow focus on AGI risks could lead to overly restrictive legislation that hampers innovation.
Legislative and Industry Reactions
Kokotajlo has expressed disappointment with OpenAI’s opposition to California’s SB 1047, a bill aimed at regulating powerful AI models. He, along with former colleague William Saunders, criticized OpenAI’s stance as inconsistent with its earlier commitments to assess AGI risks and support responsible regulations.
Despite these challenges, Kokotajlo acknowledged that some within OpenAI continue to focus on AI safety. The company has restructured its approach, dissolving the superalignment team but establishing a new safety and security committee to oversee critical decisions. Recently, Carnegie Mellon University professor Zico Kolter, an expert in AI security, joined OpenAI's board.