OpenAI has announced a significant shift in its internal governance: CEO Sam Altman is stepping down from the company’s Safety and Security Committee amid growing concerns about OpenAI’s safety protocols. The move is part of the company’s effort to transform the committee into an independent oversight body. Established in May 2024, the committee was originally created to evaluate safety concerns surrounding OpenAI’s AI models, and Altman’s involvement had raised questions about the objectivity of its assessments.
The revamped committee will now be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Joining Kolter are Quora CEO Adam D’Angelo, former NSA chief General Paul Nakasone, and Nicole Seligman, former Executive Vice President at Sony; all are existing OpenAI board members.
Oversight of AI Model Safety
The Safety and Security Committee’s role is to review the safety of OpenAI’s AI models before their release, with a particular focus on ensuring they meet security and ethical standards. This includes overseeing model launches and having the authority to delay releases if safety concerns are not fully addressed. The committee recently reviewed OpenAI’s new reasoning-based AI model, o1, assessing its safety and security before approving its launch.
The committee will continue to receive regular updates from the company’s leadership on major AI releases and maintain the power to halt deployments until safety standards are met. This change comes as OpenAI faces increasing scrutiny from lawmakers and stakeholders over its rapid expansion.
Recommendations and Governance Enhancements
As part of a 90-day review, the committee issued several recommendations to improve OpenAI’s safety measures. Key suggestions include establishing independent governance for safety and security, enhancing cybersecurity practices, and increasing transparency about the company’s work. The recommendations also encourage greater collaboration with external organizations to address safety challenges more effectively.
OpenAI has pledged to act on these recommendations, promising to unify its safety frameworks and adopt more rigorous security protocols. Additionally, the company is considering setting up an Information Sharing and Analysis Center (ISAC) to promote cooperation within the AI industry on security-related matters.
Challenges and Criticism
OpenAI’s safety practices have been under scrutiny for some time. Former employees Jan Leike and Ilya Sutskever, who left the company in May 2024, had raised concerns before departing; Leike criticized the firm for prioritizing product development over safety. Their departures were followed by the disbandment of OpenAI’s “superalignment” team, which had focused on ensuring AI systems remain under human control.
OpenAI has also faced criticism for its stance on AI regulation. The company has lobbied against California’s proposed AI safety bill, while more than 30 current and former employees have publicly supported the legislation.
Industry experts are divided on the impact of Altman’s exit from the committee. OpenAI is reportedly seeking to raise more than $6.5 billion in new funding, potentially valuing the company at $150 billion. There is speculation that OpenAI may abandon its hybrid nonprofit structure in favor of a traditional corporate model, allowing investors greater returns but possibly moving the organization away from its original mission of developing AI for the benefit of humanity.
Growing Lobbying Efforts
As OpenAI grows, its lobbying efforts have also expanded significantly. The company spent $800,000 on lobbying in the first half of 2024, a substantial increase from the $260,000 it spent in all of 2023. Altman himself has joined the Department of Homeland Security’s AI Safety and Security Board, which advises on AI’s role in national security.
Despite Altman’s exit from the internal committee, some remain skeptical about whether the group can act independently of OpenAI’s commercial interests. Former board members, including Helen Toner and Tasha McCauley, have called for external regulation to ensure AI companies remain accountable for their safety measures.