Two former researchers from OpenAI have expressed their disapproval of the company’s stance on a proposed California bill, SB 1047, which aims to introduce strict safety protocols for artificial intelligence (AI) development, including a “kill switch” mechanism. After the company came out against the AI safety bill, the former researchers warned of “catastrophic harm,” citing what they see as a lack of adequate safety measures.
William Saunders and Daniel Kokotajlo, the former employees, voiced their concerns in a letter addressed to California Governor Gavin Newsom and other state lawmakers. Initially shared with Politico, the letter described OpenAI’s opposition to the bill as “disappointing but not surprising.”
“We joined OpenAI because we wanted to ensure the safety of the incredibly powerful AI systems the company is developing,” the researchers stated. “But we resigned from OpenAI because we lost trust that it would safely, honestly, and responsibly develop its AI systems.”
Concerns Over AI Safety
In their letter, Saunders and Kokotajlo argued that developing advanced AI models without sufficient safety measures could lead to “catastrophic harm to the public.” They pointed out the inconsistency between OpenAI CEO Sam Altman’s public support for AI regulation and the company’s opposition to actual legislative efforts, citing Altman’s congressional testimony advocating for government intervention while criticizing OpenAI’s resistance when specific regulations like SB 1047 are proposed.
Despite their concerns, OpenAI remains firm in its stance against the bill. A spokesperson for OpenAI told Business Insider that the company “strongly disagrees with the mischaracterization” of its position on SB 1047. The spokesperson directed attention to a separate letter from OpenAI’s Chief Strategy Officer, Jason Kwon, to California Senator Scott Wiener, the bill’s sponsor.
OpenAI’s Position on Federal Regulation
In his letter, Kwon argued that AI regulation should be handled at the federal level rather than through a patchwork of state laws. “A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the U.S. to lead the development of global standards,” Kwon wrote. He emphasized that national security concerns and the global impact of AI development necessitate a unified federal approach.
Kwon’s letter, dated just one day before the former researchers’ letter, acknowledged that SB 1047 “has inspired thoughtful debate” and stated that OpenAI supports some safety provisions within the bill. However, he argued that the regulation should not be left to individual states.
Skepticism from Former Employees
Saunders and Kokotajlo are not convinced that the call for federal regulation is the sole reason behind OpenAI’s opposition to SB 1047. They argue that the company’s objections to the bill “are not constructive and don’t seem in good faith.” They further stated, “We cannot wait for Congress to act—they’ve explicitly said that they aren’t willing to pass meaningful AI regulation.” They added that if Congress eventually takes action, it could preempt California’s legislation.
The former researchers concluded their letter with a call for the California Legislature and Governor Newsom to pass SB 1047 into law. They hope that with proper regulation, OpenAI might still fulfill its mission of safely developing artificial general intelligence (AGI).
The debate over SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, continues to gain attention. The bill, introduced by Senator Wiener, seeks to establish safety standards for the development of more advanced AI models. It includes requirements for pre-deployment safety testing and whistleblower protections, and it gives the California Attorney General the authority to take legal action if AI models cause harm.