Two former OpenAI researchers have raised concerns about CEO Sam Altman’s stance on artificial intelligence regulation, warning that his public support for AI safety may be more about maintaining a positive image than genuinely protecting the public. In their view, this posture could put public safety at risk.
Reports indicate that OpenAI is facing financial difficulties, with Business Insider citing losses of up to $5 billion and suggesting the company could be on the brink of bankruptcy. Against this backdrop, OpenAI has opposed SB 1047, a proposed California bill that would have required developers of the most powerful AI models to adopt safety protocols.
This move has drawn criticism from several quarters, including two former researchers, William Saunders and Daniel Kokotajlo, who have publicly criticized OpenAI’s stance on the legislation and raised concerns about safety and security.
Researchers Resign Over Safety Concerns
In a letter explaining their decision to leave OpenAI, Saunders and Kokotajlo claim the company prioritizes rapid product launches over safety. They say they joined OpenAI to help ensure the safe development of its powerful AI systems, but eventually resigned after losing trust in the company’s ability to develop those systems safely and responsibly. The letter, shared by Windows Central, alleges that OpenAI builds sophisticated AI models without sufficient safeguards to keep them from becoming uncontrollable.
The researchers characterize Altman’s public support for regulation as a public relations facade, pointing to OpenAI’s opposition to specific measures such as California’s SB 1047, which proposed clear safety requirements for AI development.
Controversy Over GPT-4o Launch
The controversy deepened with OpenAI’s recent launch of GPT-4o. Reports suggest the company rushed the release, sending out invitations for a launch event before testing had even begun. It has been acknowledged that OpenAI’s alignment and safety teams were understaffed and overburdened, leaving little time for thorough testing. Despite these issues, OpenAI insists it did not compromise on the quality of the product. Critics, including the former researchers, counter that prioritizing product launches over rigorous safety processes could pose “foreseeable risks of catastrophic harm to the public.”
Debate Over AI Regulation
The debate over AI regulation continues to intensify. While OpenAI CEO Sam Altman has publicly called for regulation, he has also expressed a preference for an agency-based approach, similar to the regulatory framework for airplanes. Altman argues that AI rules written directly into law could quickly become obsolete given the fast pace of technological advancement.
The former researchers emphasize the urgent need for genuine regulatory oversight, claiming that when concrete measures such as SB 1047 are proposed, Altman tends to oppose them. An OpenAI spokesperson countered these claims, saying the company strongly disagrees with this characterization of its position on the legislation.
In a letter to California Senator Scott Wiener, who sponsored the bill, OpenAI’s Chief Strategy Officer Jason Kwon argued for a federal approach to AI policy. Kwon suggested that a unified federal framework would better support innovation and help the U.S. lead in setting global standards.
The fate of the proposed AI bill remains uncertain. OpenAI has proposed several amendments to the legislation, but it is not clear whether they will be accepted. The former researchers warn that waiting for federal action may not be an option, as Congress has shown little willingness to pass comprehensive AI regulation; if federal action ever does occur, they note, it could preempt state laws like California’s proposed legislation.