California Governor Gavin Newsom has vetoed a groundbreaking artificial intelligence (AI) safety bill that sought to regulate the development and deployment of advanced AI models in the state. The legislation, seen as one of the first serious attempts in the U.S. to regulate AI, faced significant opposition from major tech companies.
The bill, authored by Democratic Senator Scott Wiener, aimed to impose new safety protocols on AI systems, requiring advanced models to undergo safety testing. Newsom, however, raised concerns that the bill could hinder innovation and drive AI developers out of the state.
The proposed legislation was designed to ensure that developers of advanced AI models incorporated safety measures, such as a “kill switch” to shut down AI systems if they posed a threat. It also called for mandatory oversight of the most powerful AI models, known as “Frontier Models.”
In his veto statement, Newsom acknowledged the need for AI regulation but argued that the bill imposed unnecessary restrictions even on basic AI systems. He suggested that it failed to distinguish between high-risk applications and simpler AI uses, which could stifle technological growth.
Tech Industry Opposition
Major tech companies, including OpenAI and Google, voiced opposition to the bill before the governor blocked it. Industry critics argued that the regulations could slow down AI development and make California less attractive for AI innovation. Former U.S. House Speaker Nancy Pelosi also warned that the bill would “kill California tech” by discouraging investment in the state’s AI sector.
Despite these concerns, the bill drew support from proponents including billionaire Elon Musk and AI research firm Anthropic. They highlighted the importance of transparency and accountability in the development of AI systems, which are becoming increasingly complex and powerful.
Concerns Over AI Risks
Supporters of the bill emphasized the need for AI regulation, citing potential risks associated with unregulated AI development. The bill aimed to safeguard against the misuse of AI in critical areas, such as disabling infrastructure or creating harmful technologies like chemical weapons. It also included whistleblower protections for employees raising concerns about unsafe AI practices.
Senator Wiener, the bill’s author, expressed disappointment over the veto, stating that it leaves AI companies with “no binding restrictions” from U.S. lawmakers. He warned that allowing AI development to continue without oversight could lead to significant risks for society.
Newsom’s Alternative Approach
While Newsom blocked the bill, he announced plans to collaborate with AI experts to develop more targeted safety measures. He has partnered with AI pioneer Fei-Fei Li to create guidelines that will protect the public from AI risks without stifling innovation. Newsom’s administration also pledged to continue working on laws to tackle specific AI concerns, such as deepfakes and misinformation.
Recently, the governor signed 17 other AI-related bills, including some of the toughest laws in the U.S. aimed at combating deepfakes and protecting workers from unauthorized AI usage.
A Debate That’s Not Over
The tech industry celebrated the veto as a victory, having spent considerable effort lobbying against the bill. Alongside the California Chamber of Commerce, AI companies worked to persuade lawmakers and the governor to halt the legislation, fearing that it would impose burdensome restrictions on AI development.
Despite the veto, experts believe the AI safety bill could inspire similar legislation elsewhere, with other states potentially moving to impose their own regulations on AI development. Tatiana Rice, deputy director of the Future of Privacy Forum, noted that the ideas behind the California bill are likely to resurface in future legislative sessions as concerns about AI risks grow. For now, California remains at the center of the debate on how to balance innovation with public safety in the evolving AI landscape.