California legislators are preparing to vote on a new bill that aims to regulate the development and deployment of artificial intelligence (AI) within the state. The bill, SB 1047, which has already sparked significant debate, could be voted on as soon as this week. Several tech companies have voiced strong opposition to the proposal, citing concerns about its impact on innovation and AI research.
As California’s legislators prepare to vote on SB 1047, the outcome could have significant implications for the future of AI development in the state. The tech industry, lawmakers, and experts remain divided on whether SB 1047 strikes the right balance or poses a threat to the state’s leadership in AI innovation.
Overview of SB 1047
The California AI regulation bill SB 1047, authored by State Senator Scott Wiener, proposes stringent measures to ensure the safety and ethical development of AI technologies. SB 1047 would require safety testing for advanced AI models that cost more than $100 million to develop or that rely on substantial computing power. Among other requirements, developers would need to implement a “kill switch” capable of shutting down an AI model if it malfunctions or poses a threat.
Furthermore, the bill grants the state attorney general the authority to take legal action against developers who fail to comply with safety standards, particularly in situations where there is an ongoing threat, such as an AI model taking over critical infrastructure like the power grid. The legislation also mandates that AI developers hire independent third-party auditors to assess their safety practices and offers protections for whistleblowers who expose unethical practices in AI development.
Key Provisions of the Bill
SB 1047 aims to balance AI innovation with public safety by setting clear standards for developers. Key provisions of the bill include:
- Requiring pre-deployment safety testing, cybersecurity safeguards, and post-deployment monitoring for advanced AI models that surpass a certain threshold of computing power and development cost.
- Establishing whistleblower protections for employees in AI laboratories.
- Empowering the California Attorney General to take legal action if an AI model causes significant harm or if developers’ negligence poses an imminent public safety threat.
- Creating a public cloud computing resource, CalCompute, to support startups, researchers, and community groups in developing large-scale AI systems aligned with California’s values.
SB 1047 has already passed the California State Senate with a 32-1 vote and recently advanced through the State Assembly’s appropriations committee. If the bill passes the full Assembly vote by the end of the legislative session on August 31, it will be sent to Governor Gavin Newsom, who will have until September 30 to either sign or veto it.
The bill has undergone several revisions since its introduction, following feedback from the tech industry, including companies like Anthropic, which is backed by Amazon and Alphabet. The latest version no longer includes criminal penalties; instead, it retains civil penalties, which can only be pursued after harm has occurred. Additionally, the legal requirement for developers has been adjusted from providing a “reasonable assurance” of safety to exercising “reasonable care,” a lower standard of compliance.
Criticisms from the Tech Industry
As SB 1047 approaches a crucial vote, it has divided the tech community and lawmakers. While some argue that the bill is necessary to ensure AI safety, others contend that it could stifle innovation and drive companies out of California.
A large portion of the tech industry has voiced concerns over SB 1047, arguing that the bill imposes excessive regulatory burdens on AI developers. Critics, including prominent AI firm Anthropic, warned that earlier versions of the legislation would have created complex legal obligations. One major concern was a provision allowing the California Attorney General to sue AI developers for negligence even if no safety incident had occurred. This, they argued, would discourage innovation by creating a challenging legal landscape for AI companies.
OpenAI, another key player in the AI space, suggested that the bill might prompt companies to relocate from California to avoid its stringent requirements. OpenAI also argued that AI regulation should be handled at the federal level to avoid a confusing patchwork of state regulations.
Senator Scott Wiener, the bill’s author, dismissed concerns about companies leaving California as a “tired argument,” pointing out that the bill’s requirements would still apply to companies providing services to Californians, regardless of their location.
Opposition from Lawmakers
The bill has also faced opposition from members of the U.S. Congress. Last week, eight congressional representatives urged Governor Gavin Newsom to veto SB 1047, citing concerns about the obligations it would place on companies developing and using AI. Among the opponents is Rep. Nancy Pelosi, who described the bill as “well-intentioned but ill-informed.” Pelosi’s opposition is noteworthy as she represents a significant tech hub and has a potential political rivalry with Senator Wiener, who has been suggested as a possible candidate for her House seat.
Opponents also include Dr. Fei-Fei Li, a Stanford computer scientist and former Google researcher known as the “Godmother of AI.” Li argued that the bill would harm California’s growing AI ecosystem, especially small developers already at a competitive disadvantage.
Who Supported the Bill?
Despite the criticisms, the California AI regulation bill SB 1047 has found support among various AI startups and leading AI figures. Notable supporters include Yoshua Bengio and Geoffrey Hinton, often referred to as the “godfathers” of AI. They argue that SB 1047 represents a “positive and reasonable step” toward making AI safer while still promoting innovation.
Proponents of the bill believe that without adequate safety measures, there could be severe consequences from unchecked AI development, such as threats to critical infrastructure and the potential misuse of AI in creating dangerous technologies, like nuclear weapons.
Senator Wiener defended the bill as a “common-sense, light-touch” approach that primarily targets the largest AI companies, requiring them to adopt necessary safety measures. He emphasized California’s leadership in tech policy and expressed skepticism that Congress would enact meaningful AI legislation soon. Wiener highlighted California’s proactive role in filling the gaps left by federal inaction on issues like data privacy and social media regulation.
Recent Amendments
In response to feedback from the AI industry, recent amendments to SB 1047 have addressed several concerns. The latest version of the bill replaces criminal penalties with civil penalties for false statements to the government and removes a proposed new state regulatory body for AI models.
Anthropic, in a letter to Governor Newsom, acknowledged that the amended bill’s benefits likely outweigh potential drawbacks, emphasizing transparency about AI safety and encouraging companies to invest in reducing risks. However, Anthropic remains cautious about the potential for broad enforcement and extensive reporting requirements.
Dario Amodei, CEO of Anthropic, emphasized the need for a framework to manage advanced AI systems that aligns with basic safety and transparency standards, regardless of whether that framework is SB 1047.
California lawmakers have until August 31 to pass the bill, and if approved, it will move to Governor Gavin Newsom, who has until the end of September to decide whether to sign it into law. The governor has not yet indicated his stance on the legislation, leaving its future uncertain.