California’s AI Regulation Bill Heads to Assembly Vote After Major Amendments

California’s proposed AI regulation, Senate Bill 1047, is moving to a full vote in the state Assembly after a key committee passed a significantly amended version. The bill, originally designed to enforce safety protocols on AI companies, has undergone changes aimed at easing concerns from major tech companies, particularly OpenAI, Google, and Meta. AI developers such as Anthropic also weighed in, with some of their feedback shaping the final draft.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), sponsored by Democratic State Senator Scott Wiener, aims to establish safety requirements for companies developing powerful AI systems.

The original bill required developers of large AI models, those costing at least $100 million to train, to implement safety protocols, undergo audits, and provide “reasonable assurance” that their models would not cause catastrophic harm. The bill also proposed creating a new government body, the Frontier Model Division, to oversee AI safety standards.

VC firm Andreessen Horowitz and its associated network have been among the loudest voices against the legislation. Stanford professors Fei-Fei Li, who leads the stealth billion-dollar AI company World Labs, and Andrew Ng, CEO of Landing AI and founder of DeepLearning.AI, have also come out in opposition following meetings with Senator Wiener. Adding to the chorus of dissent are Meta's Chief AI Scientist Yann LeCun and U.S. Representatives Ro Khanna and Zoe Lofgren.

Opponents argue that tech companies have already committed to ensuring their models won't cause catastrophic harm, and they express concern that the bill's requirements could slow AI progress and innovation. Many worry that the proposed Frontier Model Division would lack the expertise to effectively regulate such a rapidly evolving technology. A particular point of contention is the potential impact on open-source AI development: critics fear the bill could make it too risky to release open-source models, since developers might be held liable for harmful modifications made by others.

In response to this backlash, several key amendments were made. First, the creation of the new Frontier Model Division was scrapped. Its responsibilities for evolving computing thresholds and issuing safety guidelines were transferred to the existing California Government Operations Agency. This shift streamlined the regulatory process and reduced the institutional overhead that tech companies had criticized.

Additionally, the bill now limits civil penalties to instances where AI models cause actual harm or pose imminent threats to public safety. This represents a significant softening of the original bill, which could have penalized developers for noncompliance even if no damage had occurred. Senator Wiener emphasized that these adjustments aim to balance innovation with safety, ensuring that AI companies can continue their work while adhering to reasonable safeguards.

One of the most notable changes is the removal of a requirement for developers to certify compliance under penalty of perjury. Now, developers only need to submit “statements of compliance” to the state Attorney General, aligning the bill with existing standards for public document submissions. This adjustment alleviated some concerns about the legal risks AI developers would face under the bill’s previous iteration.

Despite these concessions, concerns remain. The bill’s critics continue to argue that state-level regulation of AI could hinder open-source development and disproportionately impact smaller AI startups. Lauren Wagner, an investor and former Google and Meta researcher, argued that AI regulation should be handled at the federal level, reflecting fears that California’s legislation could set a difficult precedent for other states to follow.

Anthropic, one of the companies that helped shape the amendments, expressed cautious optimism about the bill’s new direction but stopped short of endorsing it fully. OpenAI and Meta, two of the other major players in the space, have not publicly commented on the revised bill.

As the bill heads for a vote, it remains a contentious piece of legislation with the potential to shape how AI is regulated across the U.S. Should it pass, California would once again find itself at the forefront of tech regulation, much as it did with its landmark privacy and child safety laws in recent years. Governor Gavin Newsom has yet to indicate whether he will sign the bill into law if it passes the Assembly vote expected later this month.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
