Anthropic Backs California's New AI Safety Bill, SB 53

Anthropic is throwing its weight behind California's latest attempt to regulate AI, endorsing Senate Bill 53—a transparency-focused measure that ditches the heavy-handed approach that killed last year's controversial SB 1047. The AI company's support comes as the industry walks a tightrope between self-regulation and government oversight, with many frontier labs already implementing the practices SB 53 would legally require.

Key Points:

  • SB 53 requires AI companies to publish safety frameworks and transparency reports—practices many already follow voluntarily
  • The bill takes a "trust but verify" approach with disclosure requirements instead of prescriptive technical mandates
  • Anthropic sees gaps in the bill's computing power thresholds and wants more detailed testing requirements

The endorsement marks a notable shift from the fierce industry battles over SB 1047, which Governor Gavin Newsom vetoed in September last year after months of heated debate. That bill would have imposed strict requirements on AI models costing over $100 million to train, including mandatory shutdown capabilities and third-party audits that critics argued would stifle innovation.

In his veto message, Newsom stated that while the bill was “well-intentioned,” it failed to consider the deployment context of AI systems.

This time around, Senator Scott Wiener has crafted legislation that reads more like a formalization of existing industry practices than a regulatory hammer. SB 53 emerged from recommendations by Governor Newsom's Joint California Policy Working Group—a brain trust of academics and industry experts who endorsed a "trust but verify" approach to AI governance.

The requirements aren't exactly groundbreaking. Companies developing powerful AI systems would need to develop and publish safety frameworks describing how they manage catastrophic risks—think mass casualty incidents or major financial damage. They'd release transparency reports before deploying new models, report critical safety incidents within 15 days, and provide whistleblower protections for employees who spot safety violations.

If this sounds familiar, that's because most frontier AI companies are already doing versions of this. At the AI Seoul Summit in May 2024, 16 companies formally committed to developing and publishing frameworks for managing severe AI risks. Anthropic publishes its Responsible Scaling Policy, Google DeepMind has its Frontier Safety Framework, and OpenAI maintains its Preparedness Framework. The difference? Now there'd be legal teeth behind these commitments.

"Labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure programs in order to compete," Anthropic argues in its endorsement. The bill creates what the company calls a level playing field where disclosure is mandatory, not optional—preventing a race to the bottom on safety standards.

Anthropic's support matters because other frontier labs have been more cautious in public: OpenAI and Google DeepMind have said they support "responsible regulation" but haven't endorsed SB 53 outright.

The bill smartly sidesteps the pitfalls that doomed SB 1047. Where the previous legislation tried to prescribe specific technical requirements and threatened enforcement before any harm occurred, SB 53 focuses on transparency and accountability. Companies set their own safety standards but must publicly commit to them and face penalties if they fail to follow through.

Yet Anthropic isn't giving the bill a blank check. The company flags three areas where it wants improvements.

  • First, the computing power threshold of 10^26 FLOPs for determining which models fall under regulation is "an acceptable starting point" but might miss some powerful models (see the rough sketch after this list for what that threshold implies).
  • Second, developers should provide more granular details about their testing and evaluation procedures—echoing the detailed red-teaming and safety research the company already shares through initiatives like the Frontier Model Forum.
  • Third, the regulations need built-in flexibility to evolve as AI technology advances.
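
For a rough sense of what a 10^26 FLOPs compute threshold means in practice, here is a minimal, illustrative sketch. It assumes the common rule of thumb that training compute is roughly 6 × parameters × training tokens; that heuristic and the model sizes below are our own assumptions for illustration, not figures from SB 53 or any lab.

```python
# Illustrative only: what a 10^26 FLOP training-compute threshold looks like,
# using the common 6 * N * D approximation (6 x parameters x training tokens).
# Model sizes and token counts are hypothetical, not drawn from SB 53.

THRESHOLD_FLOPS = 1e26  # the compute threshold discussed in the bill

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough total training compute via the 6ND rule of thumb."""
    return 6 * params * tokens

hypothetical_runs = {
    "70B params, 15T tokens": (70e9, 15e12),
    "400B params, 30T tokens": (400e9, 30e12),
    "1T params, 40T tokens": (1e12, 40e12),
}

for name, (n, d) in hypothetical_runs.items():
    flops = estimated_training_flops(n, d)
    status = "above" if flops >= THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 10^26 threshold)")
```

Under this rough heuristic, even a fairly large 400B-parameter training run can land below the line, which is the kind of gap Anthropic is pointing to when it calls the threshold only a starting point.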

After SB 1047's failure, many expected California to retreat from AI regulation entirely. Instead, the state appears to be threading the needle between innovation and safety with a bill that even one of the AI companies it would regulate is willing to endorse.

Anthropic's closing argument cuts to the heart of the matter: "The question isn't whether we need AI governance—it's whether we'll develop it thoughtfully today or reactively tomorrow."

The bill still faces the gauntlet of California's legislative process and ultimately needs Governor Newsom's signature. But if it succeeds where SB 1047 failed, it might offer a template for other states—or even federal regulators—looking to balance AI innovation with public safety.

The vote is expected later this fall.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
