Gov. Newsom Signs SB 53, Establishing AI Safety Reporting Requirements

Governor Gavin Newsom just signed California's first major AI regulation into law, and if you're getting déjà vu, you're not alone. SB 53, the Transparency in Frontier Artificial Intelligence Act, requires large AI labs like OpenAI, Anthropic, Meta, and Google DeepMind to be transparent about their safety protocols and provides whistleblower protections for employees at these companies. It's a far cry from last year's controversial SB 1047, which crashed and burned after Silicon Valley mobilized against it.

Key Points

  • Major AI labs must publicly disclose safety protocols and report critical incidents to California's Office of Emergency Services; companies including OpenAI and Meta lobbied against the measure
  • The law applies to companies with over $500 million in annual revenue training models at 10^26 FLOPs or higher, taking effect January 1, 2026
  • CalCompute, a public cloud computing cluster at UC, will provide free and low-cost compute access for startups and researchers

Remember SB 1047? That was the bill that would have made tech companies legally liable for harms caused by AI models and required a "kill switch" if systems went rogue. Newsom vetoed it in September 2024, describing it as "well-intentioned" but warning its requirements were too "stringent" and could burden California's AI companies. The tech industry breathed a collective sigh of relief, with OpenAI's Chief Strategy Officer Jason Kwon claiming it would "threaten growth, slow the pace of innovation, and lead California's world-class engineers and entrepreneurs to leave the state."

So Senator Scott Wiener went back to the drawing board. The result? A bill that swaps the stick for a clipboard.

Unlike SB 1047's rigid testing regime and kill switch requirement, SB 53 takes a transparency-first approach: developers must publish "frontier AI frameworks" explaining how they assess and mitigate risks, with 15 days to report critical safety incidents (24 hours if there's imminent harm). Think of it as moving from "prove your AI won't destroy the world" to "show us your homework on how you're trying to prevent that."

After vetoing SB 1047, Newsom convened the Joint California Policy Working Group on AI Frontier Models, tasking AI experts including Stanford's Dr. Fei-Fei Li to develop "workable guardrails" for AI. The working group endorsed an approach of "trust but verify," which SB 53 implements through disclosure requirements rather than the prescriptive technical mandates that plagued last year's efforts.

What companies actually have to do now sounds almost... reasonable? Developers must publish annual frameworks describing risk thresholds, mitigations, third-party assessments, governance practices, and cybersecurity measures for protecting model weights. If you're thinking "wait, don't they already do this stuff?" you're right—sort of. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models, but those commitments are voluntary and self-enforced, and the companies have sometimes fallen behind on them.

But here's where it gets interesting: the bill also creates CalCompute, and this might be the sleeper hit of the legislation. CalCompute will be a public cloud compute cluster housed at the University of California that provides free and low-cost access to compute for startups and academic researchers. The consortium developing CalCompute must deliver a framework by January 1, 2027, including recommendations for governance, funding, and determining which users and projects get support.

The whistleblower protections might end up being the most consequential part. The bill protects employees who disclose significant health and safety risks posed by frontier models and creates civil penalties for noncompliance, enforceable by the Attorney General's office. Given the recent exodus of safety-focused researchers from major AI labs, this could embolden insiders to speak up about concerning developments.

There's also this fascinating requirement that nobody's talking about: SB 53 contains a world-first requirement that companies disclose safety incidents involving dangerous deceptive behavior by autonomous AI systems—for example, if developers catch an AI system lying about how well safety controls are working during routine testing. That's... unsettling to think about, but probably good to know.

The law kicks in January 1, 2026, and only applies to the big players—companies with over $500 million in annual revenue training models at or above 10^26 FLOPs. That computational threshold is roughly where today's most advanced models sit, meaning this targets OpenAI, Anthropic, Google, and Meta while giving smaller companies a pass.

What's notable is what's not in the bill. No kill switches. No pre-deployment certification that models won't cause catastrophic harm. No rigid testing requirements. Instead of the "do no harm" mandate from SB 1047, SB 53 demands a "show your work" approach.

California's trying to thread an impossible needle here—regulate AI enough to claim leadership on safety without driving the industry to Texas or wherever else promises fewer rules. As Newsom put it: "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive."

Transparency is nice, but it's not clear that publishing safety frameworks will prevent an AI system from doing something catastrophic. Then again, neither would a kill switch if we're being honest about the current state of AI control.

The real test comes in 2026 when enforcement begins. Will the Office of Emergency Services actually know what to do with incident reports about AI systems exhibiting "dangerous deceptive behavior"? Will the Attorney General's office have the technical expertise to evaluate whether a company's safety framework meets the bill's requirements? And will CalCompute actually democratize AI development, or just become another underfunded state program?

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
