NY Lawmakers Pass Landmark Bill Targeting Frontier AI Risks

A seatbelt for super-intelligent software? That’s how New York State Senator Andrew Gounardes pitched the Responsible AI Safety and Education (RAISE) Act after it cleared the legislature this week. Now the nation’s most aggressive state-level frontier-AI measure is on Governor Kathy Hochul’s desk.

Key Points

  • Applies only to labs that spent $100 million+ training a frontier model
  • Requires published safety and security protocols, annual third-party audits, and 72-hour incident reporting
  • Sets the “critical harm” threshold at 100 deaths or serious injuries, or $1 billion in damages
  • New York joins a growing patchwork of state rules filling the federal vacuum

On Thursday, lawmakers approved the RAISE Act, a bill that treats cutting-edge generative AI models a bit like power plants or pharmaceuticals—dangerous when misused, therefore subject to strict oversight. “Would you let your child ride in a car with no seatbelt or airbags? Of course not,” Gounardes said on the Senate floor, arguing that AI should meet the same commonsense standard of care we expect from any risky product.

If signed by Hochul, the act would apply only to “large developers” that have spent at least $100 million on compute to train a single system. That effectively ropes in OpenAI, Google DeepMind, Anthropic, and a few well-financed challengers. Those players would have to publish detailed safety and security protocols, hire an independent auditor every year, and—crucially—alert the state attorney general within 72 hours if something goes wrong or if a model is stolen.

What counts as “something going wrong”? The bill borrows a page from existential-risk research. A “critical harm” incident is defined as the death or serious injury of at least 100 people or more than $1 billion in damage—think large-scale bioterrorism rather than a chatbot hallucinating your calendar. Violations can draw civil penalties of up to $10 million for a first offense and $30 million after that.

Employees get new leverage, too. Anti-retaliation clauses protect whistleblowers who flag a “substantial risk of critical harm,” a nod to researchers who recently left top AI labs saying they felt muzzled when raising safety concerns.

New York isn’t starting from scratch. Earlier this spring Hochul expanded the state-university-led Empire AI consortium and signed separate rules to police AI “companion” apps aimed at kids. Add in last year’s LOADinG Act, which forces state agencies to inventory their own algorithms, and Albany suddenly looks like the de facto AI-policy lab of the United States.

That matters because Washington is still fumbling toward consensus. The EU’s sweeping AI Act already bans some “unacceptable” uses and threatens fines of up to seven percent of global revenue, while the US plods forward state by state—a déjà vu of the GDPR privacy saga, according to legal analysts. Colorado, Utah, and California have each passed narrower laws, but none zeroes in on frontier-model catastrophes the way RAISE does.

Industry reaction has been muted so far. OpenAI and Anthropic point to voluntary commitments they made at the White House last year, promising red-team tests and model-card transparency. Google says it supports “thoughtful regulation” but has lobbied against overlapping state rules in the past. The bill’s supporters argue voluntary pledges aren’t enough when models are scaling faster than regulators can read the papers.

Hochul hasn’t signaled where she stands. She could sign, veto, or send the measure back for tweaks. Business groups warn that balkanized rules will slow innovation; labor advocates counter that unchecked AI could do far worse than slow innovation—it could enable turnkey biothreats. The governor’s decision may come down to whether the RAISE Act clashes with her tech-friendly economic agenda or complements it by building public trust.

Either way, the countdown has started. If Hochul inks the bill, the world’s most advanced AI labs will have 180 days to file their first safety dossiers with Albany—and the US will have its first real seatbelt law for artificial intelligence.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
