Anthropic has taken a nuanced stance on California's proposed AI legislation, SB 1047. In a letter to Governor Gavin Newsom, the company acknowledged improvements made to the bill after recent amendments, stating that the new bill is "substantially improved, to the point where we believe its benefits likely outweigh its costs."
SB 1047, introduced by State Senator Scott Wiener, aims to establish robust safety standards for powerful AI models. The bill would apply to AI systems that cost more than $100 million to develop and would require companies to implement safety and security protocols (SSPs) for these models. Key provisions include mandatory pre-deployment safety testing for catastrophic risks, the ability to shut down AI systems if they pose significant threats, and increased transparency around AI development practices. The bill also clarifies legal liability for model makers and aims to incentivize investment in AI risk reduction research.
Anthropic co-founder Jack Clark shared the company's letter on social media, emphasizing that it "isn't an endorsement but rather a view of the costs and benefits of the bill." This measured position reflects the complex landscape of AI regulation and the diverse perspectives within the tech industry.
Anthropic's letter outlines several positive aspects of the legislation, including the requirement for AI companies to develop and transparently disclose their SSPs. The company also praised the bill's potential to drive forward the science of AI risk reduction by creating incentives for companies to take seriously the question of foreseeable risks associated with their models.
However, Anthropic has expressed reservations about certain aspects of the legislation. These include potential government overreach through ambiguous auditing requirements and "overly expansive whistleblower protections that are subject to abuse."
SB 1047 has undergone multiple revisions to address the tech industry’s concerns. Among the notable changes was the removal of the Frontier Model Division, a proposed government body intended to oversee AI safety. Instead, its responsibilities were shifted to the California Government Operations Agency (GovOps), a move aimed at streamlining regulation and reducing bureaucratic complexity. Additionally, civil penalties were softened, with enforcement now tied to actual harm rather than potential risks.
Anthropic's stance contrasts with that of OpenAI, which has voiced strong opposition to the bill. In a letter to Senator Wiener, OpenAI's chief strategy officer, Jason Kwon, argued that SB 1047 could stifle innovation and drive AI companies out of California. Kwon stressed that AI regulation, particularly concerning national security, should be managed at the federal level.
The debate over SB 1047 echoes broader conversations in the global AI landscape. This week, Meta CEO Mark Zuckerberg and Spotify CEO Daniel Ek issued a joint statement criticizing the European Union’s AI regulations under the EU AI Act. Their concerns, like those raised by OpenAI and Anthropic, center around the fear that overly restrictive policies could hinder the development of open-weight models and slow innovation.
With AI regulation becoming a contentious issue both domestically and internationally, the outcome of SB 1047 could set a precedent for how advanced AI systems are governed. The bill has already passed the state Senate and is set for a final vote in the Assembly by the end of August. If it clears this final hurdle, it will land on Governor Newsom's desk for consideration.