OpenAI has joined other major tech companies in opposing California's controversial AI safety bill, SB 1047. The bill, which aims to establish safety standards for powerful AI models, has sparked a heated debate between lawmakers and the tech industry.
SB 1047, introduced by State Senator Scott Wiener in February, would regulate AI models that cost more than $100 million to develop. It would require pre-deployment safety testing for catastrophic risks, as well as the ability to shut down AI systems that pose significant threats. Proponents argue that this “kill switch” and other safety measures are necessary to prevent AI from causing harm, such as by enabling cyberattacks or the creation of bioweapons.
In a letter to Wiener, OpenAI's chief strategy officer Jason Kwon argued that the bill could stifle innovation and drive AI companies out of California. Kwon emphasized that AI regulation, especially where it concerns national security, should be handled at the federal level rather than through state laws.
The bill has undergone significant amendments in response to tech industry pushback. One of the most notable changes was the removal of the proposed Frontier Model Division, a new government body that would have overseen AI safety standards. Instead, its responsibilities were shifted to the existing California Government Operations Agency. This move was designed to streamline the regulatory process and ease concerns over bureaucratic overhead.
Additionally, civil penalties were softened, limiting them to instances where AI models cause actual harm or pose imminent threats. Other changes included removing the requirement for developers to certify compliance under penalty of perjury, instead requiring only a “statement of compliance” submitted to the state Attorney General.
Despite these changes, many in the tech industry remain skeptical of state-level AI regulation. OpenAI now joins other tech giants, including Meta and Anthropic, in voicing concerns. These companies worry that the bill's stringent requirements could hamper innovation and put California at a disadvantage in the global AI race. They also argue that the bill’s provisions could expose open-source developers to significant legal liability, making it harder for smaller startups to grow.
Senator Wiener has defended the bill against these criticisms, asserting that it is designed to hold AI labs accountable for the safety of their most powerful models. “What’s notable about the OpenAI letter is that it doesn’t criticize a single provision of the bill,” Wiener stated in response.
He dismissed OpenAI's argument about companies leaving California, pointing out that the bill applies to any company doing business in the state, regardless of their headquarters location. He also highlighted California’s history of leading in tech regulation, particularly with data privacy laws, which similarly faced industry pushback yet never resulted in the predicted exodus of companies.
Wiener stressed that the bill’s requirements align with what companies like OpenAI have already committed to doing voluntarily. “SB 1047 asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk,” Wiener explained. He also pointed to the bill’s support from national security experts, including retired Lieutenant General Jack Shanahan, who praised it for addressing the near-term risks AI poses to civil society and national security.
Addressing OpenAI's call for federal regulation, Wiener said: “I agree that ideally Congress would handle this. However, Congress has not done so, and we are skeptical Congress will do so.”
The bill has already passed the state Senate and is set for a final vote in the California Assembly by the end of August. If approved, it will move to Governor Gavin Newsom's desk for consideration.