California Governor Gavin Newsom today vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). In his veto message, Newsom cited multiple factors in his decision, including the burden the bill would place on artificial intelligence companies, California’s leadership in the field, and criticism that the bill may be too broad.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in a high-risk environment, involves critical decision-making, or uses sensitive data. Instead, the bill applies stringent standards to even the most basic functions, so long as a large system deploys it. I do not believe this is the best approach to protecting the public from the real threats posed by this technology,” Newsom wrote.
Newsom wrote that the bill could “give the public a false sense of security about controlling this rapidly evolving technology.”
“Smaller, specialized models could be as dangerous, or even more dangerous, than those targeted by SB 1047, at the potential cost of limiting the very innovation that drives progress for the public good.”
The governor said he agrees there should be safety protocols and guardrails in place, as well as “clear and enforceable” consequences for bad actors. However, he wrote that the state should not “settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.”
Here’s the full veto message:
Sen. Scott Wiener, the bill’s lead author, said in a statement that the veto is a setback for everyone who believes in oversight of the massive corporations making critical decisions that affect public safety and welfare.
“This veto leaves us with the troubling reality that companies aiming to create extremely powerful technologies face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way,” he added.
SB 1047 arrived on Governor Newsom’s desk in late August and was poised to become the most stringent legal framework for artificial intelligence in the United States; the governor had until September 30th to sign or veto it.
It would have applied to artificial intelligence companies operating in California whose models cost more than $100 million to train or more than $10 million to fine-tune, and it would have required developers to implement safeguards such as “kill switches” and mandated safety testing to reduce the likelihood of catastrophic events such as cyberattacks or epidemics. The text also provided protections for whistleblowers who report violations and would have allowed the state attorney general to sue for damages caused by safety incidents.
Amendments since the bill’s introduction scrapped a proposal to create a new regulatory agency and removed a provision that would have let the state attorney general take action against developers before a potential incident occurred. Most of the companies that would have been covered by the bill opposed it, although some softened their criticism after the amendments.
OpenAI Chief Strategy Officer Jason Kwon said in a letter to bill author Senator Wiener that SB 1047 would slow progress and that AI regulation should be left to the federal government. Meanwhile, Anthropic CEO Dario Amodei wrote to the governor after the bill was amended, laying out what he saw as its pros and cons and saying: “…the new SB 1047 is a substantial improvement and we believe its benefits likely outweigh its costs.”
The Chamber of Progress, a coalition representing Amazon, Meta and Google, similarly warned that the law would “hinder innovation.”
Meta public affairs manager Jamie Radice emailed The Verge a statement on the veto:
“We are pleased that Governor Newsom vetoed SB 1047. The bill would have stifled AI innovation, hurt business growth and job creation, and broken the state’s longstanding tradition of promoting open source development. We support responsible AI regulation and remain committed to working with lawmakers to promote better approaches.”
Opponents of the bill included former House Speaker Nancy Pelosi, San Francisco Mayor London Breed, and eight congressional Democrats from California. Outspoken supporters, on the other hand, included Elon Musk, Hollywood figures such as Mark Hamill, Alyssa Milano, Shonda Rhimes, and J.J. Abrams, and unions including SAG-AFTRA and SEIU.
The federal government is also weighing how to regulate artificial intelligence. In May, the Senate proposed a $32 billion roadmap covering several areas lawmakers should study, including the impact of artificial intelligence on elections, national security, copyrighted content, and more.