The California Assembly has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), Reuters reports. The bill is one of the first major artificial intelligence regulations in the United States.
The bill, which has been a bone of contention in Silicon Valley and beyond, would require artificial intelligence companies operating in California to implement a series of precautions before training advanced foundation models. These include the ability to quickly and completely shut a model down, ensuring that models are protected from “unsafe post-training modifications,” and maintaining testing procedures to assess whether a model or its derivatives is at serious risk of “causing or enabling” harm.
Senator Scott Wiener, the bill’s lead author, said SB 1047 is a very reasonable bill that asks large artificial intelligence labs to do what they have already promised to do: test their large models for catastrophic safety risks. “We have been working hard all year, with open source advocates, Anthropic, and others, to refine and improve this bill. SB 1047 is well calibrated to what we know about foreseeable artificial intelligence risks, and it deserves to be enacted.”
Critics of SB 1047 — including OpenAI and Anthropic, Representatives Zoe Lofgren and Nancy Pelosi, and the California Chamber of Commerce — argue that it focuses too heavily on catastrophic harms and could disproportionately burden small open source AI developers. In response, the bill was amended to replace potential criminal penalties with civil penalties, narrow the enforcement powers of the California Attorney General, and adjust the membership requirements for the Board of Frontier Models the bill would create.
Once the state Senate votes on the revised bill, which is expected to pass, the AI safety bill will reportedly go to Governor Gavin Newsom, who will decide its fate by the end of September, according to The New York Times.
Anthropic declined to comment beyond pointing to a letter its CEO, Dario Amodei, sent to Governor Newsom last week. OpenAI likewise declined to comment, pointing to a letter its chief strategy officer, Jason Kwon, sent to Senator Wiener last week.
Update, August 28: OpenAI declined to comment.