OpenAI chief strategy officer Jason Kwon insists in a new letter that regulation of artificial intelligence should be the responsibility of the federal government. As previously reported, Kwon said the new AI safety bill under consideration in California could slow progress and cause companies to leave the state.
A federally driven set of AI policies, rather than a patchwork of state laws, would foster innovation and enable the United States to lead the development of global standards, Kwon wrote. "Therefore, we join other AI labs, developers, experts, and members of California's congressional delegation in respectfully opposing SB 1047 and welcome the opportunity to outline some of our key concerns."
The letter was addressed to California State Senator Scott Wiener, who originally introduced SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
The bill sets standards ahead of the development of more powerful AI models: it requires precautions such as pre-deployment safety testing and other safeguards, adds whistleblower protections for AI lab employees, gives the California Attorney General the power to take legal action if AI models cause harm, and calls for the establishment of a public cloud computing cluster called CalCompute.
In response to the letter, published late Wednesday, Wiener noted that the proposed requirements would apply to any company doing business in California, regardless of whether it is headquartered in the state, making OpenAI's argument "meaningless." He also wrote that OpenAI "…doesn't criticize a single provision of the bill," concluding: "SB 1047 is a very reasonable bill that requires the big AI labs to do what they have already committed to do, which is to test their large-scale models for catastrophic safety risks."
The bill is currently awaiting a final vote before heading to Gov. Gavin Newsom’s desk.
The following is the full text of OpenAI’s letter: