Artificial intelligence is moving quickly. It can now mimic humans convincingly enough to fuel massive phone scams and to spin up nonconsensual deepfake imagery of celebrities for use in harassment campaigns. The need to regulate this technology has never been more urgent, and that’s what California, home to many of AI’s biggest players, is trying to do with a bill known as SB 1047.
SB 1047, which passed the California State Assembly and Senate in late August, is now on the desk of California Governor Gavin Newsom — who will determine the fate of the bill. While the EU and some other governments have been hammering out AI regulation for years now, SB 1047 would be the strictest framework in the US so far. Critics have painted a nearly apocalyptic picture of its impact, calling it a threat to startups, open source developers, and academics. Supporters call it a necessary guardrail for a potentially dangerous technology — and a corrective to years of under-regulation. Either way, the fight in California could upend AI as we know it, and both sides are coming out in force.
AI’s power players are battling California — and each other
The original version of SB 1047 was bold and ambitious. Introduced by state Senator Scott Wiener as the California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, it set out to tightly regulate advanced AI models trained using more than 10^26 floating-point operations of compute, roughly the scale of today’s largest AI systems. The bill required developers of these frontier models to conduct thorough safety testing, including third-party evaluations, and to certify that their models posed no significant risk to humanity. Developers also had to implement a “kill switch” to shut down rogue models and report safety incidents to a newly established regulatory agency. They could face lawsuits from the attorney general over catastrophic safety failures. If they lied about safety, developers could even face perjury charges, which carry the threat of prison (though perjury prosecutions are extremely rare in practice).
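To make that threshold concrete, here’s a rough back-of-envelope sketch in Python. It uses the widely cited approximation that training compute is about 6 × parameters × training tokens; the model sizes below are hypothetical illustrations for scale, not figures from the bill or from any company’s disclosures.

```python
# Back-of-envelope check against SB 1047's 10^26-FLOP threshold.
# Uses the common ~6 * parameters * tokens approximation for total
# training compute. Model sizes here are illustrative assumptions,
# not numbers from the bill or from any lab.

SB1047_THRESHOLD_FLOPS = 1e26

def training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training FLOPs via the 6ND rule of thumb."""
    return 6 * num_parameters * num_tokens

# Hypothetical model scales (parameters, training tokens).
models = {
    "70B params, 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "400B params, 30T tokens": training_flops(400e9, 30e12),  # ~7.2e25
    "1T params, 50T tokens": training_flops(1e12, 50e12),     # ~3.0e26
}

for name, flops in models.items():
    covered = flops >= SB1047_THRESHOLD_FLOPS
    print(f"{name}: {flops:.1e} FLOPs -> above SB 1047 threshold? {covered}")
```

As the illustration suggests, only training runs around or beyond the very largest disclosed to date would clear the bar, which is why the bill’s backers describe it as targeting frontier models rather than today’s typical startup-scale systems.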
California’s legislators are in a uniquely powerful position to regulate AI. The country’s most populous state is home to many leading AI companies, including OpenAI, which publicly opposed the bill, and Anthropic, which hedged its support until the bill was amended. SB 1047 would also apply to any covered model offered in California’s market, giving it reach well beyond the state’s borders.
Unsurprisingly, significant parts of the tech industry revolted. At a Y Combinator event about AI regulation that I attended in late July, I spoke with Andrew Ng, cofounder of Coursera and founder of Google Brain, who described his plans to protest SB 1047 in the streets of San Francisco. Ng later made a surprise appearance onstage, criticizing the bill for its potential harm to academics and open source developers as Wiener looked on with his team.
“When someone trains a large language model…that’s a technology. When someone puts them into a medical device or into a social media feed or into a chatbot or uses that to generate political deepfakes or non-consensual deepfake porn, those are applications,” Ng said onstage. “And the risk of AI is not a function. It doesn’t depend on the technology — it depends on the application.”
Critics like Ng worry SB 1047 could slow progress, often invoking fears that it could erode the lead the US holds over adversarial nations like China and Russia. Representatives Zoe Lofgren and Nancy Pelosi and California’s Chamber of Commerce argue the bill is far too focused on fictional versions of catastrophic AI, and AI pioneer Fei-Fei Li warned in a Fortune column that SB 1047 would “harm our budding AI ecosystem.” That’s also a pressure point for Federal Trade Commission chair Lina Khan, who’s concerned about federal regulation stifling innovation in open source AI communities.
Onstage at the YC event, Khan emphasized that open source is a proven driver of innovation, one that has attracted hundreds of billions in venture capital to fuel startups. “We’re thinking about what open source should mean in the context of AI, both for you all as innovators but also for us as law enforcers,” Khan said. “The definition of open source in the context of software does not neatly translate into the context of AI.” Both innovators and regulators, she said, are still working out how to define, and how to protect, open source AI.
A weakened SB 1047 is better than nothing
The result of the criticism was a significantly softer second draft of SB 1047, which passed out of committee on August 15th. In the new SB 1047, the proposed regulatory agency has been removed, and the attorney general can no longer sue developers for negligent safety practices before a catastrophic event has occurred. Instead of submitting safety certifications under the threat of perjury, developers now only need to provide public “statements” about their safety practices, with no criminal liability. Additionally, entities spending less than $10 million on fine-tuning a model are not considered developers under the bill, shielding small startups and open source developers.
Still, that doesn’t mean the bill isn’t worth passing, according to supporters. Even in its weakened form, if SB 1047 “causes even one AI company to think through its actions, or to take the alignment of AI models to human values more seriously, it will be to the good,” wrote Gary Marcus, emeritus professor of psychology and neural science at NYU. The bill still offers safety protections and whistleblower shields, which supporters argue is better than nothing.
Anthropic CEO Dario Amodei said the bill was “substantially improved, to the point where we believe its benefits likely outweigh its costs” after the amendments. In a statement in support of SB 1047 reported by Axios, 120 current and former employees of OpenAI, Anthropic, Google’s DeepMind, and Meta said they “believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.”
“It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks,” the statement said.
Meanwhile, many detractors haven’t changed their position. “The edits are window dressing,” Andreessen Horowitz general partner Martin Casado posted. “They don’t address the real issues or criticisms of the bill.”
There’s also OpenAI’s chief strategy officer, Jason Kwon, who said in a letter to Newsom and Wiener that “SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.”
“Given those risks, we must protect America’s AI edge with a set of federal policies — rather than state ones — that can provide clarity and certainty for AI labs and developers while also preserving public safety,” Kwon wrote.
Newsom’s political tightrope
Though this heavily amended version of SB 1047 has made it to Newsom’s desk, he’s been noticeably quiet about it. Regulating technology has always involved a degree of political maneuvering, and Newsom’s tight-lipped approach to such controversial legislation signals as much. He may not want to rock the boat with technologists just ahead of a presidential election.
Many influential tech executives are also major donors to political campaigns, and in California, home to some of the world’s largest tech companies, these executives are deeply connected to the state’s politics. Venture capital firm Andreessen Horowitz has even enlisted Jason Kinney, a close friend of Governor Newsom and a Democratic operative, to lobby against the bill. For a politician, pushing for tech regulation could mean losing millions in campaign contributions. For someone like Newsom, who has clear presidential ambitions, that’s a level of support he can’t afford to jeopardize.
What’s more, the rift between Silicon Valley and Democrats has grown, especially after Andreessen Horowitz’s cofounders voiced support for Donald Trump. The firm’s strong opposition to SB 1047 means if Newsom signs it into law, the divide could widen, making it harder for Democrats to regain Silicon Valley’s backing.
So, it comes down to Newsom, who’s under intense pressure from the world’s most powerful tech companies and from fellow politicians like Pelosi. Lawmakers have spent decades trying to strike a delicate balance between regulation and innovation, but AI is nebulous and unprecedented, and a lot of the old rules don’t seem to apply. Newsom has until the end of September to make a decision that could upend the AI industry as we know it.