Here is what’s illegal under California’s 8 (and counting) new AI laws


California Governor Gavin Newsom is currently considering 38 AI-related bills, including the highly contentious SB 1047, which the state’s legislature sent to his desk for final approval. These bills try to address the most pressing issues in artificial intelligence: everything from the existential risks posed by futuristic AI systems and deepfake nudes made with AI image generators to Hollywood studios creating AI clones of dead performers.

“Home to the majority of the world’s leading AI companies, California is working to harness these transformative technologies to help address pressing challenges while studying the risks they present,” said Governor Newsom’s office in a press release.

So far, Governor Newsom has signed eight of them into law, some of which rank among America’s most far-reaching AI laws yet.

Deepfake nudes

On Thursday, Newsom signed two laws addressing the creation and spread of deepfake nudes. SB 926 criminalizes the practice, making it illegal to blackmail someone with AI-generated nude images that resemble them.

SB 981, which also became law on Thursday, requires social media platforms to establish channels for users to report deepfake nudes that resemble them. The content must then be temporarily blocked while the platform investigates it, and permanently removed if confirmed.

Watermarks

Also on Thursday, Newsom signed a bill into law to help the public identify AI-generated content. SB 942 requires widely used generative AI systems to disclose, in their content’s provenance data, that the content is AI-generated. For example, all images created by OpenAI’s DALL-E now need a little tag in their metadata saying they’re AI-generated.

Many AI companies already do this, and there are several free tools out there that can help people read this provenance data and detect AI-generated content.

Election deepfakes

Earlier this week, California’s governor signed three laws cracking down on AI deepfakes that could influence elections.

One of California’s new laws, AB 2655, requires large online platforms, like Facebook and X, to remove or label AI deepfakes related to elections, as well as create channels to report such content. Candidates and elected officials can seek injunctive relief if a large online platform is not complying with the act.

Another law, AB 2839, takes aim at social media users who post, or repost, AI deepfakes that could deceive voters about upcoming elections. The law went into effect immediately on Tuesday, and Newsom suggested Elon Musk may be at risk of violating it.

AI-generated political advertisements now require outright disclosures under California’s new law, AB 2355. That means moving forward, Trump may not be able to get away with posting AI deepfakes of Taylor Swift endorsing him on Truth Social (she endorsed Kamala Harris). The FCC has proposed a similar disclosure requirement at a national level and has already made robocalls using AI-generated voices illegal.

Actors and AI

Two laws that Newsom signed on Tuesday — which SAG-AFTRA, the nation’s largest film and broadcast actors union, was pushing for — create new standards for California’s media industry. AB 2602 requires studios to obtain permission from an actor before creating an AI-generated replica of their voice or likeness.

Meanwhile, AB 1836 prohibits studios from creating digital replicas of deceased performers without consent from their estates (e.g., legally cleared replicas were used in the recent “Alien” and “Star Wars” movies, as well as in other films).

What’s left?

Governor Newsom still has 30 AI-related bills to decide on before the end of September. During a Tuesday conversation with Salesforce CEO Marc Benioff at the 2024 Dreamforce conference, Newsom may have tipped his hand about SB 1047, and how he’s thinking about regulating the AI industry more broadly.

“There’s one bill that is sort of outsized in terms of public discourse and consciousness; it’s this SB 1047,” said Newsom onstage Tuesday. “What are the demonstrable risks in AI and what are the hypothetical risks? I can’t solve for everything. What can we solve for? And so that’s the approach we’re taking across the spectrum on this.”

Check back on this article for updates on what AI laws California’s governor signs, and what he doesn’t.

California weakens bill to prevent AI disasters before final vote, taking advice from Anthropic


California’s bill to prevent AI disasters, SB 1047, has faced significant opposition from many parties in Silicon Valley. Today, California lawmakers bent slightly to that pressure, adding several amendments suggested by AI firm Anthropic and other opponents.

On Thursday the bill passed through California’s Appropriations Committee, a major step toward becoming law, with several key changes, Senator Wiener’s office told TechCrunch.

“We accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” said Senator Wiener in a statement to TechCrunch. “These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation.”

SB 1047 still aims to prevent large AI systems from killing lots of people, or causing cybersecurity events that cost over $500 million, by holding developers liable. However, the bill now grants California’s government less power to hold AI labs to account.

What does SB 1047 do now?

Most notably, the bill no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred. This was a suggestion from Anthropic.

Instead, California’s attorney general can seek injunctive relief, requesting that a company cease a certain operation it finds dangerous, and can still sue an AI developer if its model does cause a catastrophic event.

Further, SB 1047 no longer creates the Frontier Model Division (FMD), a new government agency formerly included in the bill. However, the bill still creates the Board of Frontier Models (the core of the FMD) and places it inside the existing Government Operations Agency. In fact, the board is bigger now, with nine members instead of five. The Board of Frontier Models will still set compute thresholds for covered models, issue safety guidance and issue regulations for auditors.

Senator Wiener also amended SB 1047 so that AI labs no longer need to submit certifications of safety test results “under penalty of perjury.” Now, these AI labs are simply required to submit public “statements” outlining their safety practices, but the bill no longer imposes any criminal liability.

SB 1047 also now includes more lenient language around how developers ensure AI models are safe. The bill now requires developers to exercise “reasonable care” to ensure AI models do not pose a significant risk of causing catastrophe, instead of the “reasonable assurance” the bill required before.

Further, lawmakers added a protection for open source fine-tuned models. If someone spends less than $10 million fine-tuning a covered model, they are explicitly not considered a developer by SB 1047. The responsibility will still be on the original, larger developer of the model.

Why all the changes now?

While SB 1047 has faced significant opposition from U.S. congressmen, renowned AI researchers, Big Tech and venture capitalists, it has flown through California’s legislature with relative ease. These amendments are likely to appease SB 1047’s opponents and present Governor Newsom with a less controversial bill he can sign into law without losing support from the AI industry.

While Newsom has not publicly commented on SB 1047, he’s previously indicated his commitment to California’s AI innovation.

Anthropic tells TechCrunch it’s reviewing SB 1047’s changes before it takes a position. Not all of Anthropic’s suggested amendments were adopted by Senator Wiener.

“The goal of SB 1047 is—and has always been—to advance AI safety, while still allowing for innovation across the ecosystem,” said Nathan Calvin, senior policy counsel for the Center for AI Safety Action Fund. “The new amendments will support that goal.”

That said, these changes are unlikely to appease staunch critics of SB 1047. While the bill is notably weaker than before these amendments, SB 1047 still holds developers liable for the dangers of their AI models. That core fact about SB 1047 is not universally supported, and these amendments do little to address it.

“The edits are window dressing,” said Andreessen Horowitz general partner Martin Casado in a tweet. “They don’t address the real issues or criticisms of the bill.”

In fact, moments after SB 1047 passed on Thursday, eight members of the United States Congress representing California wrote a letter asking Governor Newsom to veto SB 1047. They write that the bill “would not be good for our state, for the start-up community, for scientific development, or even for protection against possible harm associated with AI development.”

What’s next?

SB 1047 is now headed to California’s Assembly floor for a final vote. If it passes there, it will need to be referred back to California’s Senate for a vote due to these latest amendments. If it passes both, it will head to Governor Newsom’s desk, where it could be vetoed or signed into law.