In late July, Elon Musk reposted a video to X that appeared to show Kamala Harris calling herself an incompetent “diversity hire” and a “deep state puppet” – the words spoken by a deepfake clone of the presidential nominee’s voice. “This is amazing,” said Musk, celebrating the descent of his $44 billion platform into a swamp of right-wing misinformation. Now, though, that could be set to change.
This week (September 17), California Governor Gavin Newsom signed several laws addressing the use of AI deepfakes that could influence elections, as well as other laws related to the AI cloning of Hollywood actors. Together, these represent one of the strictest crackdowns on AI companies to date, in the state that many of them call home.
One of the laws, AB 2655, takes aim at large online platforms such as Facebook, X, and TikTok. It requires these platforms to remove content deemed “materially deceptive” in relation to elections in the state, and to label other inauthentic or fake content. They’re also required to create channels for users to report fake content, with candidates and elected officials given the power to take legal action if platforms don’t comply.
Another law, AB 2355, requires the labelling of political ads that use AI. This comes just a few weeks after Donald Trump posted AI-generated images to Truth Social, which showed Taylor Swift and her fans endorsing his presidential run. Needless to say, these were totally fake. Swift has openly endorsed the Democratic candidate (for better or for worse), leading Trump to declare: “I HATE TAYLOR SWIFT.”
Both of these laws deal with state elections, so they won’t necessarily affect the race for the presidency. But they do lay out a blueprint for future AI regulation at a national level, with the FCC already proposing a nationwide requirement for disclosing AI-generated content in political ads.
“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation – especially in today’s fraught political climate,” says Newsom in a statement on the newly signed bills. “These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.”
AI was also a focal point of the long-running actors’ strike in California last year, with many fearing replacement by the technology. Newsom also addressed these concerns in laws signed this week. AB 2602 requires studios to obtain permission from a performer before creating a “digital replica” in their image, while AB 1836 protects dead personalities from AI-based resurrection, unless studios obtain consent from their estates.
“No one should live in fear of becoming someone else’s unpaid digital puppet,” says SAG-AFTRA chief negotiator Duncan Crabtree-Ireland in a statement. “Governor Newsom has led the way in protecting people – and families – from AI replication without real consent.”
Despite the unusually direct laws signed this week, some have been critical of Newsom for failing to address the more existential risks of AI technologies. These risks are largely covered in a proposed AI law, SB 1047, which would impose various guardrails on the most advanced (or ‘frontier’) machine learning models. AI critics have accused Newsom of using the Hollywood deepfake regulations to distract from the bigger-picture issues addressed in that bill.
On Tuesday, onstage at San Francisco’s Dreamforce conference, Newsom himself expressed reluctance toward SB 1047, which has support from the likes of Elon Musk and OpenAI rival Anthropic. The governor said that he was concerned about the “chilling effect” of the bill on AI development, adding: “We dominate this space, and I don’t want to lose that.” Of course, there might not be any space to dominate if companies spark an AI apocalypse, but that remains to be seen.