California Gov. Newsom vetoes bill SB 1047 that aims to prevent AI disasters

California Gov. Gavin Newsom has vetoed bill SB 1047, which aims to prevent bad actors from using AI to cause "critical harm" to humans. The California state assembly passed the legislation by a margin of 41-9 on August 28, but several organizations including the Chamber of Commerce had urged Newsom to veto the bill. In his veto message on Sept. 29, Newsom said the bill is "well-intentioned" but "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it." 

SB 1047 would have made developers of AI models responsible for implementing safety protocols to prevent catastrophic uses of their technology. Those protocols include preventive measures such as testing and outside risk assessment, as well as an "emergency stop" capability that could completely shut down the AI model. Penalties would have started at a minimum of $10 million for a first violation, rising to $30 million for subsequent infractions. However, the bill was revised to eliminate the state attorney general's ability to sue AI companies over negligent practices if a catastrophic event had not occurred; companies would instead be subject to injunctive relief and could be sued only if their model caused critical harm.

The law would have applied to AI models that cost at least $100 million to develop and require 10^26 FLOPS (floating-point operations) of compute to train. It also would have covered derivative projects in instances where a third party has invested $10 million or more in developing or modifying the original model. Any company doing business in California would have been subject to the rules if it met the other requirements. Addressing the bill's focus on large-scale systems, Newsom said, "I do not believe this is the best approach to protecting the public from real threats posed by the technology." The veto message adds:

By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.
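To put the bill's 10^26 FLOPS training threshold in perspective, a rough estimate is possible with the widely cited "6ND" heuristic, which approximates a dense transformer's training compute as six times its parameter count times its training-token count. The model sizes below are hypothetical illustrations, not figures from the bill or any real system.

```python
# Rough training-compute estimate using the common "6ND" heuristic:
# total FLOPs ≈ 6 × parameters × training tokens.
# The parameter/token counts below are hypothetical examples.

SB1047_THRESHOLD = 1e26  # compute threshold named in the bill, in FLOPs


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens


def covered(params: float, tokens: float) -> bool:
    """Would a model of this scale cross the bill's compute threshold?"""
    return training_flops(params, tokens) >= SB1047_THRESHOLD


# A 70B-parameter model trained on 15T tokens stays well under the bar...
print(covered(70e9, 15e12))   # 6.3e24 FLOPs -> False
# ...while a 2T-parameter model trained on 10T tokens crosses it.
print(covered(2e12, 10e12))   # 1.2e26 FLOPs -> True
```

Under this rough heuristic, only the very largest frontier-scale training runs would plausibly exceed the threshold, which is the concentration on big models that Newsom's veto message objects to.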

The earlier version of SB 1047 would have created a new department called the Frontier Model Division to oversee and enforce the rules. Instead, the bill was altered ahead of a committee vote to place governance in the hands of a Board of Frontier Models within the Government Operations Agency. The nine members would have been appointed by the state's governor and legislature.

The bill faced a complicated path to the final vote. SB 1047 was authored by California State Sen. Scott Wiener, who told TechCrunch: "We have a history with technology of waiting for harms to happen, and then wringing our hands. Let’s not wait for something bad to happen. Let’s just get out ahead of it." Notable AI researchers Geoffrey Hinton and Yoshua Bengio backed the legislation, as did the Center for AI Safety, which has been raising the alarm about AI's risks over the past year.

"Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public," Newsom said in the veto message. The statement continues:

California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

SB 1047 drew heavy-hitting opposition from across the tech space. AI researcher Fei-Fei Li and Meta Chief AI Scientist Yann LeCun both critiqued the bill for limiting the potential to explore new uses of AI. The trade group representing tech giants such as Amazon, Apple and Google said SB 1047 would limit new developments in the state's tech sector. Venture capital firm Andreessen Horowitz and several startups also questioned whether the bill placed unnecessary financial burdens on AI innovators. Anthropic and other opponents of the original bill pushed for amendments that were adopted in the version of SB 1047 that passed California's Appropriations Committee on August 15.

This article originally appeared on Engadget at https://www.engadget.com/ai/california-gov-newsom-vetoes-bill-sb-1047-that-aims-to-prevent-ai-disasters-220826827.html?src=rss

Three men charged in connection with the Trump campaign hack

The US Department of Justice has charged three Iranian nationals for their alleged roles in an effort to hack into the emails and computers used by former President Donald Trump’s campaign staff and other political connections.

The Washington Post reported that DOJ officials filed charges against Masoud Jalili, Seyyed Ali Aghamiri and Yasar Balaghi in an indictment filed Thursday in the US District Court for the District of Columbia. The indictment alleges the three men “prepared for and engaged in a wide-ranging hacking campaign” against current and former US officials, political campaigns and the media.

According to the indictment, Jalili, Aghamiri and Balaghi’s "activity is part of Iran’s continuing efforts to [...] erode confidence in the US electoral process." They face charges including providing material support to a designated foreign terrorist organization, wire fraud and aggravated identity theft.

The suspects are accused of running a targeted hacking campaign conducted from Iran over a four-year period. Their victims include current and former officials with the US State Department and the Central Intelligence Agency, the US Ambassador to Israel and an Iranian human rights organization.

Then last May, the three hackers successfully gained access to accounts belonging to Trump campaign officials. (Attempts to breach Biden campaign staff were, apparently, unsuccessful.) President Joe Biden’s campaign staffers as well as news outlets like The Washington Post and Politico received unsolicited emails from an AOL account owned by “Robert” that contained materials stolen from the Trump campaign. They included some internal poll results and the vetting dossier for Trump’s running mate Senator J.D. Vance.

Because of extradition laws, it's unlikely these hackers will be brought to justice on US soil.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/three-men-charged-in-connection-with-the-trump-campaign-hack-191154617.html?src=rss

X suspends journalist Ken Klippenstein after he published J.D. Vance dossier

X suspended journalist Ken Klippenstein’s account earlier this afternoon. X’s Safety account said it issued the temporary suspension “for violating our rules on posting unredacted private personal information, specifically Sen. [J.D.] Vance’s physical address and the majority of his social security number.”

Several news outlets that received the vetting dossier of the Republican vice presidential candidate, leaked by hackers, chose not to publish the sensitive document since it contained personal information. Klippenstein felt the dossier was newsworthy and decided to publish it on Substack and his social media channels; X responded by taking down his account.

Engadget has viewed the dossier and can confirm that the details mentioned by X’s Safety team are present and unredacted in Klippenstein’s copy of the document, except for the last four digits of Vance’s social security number.

Klippenstein explained his decision to buck the media’s trend and release Sen. Vance’s dossier on his Substack. President Trump’s campaign has accused Iran’s government on more than one occasion of hacking into its files back in June and releasing the dossier. Other news outlets chose not to release the document, but Klippenstein says he felt they declined “in fear of finding itself at odds with the [US] government’s campaign against ‘foreign malign influence,’” referring to the National Counterterrorism Center’s organization of the same name that seeks to prevent interference in elections.

“I disagree,” Klippenstein added. “The dossier has been offered to me and I’ve decided to publish it because it’s of keen public interest in an election season.”

The suspension extends beyond Klippenstein’s account. X has flagged the link to the dossier and automatically blocks anyone who attempts to post it. Those who try receive a warning from X saying, “We can’t complete this request because this link has been identified by X or our partners as being potentially harmful.”

X (then Twitter) updated its policy on “hacked materials” after it blocked stories about Hunter Biden’s laptop in 2020, saying it would allow stories about hacked materials but not links to the material if it was published by the hacker or someone working “in concert” with them.

Update, September 27, 2024, 1:55PM ET: Meta will also block the sharing of the newsletter containing Vance's personal info, according to a Washington Post report. The company told the Post that sharing the dossier contravenes its policies on hacked materials and foreign meddling.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-suspends-journalist-ken-klippenstein-after-he-published-jd-vance-dossier-214219066.html?src=rss

FCC fines political consultant $6 million for deepfake robocalls

The Federal Communications Commission (FCC) has officially issued its full recommended fine against political consultant Steve Kramer for a series of illegal robocalls using deepfake AI technology and caller ID spoofing during the New Hampshire primaries. Kramer must pay $6 million in fines in the next 30 days or the Department of Justice will handle collection, according to an FCC statement.

Kramer violated the Truth in Caller ID Act of 2009, which prohibits anyone from knowingly transmitting “misleading or inaccurate caller identification information with the intent to defraud, cause harm or wrongfully obtain anything of value,” according to legislative records. The law predates the widespread use of AI, but the FCC voted unanimously this past February to apply it to such deepfakes.

The phony robocalls delivered pre-recorded deepfake audio of President Biden’s voice to New Hampshire residents in the lead-up to the 2024 presidential primary election. The fake President Biden told voters not to vote in the upcoming primary, saying, “Your vote makes a difference in November, not this Tuesday,” according to an earlier report from CBS New York. The robocalls were spoofed to appear to originate from the former chairwoman of the New Hampshire Democratic Party, according to The New York Times.

Kramer hired New Orleans magician (no, really, an actual magician) Paul Carpenter to make the phony recordings. Carpenter showed NBC News how he made the deepfake audio files of President Biden using an AI voice generator called ElevenLabs. The recordings, he claims, took only around 20 minutes to make. Carpenter says Kramer paid him through Venmo and that he thought the work was authorized by President Biden’s campaign. ElevenLabs has since shut down Carpenter’s account.

Kramer claims he sent the robocalls to raise awareness about the dangers and misuse of the technology. His apparent experiment only cost him $500 but, according to the political consultant, resulted in a massive return. “For me to do that and get $5 million worth of exposure, not for me,” Kramer told CBS New York. “I kept myself anonymous so the regulations could just play themselves out or begin to play themselves out. I don’t need to be famous. That’s not my intention. My intention was to make a difference.”

Kramer doesn’t just face a hefty FCC fine; he’s also facing criminal charges. New Hampshire Attorney General John M. Formella announced last May that Kramer had been charged with 13 felony counts of voter suppression and 13 misdemeanor counts of impersonating a candidate.

This article originally appeared on Engadget at https://www.engadget.com/ai/fcc-fines-political-consultant-6-million-for-deepfake-robocalls-190050186.html?src=rss

TikTok removes Russian state-owned media accounts for ‘covert influence’

TikTok has announced in its US Elections Integrity Hub that it has removed accounts associated with Rossiya Segodnya and TV-Novosti, which own and run Russian state media outlets Sputnik and RT. The company said it kicked the accounts off the social media platform for "engaging in covert influence operations" that violate its guidelines against spam and deceptive behavior. TikTok clarified that, under its state-affiliated media policy, the accounts' content wasn't shown in the For You feed and was labeled as such. Their videos were already restricted in the EU and the UK as well, but the accounts have now been permanently banned and are no longer visible anywhere in the world.

As CNBC notes, Sputnik has issued a statement on X that says "TikTok users and [its] 86,000 subscribers are no longer allowed to know the truth about most urgent geopolitical issues and laugh at Western politicians' gaffes in Sputnik International videos."

TikTok didn't give specific examples of how the outlets were trying to spread misinformation and manipulate this year's presidential election in the US. But the Office of the Director of National Intelligence and the FBI recently told reporters that Russia has generated the most AI content related to the election so far. It has reportedly created and spread AI-made text, images, audio and video online, mostly to "denigrate the Vice President and the Democratic Party" and to sow division by focusing on topics like immigration.

Earlier this month, the US government formally issued sanctions against Rossiya Segodnya and TV-Novosti, accusing RT of moving "beyond being simply a media outlet." It said the Russian government embedded a cyber operational team with ties to Russian intelligence within RT, and that team allegedly focuses on "influence and intelligence operations all over the world." That team even pays social media personalities to spread "unbranded content" meant to influence foreign government elections, the feds said. 

Meta banned Russian state media outlets from its products, including Facebook and Instagram, "for foreign interference activity" shortly after the US government announced the sanctions. It said it had found evidence in the past that the outlets tried to hide foreign interference activities, and that it expects them to continue their deceptive practices.

If you're wondering what kind of fake videos Russia has been releasing, Microsoft detailed a few in a recent threat analysis report. One video showed "Kamala Harris" attacking Trump rally attendees, while another video used an actor to accuse the Vice President of being involved in a 2011 hit-and-run incident that paralyzed a 13-year-old girl. There's also another fake video showing a New York City billboard claiming that Harris wants to change children's gender. The company warned that more Russian-made staged and AI-generated videos are bound to circulate online as the US gets closer to the election. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktok-removes-russian-state-owned-media-accounts-for-covert-influence-133006441.html?src=rss

Biden administration seeks ban on auto software from China

The Biden administration just announced a comprehensive plan to ban Chinese software and some hardware from internet-connected cars in the US. This is being framed as a national security measure, with the administration stating that this software poses “new threats to our national security, including through our supply chains.”

This is the same reasoning behind a recent ban of telecommunications equipment from Chinese companies like Huawei and ZTE. In that case, the claims had teeth, as documents reportedly showed how Huawei was involved in the country’s surveillance efforts. Today’s announcement goes on to say that China “could use critical technologies” from connected vehicles “within our supply chains for surveillance and sabotage to undermine national security.”

The rules announced today go beyond mere software. They would also cover any piece of hardware that connects a vehicle to the outside world, including Bluetooth, cellular, Wi-Fi and satellite components, as well as cameras, sensors and onboard computers. The software ban would go into effect in model year 2027, with the related hardware prohibition starting in model year 2030.

The proposed ban also includes Russian auto software. The country has a fairly robust EV industry, but primarily for domestic use. There’s nothing in Russia that’s globally lusted after like the cheap EVs from Chinese companies like BYD.

This leads us to a major point. While this proposed ban is primarily for internet-connected software, it would effectively block all Chinese auto imports. The software is pretty much baked in, as are the items of hardware that allow for connectivity. It’s already tough to get one of these vehicles stateside, due to the recent tariffs placed on Chinese EVs, but this would make it nearly impossible.

Government officials, however, have remained steadfast that this is a move to improve national security, not an attempt to ban cheaper EVs from another market. “Connected vehicles and the technology they use bring new vulnerabilities and threats, especially in the case of vehicles or components developed in the P.R.C. [People's Republic of China] and other countries of concern,” said Jake Sullivan, President Biden’s national security adviser. These remarks were given to reporters over the weekend and were transcribed by The New York Times.

Sullivan went on to reference something called Volt Typhoon, which is an alleged Chinese effort to insert malicious code into American power systems, pipelines and other critical infrastructure. US officials fear that this program could be used to cripple American military bases in the event of a Chinese invasion of Taiwan or a similar military excursion.

Peter Harrell, who was previously the National Security Council’s senior director for international economics during the Biden administration, told The New York Times that “this is likely to be opening the door, over a number of years, to a much broader governmental set of actions” that would “likely see a continuation” no matter who wins the presidential election.

It’s worth noting that the BYD Seagull, as an example, sells for around $10,000. This makes it much cheaper than American EVs, even after getting slapped by that fat 100 percent tariff. A full-featured EV for $20,000 sounds pretty nice right about now. Oh well. It was fun to dream.

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/biden-administration-seeks-ban-on-auto-software-from-china-154025671.html?src=rss

Iranian hackers tried to send Trump leaks to Biden campaign

In late June and early July, Iranian hackers sent unsolicited emails to people associated with President Biden's camp. Those emails contained excerpts from materials not available to the public that had been stolen from former President Trump's campaign, according to a joint statement issued by the Office of the Director of National Intelligence, the FBI and the Cybersecurity and Infrastructure Security Agency. The feds clarified that there's no evidence that those recipients replied to the sender. In addition, the bad actors sent stolen materials to news publications, including The Washington Post and Politico.

The Post reported in August that the FBI was investigating Iranian hackers' attempt to infiltrate both Trump's and Biden's (now Kamala Harris') campaigns using spear-phishing techniques. Feds didn't find any evidence that anybody from the Democratic Party fell for their scheme. But the bad actors were reportedly able to take control of an email account owned by Roger Stone, a long-time Trump adviser, which they then used to send more emails with spear-phishing links to his contact list. 

"As the lead for threat response, the FBI has been tracking this activity, has been in contact with the victims, and will continue to investigate and gather information in order to pursue and disrupt the threat actors responsible," the authorities said in their announcement. 

The stolen materials were sent from an AOL account through emails signed with the name "Robert," according to The Post. When asked by the publication, "Robert" denied any connection to Iranian cyber attackers. While the feds didn't say what materials were sent out, The Post says they include the Trump campaign's research on Republican vice-presidential nominee JD Vance, as well as internal poll results.

Trump's camp is now calling for the Harris camp to disclose what materials it received, while asking news publications not to publish the stolen information. Harris spokesperson Morgan Finkelstein said the Democratic campaign is cooperating with authorities, since some of their people were also targeted on their personal emails, but they're "not aware of any material being sent" to them directly.

Microsoft previously found evidence that a group linked to the Iranian government created a website that throws attacks and insults at former President Trump. But Iran isn't the only country that's attempting to interfere with this year's presidential election in the US. Microsoft recently reported that Kremlin-affiliated Russian troll farms are running disinformation campaigns focused on discrediting Harris and her running mate Tim Walz. These Russian troll farms have been releasing inauthentic videos showing the Democratic nominees in a bad light, including one that used an actor to accuse Harris of being involved in a 2011 hit-and-run incident that paralyzed a 13-year-old girl.

This article originally appeared on Engadget at https://www.engadget.com/general/iranian-hackers-tried-to-send-trump-leaks-to-biden-campaign-120017606.html?src=rss

Microsoft says Russian troll farms are targeting the Harris-Walz campaign

Kremlin-affiliated Russian troll farms are running disinformation campaigns that aim to interfere with this year's US presidential elections, and according to Microsoft, they're focusing their efforts on discrediting Kamala Harris and Tim Walz. The company has published a new report detailing the movements of two troll farms being monitored by the Microsoft Threat Analysis Center. 

These Kremlin-backed actors apparently struggled to find the right approach shortly after President Biden stepped down as a candidate, but in late August and early September, one of them started circulating inauthentic videos that managed to generate millions of views. One video depicted a supposed attack by Harris supporters on Trump rally attendees. Another video used an actor to accuse Harris of being involved in a 2011 hit-and-run incident that paralyzed a 13-year-old girl. The second video, which went viral, was released by a days-old website pretending to be a San Francisco-based media outlet.

Meanwhile, the second troll farm stopped producing content about the 2024 Paris Olympic Games and started creating videos showing Harris in a bad light. One fake video showed a New York City billboard claiming that Harris wants to change children's gender. It was initially published on Telegram before being shared on X, where it got more than 100,000 views within just a few hours.

Microsoft warned that people should expect more Russian-made disinformation materials, including more staged and AI-edited videos, to circulate online as we get closer to the election. Earlier this month, the US government indicted two employees of Russian state media outlet RT, accusing them of planning to pay a Tennessee company $10 million to spread 2,000 propaganda videos on social media. The Treasury Department also sanctioned ANO Dialog, a Russian nonprofit that was allegedly involved with a campaign known as "Doppelganger," to create fake websites that would appear to American readers as legitimate major news sites. Microsoft said in its new report that it suspended more than 20 accounts connected to ANO Dialog. 

Meta also recently banned RT and other Russian state media outlets "for foreign interference activity." According to its notes, which the company shared with Engadget, it had seen Russian state-controlled media try to interfere with foreign governments and to evade detection in the past. It said that it expects them to keep trying to "engage in deceptive influence attempts across the internet."

It's not just Russia that's trying to influence the outcome of this year's US presidential election, though. Microsoft, Google and even the feds published reports back in August that Iranian hackers had been trying to spear-phish several advisers of the Biden-Harris and Trump campaigns. Microsoft also found campaigns by groups connected with the Iranian government designed to sway US votes, including one group that created a website attacking and insulting former President Donald Trump.

This article originally appeared on Engadget at https://www.engadget.com/general/microsoft-says-russian-troll-farms-are-targeting-the-harris-walz-campaign-031321352.html?src=rss

More electronic devices reportedly exploded in Lebanon a day after coordinated pager attack

An attack in Lebanon reportedly killed eight people and injured over 2,700. Hundreds of pagers belonging to Hezbollah members detonated simultaneously on Tuesday, leading the Iran-backed militant organization to blame Israel. The New York Times reported that Israel was behind the attacks and carried them out by hiding explosive material inside the pagers. A second wave of attacks, this time targeting handheld radios used by Hezbollah members, was reported on Wednesday by The Washington Post.

A day after Israeli leaders warned of escalating its military campaign against Hezbollah, pagers belonging to the Lebanese group’s members exploded at once. Witnesses reported seeing smoke emanating from the victims’ pockets, followed by sounds reminiscent of fireworks or gunshots.

Lebanon’s health minister said 200 of the injured were in critical condition. He added that many victims had facial injuries, especially to the eyes. Hand and stomach injuries were also common, according to the health minister. Among those wounded was Mojtaba Amini, Iran’s ambassador to Lebanon, according to Iranian state media.

A second wave of attacks across different areas of Lebanon on Wednesday reportedly killed one person and injured over 100 others. The latest attacks reportedly targeted “wireless devices.” One of the explosions, triggered by a handheld radio, was reported at a funeral for four victims of Tuesday’s blasts. “Anyone who has a device, take out the battery now!” Hezbollah security members yelled at the mourners, according to The Washington Post. “Turn off your phones, switch it to airplane mode!”

Israel hasn’t commented on the attacks. But NYT reports that officials (including American ones) briefed on the operation said Israel was behind them. They claim as little as one to two ounces of explosive material were planted next to each pager’s battery, along with a switch allowing for remote detonations. At 3PM in Lebanon on Tuesday, the pagers received a message (appearing to be from Hezbollah leadership) that triggered the coordinated explosions, according to officials. The devices allegedly beeped for several seconds before detonating.

The Washington Post reports that the logo of Taiwanese pager maker Gold Apollo was seen on the sabotaged pagers. However, Gold Apollo claimed the devices were “entirely handled” by a Hungarian company, BAC Consulting Kft, which was authorized to use Gold Apollo’s branding in some regions. “That product isn’t ours,” Gold Apollo’s founder and president, Hsu Ching-Kuang, told The New York Times. “They just stick on our company brand.”

Officials speaking with NYT claimed the devices were tampered with before reaching Lebanon. Most were Gold Apollo’s AR924 model, which the company displayed an image of on its website before removing it on Wednesday.

The attacks sparked widespread fear of using mobile devices. NYT reports some in Lebanon were scared to use their phones after Tuesday’s attacks, with one resident crying out, “Please hang up, hang up!” to their caller.

The Times reports that Hezbollah, long suspicious of cellphone use near the Israeli border due to the devices’ geolocation capabilities, recently switched from mobile phones to pagers. In February, Hezbollah chief Hassan Nasrallah reportedly warned the group that their phones were dangerous and could be used by Israel as spy tools. He advised the group that they should “break or bury them.”

Experts reportedly don’t yet know precisely how the pagers were distributed to Hezbollah’s members. They say that Iran, given its history of supplying Hezbollah with arms, tech and other military aid, would have been pivotal to their adoption and delivery.

Update, September 18, 2024, 11:48AM ET: This story has been updated to add new details about Tuesday’s attacks and the second wave of reported blasts on Wednesday.

This article originally appeared on Engadget at https://www.engadget.com/mobile/pagers-explode-simultaneously-in-hundreds-of-hezbollah-members-pockets-190304565.html?src=rss

California passes landmark law requiring actors’ permission for AI likenesses

California has given the go-ahead to a landmark AI bill to protect performers' digital likenesses. On Tuesday, Governor Gavin Newsom signed Assembly Bill 2602, which will go into effect on January 1, 2025. The bill requires studios and other employers to get consent before using “digital replicas” of performers. Newsom also signed AB 1836, which grants similar rights to deceased performers, requiring their estate’s permission before using their AI likenesses.

AB 2602, introduced in April, covers film, TV, video games, commercials, audiobooks and non-union performing jobs. Deadline notes its terms are similar to those in the contract that ended the 2023 actors’ strike against Hollywood studios. SAG-AFTRA, the film and TV actors’ union that held out for last year’s deal, strongly supported the bill. The Motion Picture Association first opposed the legislation but later switched to a neutral stance after revisions.

The bill mandates that employers can’t use an AI recreation of an actor’s voice or likeness to replace work the performer could have done in person. It also bars digital replicas if the actor’s contract doesn’t explicitly state how the deepfake will be used, and it voids any such deals signed when the performer didn’t have legal or union representation.

The bill defines a digital replica as a “computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that is embodied in a sound recording, image, audiovisual work, or transmission in which the actual individual either did not actually perform or appear, or the actual individual did perform or appear, but the fundamental character of the performance or appearance has been materially altered.”

Meanwhile, AB 1836 expands California’s postmortem right of publicity. Hollywood must now get permission from a decedent's estate before using their digital replicas. Deadline notes that exceptions were included for “satire, comment, criticism and parody, and for certain documentary, biographical or historical projects.”

“The bill, which protects not only SAG-AFTRA performers but all performers, is a huge step forward,” SAG-AFTRA chief negotiator Duncan Crabtree-Ireland told The LA Times in late August. “Voice and likeness rights, in an age of digital replication, must have strong guardrails around licensing to protect from abuse, this bill provides those guardrails.”

AB 2602 passed the California State Senate on August 27 with a 37-1 tally. (The lone no vote came from State Senator Brian Dahle, a Republican.) The bill then returned to the Assembly (which passed an earlier version in May) to formalize revisions made during Senate negotiations.

On Tuesday, SAG-AFTRA President Fran Drescher celebrated the passage, which the union fought for. “It is a momentous day for SAG-AFTRA members and everyone else, because the A.I. protections we fought so hard for last year are now expanded upon by California law thanks to the Legislature and Gov. Gavin Newsom,” Drescher said. 

This article originally appeared on Engadget at https://www.engadget.com/ai/california-passes-landmark-regulation-to-require-permission-from-actors-for-ai-deepfakes-174234452.html?src=rss