Microsoft, OpenAI, Google and others agree to combat election-related deepfakes

A coalition of 20 tech companies signed an agreement Friday to help prevent AI deepfakes in the critical 2024 elections taking place in more than 40 countries. OpenAI, Google, Meta, Amazon, Adobe and X are among the businesses joining the pact to prevent and combat AI-generated content that could influence voters. However, the agreement’s vague language and lack of binding enforcement call into question whether it goes far enough.

The list of companies signing the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” includes those that create and distribute AI models, as well as social platforms where the deepfakes are most likely to pop up. The signees are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly Twitter).

The group describes the agreement as “a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.” The signees have agreed to the following eight commitments:

  • Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate

  • Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content

  • Seeking to detect the distribution of this content on their platforms

  • Seeking to appropriately address this content detected on their platforms

  • Fostering cross-industry resilience to deceptive AI election content

  • Providing transparency to the public regarding how the company addresses it

  • Continuing to engage with a diverse set of global civil society organizations, academics

  • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

The accord will apply to AI-generated audio, video and images. It addresses content that "deceptively fake[s] or alter[s] the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide[s] false information to voters about when, where, and how they can vote."

The signees say they will work together to create and share tools to detect and address the online distribution of deepfakes. In addition, they plan to drive educational campaigns and “provide transparency” to users.

OpenAI CEO Sam Altman at a World Economic Forum session in Davos on January 18, 2024. (Fabrice Coffrini/AFP via Getty Images)

OpenAI, one of the signees, already said last month that it plans to suppress election-related misinformation worldwide. Images generated with the company's DALL-E 3 tool will be encoded with provenance data that acts as a digital watermark, clarifying their origin as AI-generated pictures. The ChatGPT maker said it would also work with journalists, researchers and platforms for feedback on its provenance classifier, a tool for detecting AI-generated images. It also plans to prevent chatbots from impersonating candidates.
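Provenance watermarks of this kind are typically C2PA-style manifests embedded inside the image file as labeled JUMBF boxes whose payload carries ASCII markers such as "c2pa". As a rough illustration only (this is not OpenAI's implementation, and the function name and byte-scan heuristic below are hypothetical), a naive scan of an image's raw bytes can flag files that might carry such a manifest:

```python
# Heuristic sketch, NOT a real C2PA verifier: C2PA provenance manifests
# are embedded in image files as JUMBF boxes whose payload includes the
# ASCII label "c2pa" (e.g. "c2pa.manifest"). Scanning raw bytes for that
# marker is a crude way to flag files that *might* carry provenance data.

def maybe_c2pa_tagged(image_bytes: bytes) -> bool:
    """Return True if the raw bytes contain a C2PA-style marker."""
    lowered = image_bytes.lower()
    return b"c2pa" in lowered or b"jumb" in lowered

if __name__ == "__main__":
    plain = b"\x89PNG\r\n\x1a\n" + b"\x00" * 64   # untagged stub file
    tagged = plain + b"jumbc2pa.manifest"          # stub with embedded marker
    print(maybe_c2pa_tagged(plain), maybe_c2pa_tagged(tagged))
```

A real check would parse the manifest with a C2PA SDK and cryptographically validate its signature rather than string-match; note also that this kind of metadata is easily stripped by re-encoding an image, which is why detection classifiers are pursued alongside watermarking.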

“We’re committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” Anna Makanju, Vice President of Global Affairs at OpenAI, wrote in the group’s joint press release. “We look forward to working with industry partners, civil society leaders and governments around the world to help safeguard elections from deceptive AI use.”

Notably absent from the list is Midjourney, the company whose AI image generator (of the same name) currently produces some of the most convincing fake photos. However, the company said earlier this month that it would consider banning political generations altogether during election season. Last year, Midjourney was used to create a viral fake image of Pope Francis unexpectedly strutting down the street in a puffy white jacket. One of Midjourney's closest competitors, Stability AI (maker of the open-source Stable Diffusion), did participate. Engadget contacted Midjourney for comment about its absence, and we'll update this article if we hear back.

Only Apple is absent among Silicon Valley’s “Big Five.” However, that may be explained by the fact that the iPhone maker hasn’t yet launched any generative AI products, nor does it host a social media platform where deepfakes could be distributed. Regardless, we contacted Apple PR for clarification but hadn’t heard back at the time of publication.

Although the general principles the 20 companies agreed to sound like a promising start, it remains to be seen whether a loose set of agreements without binding enforcement will be enough to combat a nightmare scenario where the world’s bad actors use generative AI to sway public opinion and elect aggressively anti-democratic candidates — in the US and elsewhere.

“The language isn’t quite as strong as one might have expected,” Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, told The Associated Press on Friday. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

AI-generated deepfakes have already been used in the 2024 US presidential election. As early as April 2023, the Republican National Committee (RNC) ran an ad using AI-generated images of President Joe Biden and Vice President Kamala Harris. The campaign for Ron DeSantis, who has since dropped out of the GOP primary, followed with AI-generated images of rival and likely nominee Donald Trump in June 2023. Both included easy-to-miss disclaimers that the images were AI-generated.

In January, New Hampshire voters were greeted with a robocall of an AI-generated impersonation of President Biden's voice, urging them not to vote. (Anadolu via Getty Images)

In January, an AI-generated deepfake of President Biden’s voice was used by two Texas-based companies to robocall New Hampshire voters, urging them not to vote in the state’s primary on January 23. The clip, generated using ElevenLabs’ voice cloning tool, reached up to 25,000 NH voters, according to the state’s attorney general. ElevenLabs is among the pact’s signees.

The Federal Communications Commission (FCC) acted quickly to prevent further abuses of voice-cloning tech in fake campaign calls. Earlier this month, it voted unanimously to ban AI-generated robocalls. The (seemingly eternally deadlocked) US Congress hasn't passed any AI legislation. In December, the European Union (EU) agreed on the AI Act, an expansive AI safety bill that could influence other nations' regulatory efforts.

“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” Microsoft Vice Chair and President Brad Smith wrote in a press release. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”

This article originally appeared on Engadget at https://www.engadget.com/microsoft-openai-google-and-others-agree-to-combat-election-related-deepfakes-203942157.html?src=rss

Their children were shot, so they used AI to recreate their voices and call lawmakers

The parents of a teenager who was killed in Florida's Parkland school shooting in 2018 have started a bold new project called The Shotline to lobby for stricter gun laws in the US. The Shotline uses AI to recreate the voices of children killed by gun violence and sends the recordings to lawmakers through automated calls, The Wall Street Journal reported.

The project launched on Wednesday, six years after a gunman killed 17 people and injured more than a dozen at a high school in Parkland, Florida. It features the voices of six children and young adults, some as young as ten, who lost their lives to gun violence across the US. Once you type in your zip code, The Shotline finds your local representative and lets you place an automated call from one of the six victims, in their own voice, urging stronger gun control laws. "I'm back today because my parents used AI to recreate my voice to call you," says the AI-generated voice of Joaquin Oliver, one of the teenagers killed in the Parkland shooting. "Other victims like me will be calling too." At the time of publishing, more than 8,000 AI calls had been submitted to lawmakers through the website.

“This is a United States problem and we have not been able to fix it,” Oliver’s father Manuel, who started the project along with his wife Patricia, told the Journal. “If we need to use creepy stuff to fix it, welcome to the creepy.”

To recreate the voices, the Olivers used a voice cloning service from ElevenLabs, a two-year-old startup that recently raised $80 million in a funding round led by Andreessen Horowitz. Using just a few minutes of vocal samples, the software can recreate voices in more than two dozen languages. The Olivers reportedly used their son's social media posts for his voice samples. Parents and legal guardians of gun violence victims can fill out a form to submit voice samples to The Shotline, to be added to its repository of AI-generated voices.


The project raises ethical questions about using AI to generate deepfakes of the voices of dead people. Last week, the Federal Communications Commission declared that robocalls made using AI-generated voices were illegal, a decision that came weeks after voters in New Hampshire received calls impersonating President Joe Biden telling them not to vote in their state's primary. An analysis by the security company Pindrop revealed that the Biden audio deepfake was created using software from ElevenLabs.

The company’s co-founder Mati Staniszewski told the Journal that ElevenLabs allows people to recreate the voices of dead relatives if they have the rights and permissions. But so far, it's not clear whether parents of minors had the rights to their children's likenesses.

This article originally appeared on Engadget at https://www.engadget.com/their-children-were-shot-so-they-used-ai-to-recreate-their-voices-and-call-lawmakers-003832488.html?src=rss

Midjourney might ban Biden and Trump images this election season

With the rise of AI tools that can quickly create modified images and videos, making fake images to spread political misinformation ahead of the upcoming US presidential election has become easier than ever. Midjourney's solution might be to ban political images altogether, according to Bloomberg. David Holz, Midjourney's CEO, reportedly told users during a chat session on Discord that the company is close to banning images of political figures such as Biden and Trump for the next 12 months.

"I know it's fun to make Trump pictures — I make Trump pictures," he told users who attended the session. "Trump is aesthetically really interesting. However, probably better to just not — better to pull out a little bit during this election. We'll see." As Bloomberg notes, people had previously used the company's AI to generate deepfakes of Trump getting arrested. The company ended free trials for its AI image generator after those images — along with those infamous deepfakes of the pope wearing a Balenciaga-inspired coat — went viral.

At the moment, the company already has rules in place prohibiting the creation of "misleading public figures" and "events portrayals" with the "potential to mislead." Bloomberg was still able to create modified images of Trump covered in spaghetti using the older version of Midjourney's system, though, whereas the newer version refused to generate modified images of the former president. Of course, even if Midjourney does ban images of high-profile politicians, it will only be protecting its own platform from drawing the ire of critics and becoming the center of attention this election season. It will not prevent the use of AI tools in political disinformation campaigns or the spread of fake information meant to manipulate elections as a whole.

Other tech companies have also taken steps to help prevent political disinformation, or at least to help make it easier to identify. ChatGPT will soon start tagging images created using DALL-E 3, while Meta is working to develop technology that can detect and signify whether an image, video or audio clip has been generated using AI.

This article originally appeared on Engadget at https://www.engadget.com/midjourney-might-ban-biden-and-trump-images-this-election-season-064442076.html?src=rss

The FCC says robocalls that use AI-generated voices are illegal

The Federal Communications Commission is moving forward with its plan to ban AI robocalls. Commissioners voted unanimously on Wednesday in favor of a Declaratory Ruling that was proposed in late January. Under the measure, the FCC deems robocalls made using AI-generated voices to be "artificial" voices per the Telephone Consumer Protection Act (TCPA). That makes the practice illegal. The ruling takes effect immediately.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” FCC Chairwoman Jessica Rosenworcel said in a statement. “State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”

The TCPA is a 1991 law that bans artificial or recorded voices being used to call residences without the receivers' consent. It's up to the FCC to create rules to enforce that legislation, as Ars Technica notes. As the FCC pointed out last month, under the TCPA, telemarketers need "to obtain prior express written consent from consumers before robocalling them. If successfully enacted, this Declaratory Ruling would ensure AI-generated voice calls are also held to those same standards."

The FCC vote in favor of the ban comes at somewhat of an inflection point for AI. Not only have such technologies become vastly more widespread over the last year or so, an AI-generated version of President Joe Biden's voice was used in a recent robocall that urged Democrats not to vote in New Hampshire's Presidential primary. A criminal investigation into that incident is underway.

Given that we're in an election year and the volume of misinformation and disinformation is already likely to rise, clamping down on AI robocalls now seems like a wise move. While state AGs can take action against robocallers, the FCC also has the ability to fine them under the TCPA. Last year, the agency issued its largest-ever fine, $300 million, against a company that made more than 5 billion robocalls in a three-month period.

This article originally appeared on Engadget at https://www.engadget.com/the-fcc-says-robocalls-that-use-ai-generated-voices-are-illegal-162132319.html?src=rss

NASA’s Jet Propulsion Laboratory is laying off 570 workers

Even NASA is not immune to layoffs. The agency says it's cutting around 530 employees from its Jet Propulsion Laboratory (JPL) in California amid budget uncertainty. That's eight percent of the facility's workforce. JPL is laying off about 40 contractors too, just weeks after imposing a hiring freeze and canning 100 other contractors. Workers are being informed of their fates today.

"After exhausting all other measures to adjust to a lower budget from NASA, and in the absence of an FY24 appropriation from Congress, we have had to make the difficult decision to reduce the JPL workforce through layoffs," NASA said in a statement spotted by Gizmodo. "The impacts will occur across both technical and support areas of the Lab. These are painful but necessary adjustments that will enable us to adhere to our budget allocation while continuing our important work for NASA and our nation."

Uncertainty over the final budget that Congress will allocate to NASA for 2024 has been a major factor in the cuts. The agency is expected to receive around $300 million for Mars Sample Return (MSR), an ambitious mission in which NASA plans to launch a lander and orbiter to the red planet in 2028 and bring back soil samples. In its 2024 budget proposal, NASA requested just under $950 million for the project.

“While we still do not have an FY24 appropriation or the final word from Congress on our Mars Sample Return (MSR) budget allocation, we are now in a position where we must take further significant action to reduce our spending,” JPL Director Laurie Leshin wrote in a memo. "In the absence of an appropriation, and as much as we wish we didn’t need to take this action, we must now move forward to protect against even deeper cuts later were we to wait."

NASA has yet to provide a full cost estimate for MSR, though an independent report pegged the price at between $8 billion and $11 billion. In its proposed 2024 budget, the Senate Appropriations subcommittee ordered NASA to submit a year-by-year funding plan for MSR. If the agency does not do so, the subcommittee warned that the mission could be canceled.

That's despite MSR having enjoyed success so far. The Perseverance rover has dug up some soil samples that contain evidence of organic matter and would warrant closer analysis were NASA able to bring them back to Earth. The samples could help scientists learn more about Mars, such as whether the planet ever hosted life.

This article originally appeared on Engadget at https://www.engadget.com/nasas-jet-propulsion-laboratory-is-laying-off-570-workers-185336632.html?src=rss

The EU wants to criminalize AI-generated porn images and deepfakes

Back in 2022, the European Commission released a proposal for a directive on how to combat domestic violence and violence against women in other forms. Now, the European Council and Parliament have agreed with the proposal to criminalize, among other things, different types of cyber-violence. The proposed rules will criminalize the non-consensual sharing of intimate images, including deepfakes made by AI tools, which could help deter revenge porn. Cyber-stalking, online harassment, misogynous hate speech and "cyber-flashing," or the sending of unsolicited nudes, will also be recognized as criminal offenses.

The commission says that having a directive for the whole European Union that specifically addresses those particular acts will help victims in Member States that haven't criminalized them yet. "This is an urgent issue to address, given the exponential spread and dramatic impact of violence online," it wrote in its announcement. In addition, the directive will require member states to develop measures that can help users more easily identify cyber-violence and to know how to prevent it from happening if possible or how to seek help. It will require them to provide their residents with an online portal where they can send in reports, as well. 

In its reporting, Politico suggested that the recent spread of pornographic deepfake images using Taylor Swift's face spurred EU officials to move forward with the proposal. If you'll recall, X even had to temporarily block searches for the musician's name after the images went viral. "The latest disgusting way of humiliating women is by sharing intimate images generated by AI in a couple of minutes by anybody," European Commission Vice President Věra Jourová told the publication. "Such pictures can do huge harm, not only to popstars but to every woman who would have to prove at work or at home that it was a deepfake." At the moment, though, the aforementioned rules are just part of a bill that representatives of EU member states still need to approve. "The final law is also pending adoption in Council and European Parliament," the EU Council said. According to Politico, if all goes well and the bill becomes a law soon, EU states will have until 2027 to enforce the new rules.

This article originally appeared on Engadget at https://www.engadget.com/the-eu-wants-to-criminalize-ai-generated-porn-images-and-deepfakes-105037524.html?src=rss

Phony AI Biden robocalls reached up to 25,000 voters, says New Hampshire AG

Two companies based in Texas have been linked to a spate of robocalls that used artificial intelligence to mimic President Joe Biden. The audio deepfake was used to urge New Hampshire voters not to participate in the state's presidential primary. New Hampshire Attorney General John Formella said as many as 25,000 of the calls were made to residents of the state in January.

Formella says an investigation has linked the source of the robocalls to Texan companies Life Corporation and Lingo Telecom. No charges have yet been filed against either company or Life Corporation's owner, a person named Walter Monk. The probe is ongoing and other entities are believed to be involved. Federal law enforcement officials are said to be looking into the case too.

“We have issued a cease-and-desist letter to Life Corporation that orders the company to immediately desist violating New Hampshire election laws," Formella said at a press conference, according to CNN. "We have also opened a criminal investigation, and we are taking next steps in that investigation, sending document preservation notices and subpoenas to Life Corporation, Lingo Telecom and any other individual or entity."

The Federal Communications Commission also sent a cease-and-desist letter to Lingo Telecom. The agency said (PDF) it has warned both companies about robocalls in the past.

The deepfake was created using tools from AI voice cloning company ElevenLabs, which banned the user responsible. The company says it is "dedicated to preventing the misuse of audio AI tools and [that it takes] any incidents of misuse extremely seriously."

Meanwhile, the FCC is seeking to ban robocalls that use AI-generated voices. Under the Telephone Consumer Protection Act, the agency is responsible for making rules regarding robocalls. Commissioners are to vote on the issue in the coming weeks.

This article originally appeared on Engadget at https://www.engadget.com/phony-ai-biden-robocalls-reached-up-to-25000-voters-says-new-hampshire-ag-205253966.html?src=rss

Fallout from the Fulton County cyberattack continues, key systems still down

Key systems in Fulton County, Georgia have been offline since last week when a 'cyber incident' hit government systems. While the county has tried its best to continue operations as normal, phone lines, court systems, property records and more all went down. The county has not yet confirmed details of the cyber incident, such as what group could be behind it or motivations for the attack. As of Tuesday, there did not appear to be a data breach, according to Fulton County Board of Commissioners Chairman Robb Pitts.

Fulton County made headlines in August as the place where prosecutors chose to bring election interference charges against former president Donald Trump. But don't worry, officials assured the public that the case had not been impacted by the attack. “All material related to the election case is kept in a separate, highly secure system that was not hacked and is designed to make any unauthorized access extremely difficult if not impossible,” said Fulton County District Attorney Fani Willis.

Even so, Fulton County election systems did not appear to be the target of the attack. While Fulton County's Department of Registration and Elections went down, "there is no indication that this event is related to the election process," Fulton County said in a statement. "In an abundance of caution, Fulton County and the (Georgia) Secretary of State's respective technology systems were isolated from one another as part of the response efforts."

So far, the impact of the attack ranges widely, from delays in getting marriage certificates to disrupted court hearings. On Wednesday, a miscommunication during the outage even let a murder suspect out of custody. A manhunt continues after officials mistakenly released the suspect while he was being transferred between Clayton County and Fulton County for a hearing.

The county has not released information on when it expects systems to be fully restored, but it is working with law enforcement on recovery efforts. In the meantime, while constituents have trouble reaching certain government services, Fulton County put out a list of contact information for impacted departments. Fulton County also released a full list of impacted systems.

Amid the government IT outages, a local student also hacked into Fulton County Schools systems, StateScoop reported on Friday. The school system is still determining whether any personal information may have been breached, but most services came back online by Monday.

This article originally appeared on Engadget at https://www.engadget.com/fallout-from-the-fulton-county-cyberattack-continues-key-systems-still-down-161505036.html?src=rss

The FCC wants to make robocalls that use AI-generated voices illegal

The rise of AI-generated voices mimicking celebrities and politicians could make it even harder for the Federal Communications Commission (FCC) to fight robocalls and prevent people from getting spammed and scammed. That's why FCC Chairwoman Jessica Rosenworcel wants the commission to officially recognize calls that use AI-generated voices as "artificial," which would make the use of voice cloning technologies in robocalls illegal. Under the Telephone Consumer Protection Act (TCPA), which the FCC enforces, solicitations to residences that use an artificial voice or a recording are against the law. As TechCrunch notes, the FCC's proposal will make it easier to go after and charge bad actors.

"AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate," FCC Chairwoman Jessica Rosenworcel said in a statement. "No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls." If the FCC recognizes AI-generated voice calls as illegal under existing law, the agency can give State Attorneys General offices across the country "new tools they can use to crack down on... scams and protect consumers."

The FCC's proposal comes shortly after some New Hampshire residents received a call impersonating President Joe Biden, telling them not to vote in their state's primary. A security firm's analysis of the call determined that it was created using AI tools from a startup called ElevenLabs. The company had reportedly banned the account responsible for the message mimicking the president, but the incident could end up being just one of many attempts to disrupt the upcoming US elections using AI-generated content.

This article originally appeared on Engadget at https://www.engadget.com/the-fcc-wants-to-make-robocalls-that-use-ai-generated-voices-illegal-105628839.html?src=rss

NSA admits to buying Americans’ web browsing data from brokers without warrants

The National Security Agency's director has confirmed that the agency buys Americans' web browsing data from brokers without first obtaining warrants. Senator Ron Wyden (D-OR) had blocked the appointment of the NSA's incoming director, Timothy Haugh, until the agency answered his questions regarding its collection of Americans' location and internet data. Wyden said he'd been trying for three years to "publicly release the fact that the NSA is purchasing Americans' internet records."

In a letter dated December 11, current NSA Director Paul Nakasone confirmed to Wyden that the agency does make such purchases from brokers. "NSA acquires various types of [commercially available information] for foreign intelligence, cybersecurity, and other authorized mission purposes, to include enhancing its signals intelligence (SIGINT) and cybersecurity missions," Nakasone wrote. "This may include information associated with electronic devices being used outside and, in certain cases, inside the United States."

Nakasone went on to claim that the NSA "does not buy and use location data collected from phones known to be used in the United States either with or without a court order. Similarly, NSA does not buy and use location data collected from automobile telematics systems from vehicles known to be located in the United States."

An NSA spokesperson told Reuters that the agency uses such data sparingly but that it has notable value for national security and cybersecurity purposes. "At all stages, NSA takes steps to minimize the collection of US [personal] information, to include application of technical filters," the spokesperson said.

Wyden has called the practice unlawful. "Such records can identify Americans who are seeking help from a suicide hotline or a hotline for survivors of sexual assault or domestic abuse," he said.

The senator urged Director of National Intelligence Avril Haines to order US intelligence agencies to stop buying Americans’ private data without consent. He also asked Haines to direct intelligence agencies to "conduct an inventory of the personal data purchased by the agency about Americans, including, but not limited to, location and internet metadata." Wyden said that any data that does not comply with Federal Trade Commission standards regarding personal data sales should be deleted.

Wyden pointed to an FTC settlement that this month banned a data broker from selling location data. The agency alleged that the information, which it claimed was sold to buyers including government contractors, "could be used to track people’s visits to sensitive locations such as medical and reproductive health clinics, places of religious worship and domestic abuse shelters."

The FTC stated in its complaint against the broker, formerly known as X-Mode Social, that by "failing to fully inform consumers how their data would be used and that their data would be provided to government contractors for national security purposes, X-Mode failed to provide information material to consumers and did not obtain informed consent from consumers to collect and use their location data."

The settlement was the first of its kind with a data broker. In a statement, Wyden, who has been investigating the data broker industry for several years, said he was "not aware of any company that provides such a warning to users [regarding their consent] before collecting their data."

The issue of US federal agencies buying phone location data isn't exactly new. In 2020, it emerged that Customs and Border Protection had been doing so. The following year, Wyden claimed the Defense Intelligence Agency and the Pentagon bought and used location data from Americans’ phones.

This article originally appeared on Engadget at https://www.engadget.com/nsa-admits-to-buying-americans-web-browsing-data-from-brokers-without-warrants-154904461.html?src=rss