Google’s Gemini will steer clear of election talk in India

Gemini, Google's AI chatbot, won't answer questions about India’s upcoming national elections, the company wrote in a blog post today. “Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” the company wrote. The restrictions are similar to the ones Google announced in December ahead of global elections in the US and the EU.

“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” a Google spokesperson wrote to Engadget.

The guardrails are already in place in the US. When I asked Gemini for interesting facts about the 2024 US presidential election, it replied, “I’m still learning how to answer this question. In the meantime, try Google Search.” In addition to America’s Biden-Trump rematch (and down-ballot races that will determine control of Congress), at least 64 countries, representing about 49 percent of the world’s population, will hold national elections this year.

When I prompted OpenAI’s ChatGPT with the same question, it provided a long list of factoids. These included remarks about the presidential rematch, early primaries and Super Tuesday, voting demographics and more.

OpenAI outlined its plans to fight election-related misinformation in January. Its strategy focuses more on preventing wrong information than supplying none at all. Its approach includes stricter guidelines for DALL-E 3 image generation, banning applications that discourage people from voting, and preventing people from creating chatbots that pretend to be candidates or institutions.

It’s understandable why Google would err on the side of caution with its AI bot. Gemini landed the company in hot water last month when social media users posted examples of the chatbot applying diversity filters to "historical images," including presenting Nazis and America’s Founding Fathers as people of color. After a backlash (mainly from the internet’s “anti-woke” brigade), Google paused Gemini’s ability to generate images of people until it could iron out the kinks. The company hasn’t yet lifted that block, and Gemini now responds to prompts for images of people with, “Sorry, I wasn’t able to generate the images you requested.”

This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-will-steer-clear-of-election-talk-205135492.html?src=rss

Microsoft, OpenAI, Google and others agree to combat election-related deepfakes

A coalition of 20 tech companies signed an agreement Friday to help prevent AI deepfakes in the critical 2024 elections taking place in more than 40 countries. OpenAI, Google, Meta, Amazon, Adobe and X are among the businesses joining the pact to prevent and combat AI-generated content that could influence voters. However, the agreement’s vague language and lack of binding enforcement call into question whether it goes far enough.

The list of companies signing the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” includes those that create and distribute AI models, as well as social platforms where the deepfakes are most likely to pop up. The signees are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly Twitter).

The group describes the agreement as “a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.” The signees have agreed to the following eight commitments:

  • Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate

  • Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content

  • Seeking to detect the distribution of this content on their platforms

  • Seeking to appropriately address this content detected on their platforms

  • Fostering cross-industry resilience to deceptive AI election content

  • Providing transparency to the public regarding how the company addresses it

  • Continuing to engage with a diverse set of global civil society organizations, academics

  • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

The accord will apply to AI-generated audio, video and images. It covers content that “deceptively fake[s] or alter[s] the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide[s] false information to voters about when, where, and how they can vote.”

The signees say they will work together to create and share tools to detect and address the online distribution of deepfakes. In addition, they plan to drive educational campaigns and “provide transparency” to users.

OpenAI CEO Sam Altman gestures during a session of the World Economic Forum meeting in Davos on January 18, 2024. (Fabrice Coffrini/AFP via Getty Images)

OpenAI, one of the signees, already said last month it plans to suppress election-related misinformation worldwide. Images generated with the company’s DALL-E 3 tool will be encoded with a classifier providing a digital watermark to clarify their origin as AI-generated pictures. The ChatGPT maker said it would also work with journalists, researchers and platforms for feedback on its provenance classifier. It also plans to prevent chatbots from impersonating candidates.

“We’re committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” Anna Makanju, Vice President of Global Affairs at OpenAI, wrote in the group’s joint press release. “We look forward to working with industry partners, civil society leaders and governments around the world to help safeguard elections from deceptive AI use.”

Notably absent from the list is Midjourney, maker of the AI image generator (of the same name) that currently produces some of the most convincing fake photos. However, the company said earlier this month it would consider banning political image generation altogether during election season. Last year, Midjourney was used to create a viral fake image of Pope Francis strutting down the street in a puffy white jacket. One of Midjourney’s closest competitors, Stability AI (maker of the open-source Stable Diffusion), did sign on. Engadget contacted Midjourney for comment about its absence, and we’ll update this article if we hear back.

Only Apple is absent among Silicon Valley’s “Big Five.” However, that may be explained by the fact that the iPhone maker hasn’t yet launched any generative AI products, nor does it host a social media platform where deepfakes could be distributed. Regardless, we contacted Apple PR for clarification but hadn’t heard back at the time of publication.

Although the general principles the 20 companies agreed to sound like a promising start, it remains to be seen whether a loose set of agreements without binding enforcement will be enough to combat a nightmare scenario where the world’s bad actors use generative AI to sway public opinion and elect aggressively anti-democratic candidates — in the US and elsewhere.

“The language isn’t quite as strong as one might have expected,” Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, told The Associated Press on Friday. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

AI-generated deepfakes have already been used in the 2024 US presidential election cycle. As early as April 2023, the Republican National Committee (RNC) ran an ad using AI-generated images of President Joe Biden and Vice President Kamala Harris. The campaign of Ron DeSantis, who has since dropped out of the GOP primary, followed with AI-generated images of rival and likely nominee Donald Trump in June 2023. Both included easy-to-miss disclaimers noting the images were AI-generated.

In January, New Hampshire voters were greeted with a robocall featuring an AI-generated impersonation of President Biden’s voice, urging them not to vote. (Nathan Posner/Anadolu Agency via Getty Images)

In January, an AI-generated deepfake of President Biden’s voice was used by two Texas-based companies to robocall New Hampshire voters, urging them not to vote in the state’s primary on January 23. The clip, generated using ElevenLabs’ voice cloning tool, reached up to 25,000 NH voters, according to the state’s attorney general. ElevenLabs is among the pact’s signees.

The Federal Communications Commission (FCC) acted quickly to prevent further abuse of voice-cloning tech in fake campaign calls. Earlier this month, it voted unanimously to ban AI-generated robocalls. The (seemingly eternally deadlocked) US Congress hasn’t passed any AI legislation. In December, the European Union (EU) agreed on the expansive AI Act, safety-focused legislation that could influence other nations’ regulatory efforts.

“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” Microsoft Vice Chair and President Brad Smith wrote in a press release. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”

This article originally appeared on Engadget at https://www.engadget.com/microsoft-openai-google-and-others-agree-to-combat-election-related-deepfakes-203942157.html?src=rss

OpenAI lays out its misinformation strategy ahead of 2024 elections

As the US gears up for the 2024 presidential election, OpenAI has shared its plans for suppressing election-related misinformation worldwide, with a focus on boosting transparency around the origin of information. One highlight is the use of cryptography, as standardized by the Coalition for Content Provenance and Authenticity (C2PA), to encode the provenance of images generated by DALL-E 3. This will allow the platform to better detect AI-generated images using a provenance classifier, helping voters assess the reliability of certain content.
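The C2PA approach binds a signed provenance manifest to the image itself, so any later alteration of the bytes breaks verification. The toy sketch below illustrates that general idea using an HMAC over the image bytes; it is our own simplification for illustration, not OpenAI’s or C2PA’s actual implementation, which uses certificate-based signatures and embeds the manifest inside the file.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for the demo; real Content Credentials use
# X.509 certificate chains rather than a shared secret.
SECRET_KEY = b"demo-signing-key"

def attach_manifest(image_bytes: bytes, generator: str) -> dict:
    """Create a provenance manifest binding `generator` to these exact bytes."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"generator": generator, "sha256": digest})
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the manifest matches the image bytes."""
    expected = hmac.new(SECRET_KEY, manifest["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest was forged or tampered with
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(image_bytes).hexdigest()

image = b"\x89PNG...fake image bytes"
manifest = attach_manifest(image, "DALL-E 3")
print(verify_manifest(image, manifest))            # True: intact image verifies
print(verify_manifest(image + b"edit", manifest))  # False: any edit breaks the binding
```

The same property is what a provenance classifier relies on: an unmodified generated image carries a verifiable record of its origin, while stripping or editing the content invalidates that record.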

This approach is similar to, if not better than, DeepMind's SynthID, which digitally watermarks AI-generated images and audio as part of Google's own election content strategy published last month. Meta's AI image generator also adds an invisible watermark to its content, though the company has yet to share its plans for tackling election-related misinformation.

OpenAI says it will soon work with journalists, researchers and platforms for feedback on its provenance classifier. Along the same lines, ChatGPT users will start to see real-time news from around the world, complete with attribution and links. They'll also be directed to CanIVote.org, the official online source on US voting, when they ask procedural questions like where or how to vote.

Additionally, OpenAI reiterates its current policies on shutting down impersonation attempts in the form of deepfakes and chatbots, as well as content made to distort the voting process or to discourage people from voting. The company also forbids applications built for political campaigning, and when necessary, its new GPTs allow users to report potential violations.

OpenAI says lessons from these early measures, if they prove successful at all (and that's a very big "if"), will help it roll out similar strategies across the globe. The firm will have more related announcements in the coming months.

This article originally appeared on Engadget at https://www.engadget.com/openai-lays-out-its-misinformation-strategy-ahead-of-2024-elections-022549912.html?src=rss

Google will require political ads ‘prominently disclose’ their AI-generated aspects

AI-generated images and audio are already making their way into the 2024 presidential election cycle. In an effort to stem the flow of disinformation ahead of what is expected to be a contentious election, Google announced on Wednesday that it will require political advertisers to "prominently disclose" whenever their advertisement contains AI-altered or -generated aspects, "inclusive of AI tools." The new rules are based on the company's existing Manipulated Media Policy and will take effect in November.

“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” a Google spokesperson said in a statement obtained by The Hill. Small, inconsequential edits like resizing images, minor background cleanup or color correction will still be allowed; ads that depict people or things doing things they never actually did, or that otherwise alter actual footage, will be flagged.

Ads that do use AI-generated elements will need to be labeled as such in a "clear and conspicuous" manner that is easily seen by the user, per Google's policy. Ads will be moderated first by Google's own automated screening systems, then reviewed by a human as needed.
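The policy as described amounts to a simple triage: cosmetic edits pass, synthetic depictions require a disclosure label, and anything ambiguous is escalated to a human reviewer. A rough sketch of that decision logic (with category names of our own invention, not Google's) might look like:

```python
# Illustrative triage of the disclosure policy described above.
# The edit-category labels here are hypothetical stand-ins, not Google's taxonomy.
ALLOWED_EDITS = {"resize", "background_cleanup", "color_correction"}
SYNTHETIC_EDITS = {"ai_generated_person", "ai_altered_event", "voice_clone"}

def triage_ad(edits: set) -> str:
    """Classify an ad by the kinds of edits it contains."""
    if edits & SYNTHETIC_EDITS:
        return "requires_disclosure"  # synthetic depiction: must be labeled
    if edits <= ALLOWED_EDITS:
        return "allowed"              # purely cosmetic edits pass automatically
    return "human_review"             # unrecognized edits go to a reviewer

print(triage_ad({"resize", "color_correction"}))  # allowed
print(triage_ad({"ai_altered_event", "resize"}))  # requires_disclosure
print(triage_ad({"unknown_filter"}))              # human_review
```

In practice the hard part is the first stage (deciding which category an edit falls into), which is why the policy pairs automated screening with human review rather than relying on rules alone.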

Google's actions stand in contrast to those of other social media companies. X/Twitter recently announced that it reversed its previous position and will allow political ads on the site, while Meta continues to take heat for its own lackadaisical ad moderation efforts.

The Federal Election Commission is also beginning to weigh in on the issue. Last month, it sought public comment on amending a standing regulation "that prohibits a candidate or their agent from fraudulently misrepresenting other candidates or political parties" to clarify that the "related statutory prohibition applies to deliberately deceptive Artificial Intelligence campaign advertisements" as well.

This article originally appeared on Engadget at https://www.engadget.com/google-will-require-political-ads-prominently-disclose-their-ai-generated-aspects-232906353.html?src=rss

Trump’s Georgia election interference trial will be livestreamed on YouTube

In an unprecedented decision, Fulton County Judge Scott McAfee announced on Thursday that he will not only allow a press pool, cameras and laptops in the courtroom during the election interference trial of former President Donald Trump, but also that the entire proceedings will be livestreamed on YouTube. That stream will be operated by the court.

Trump and 18 co-defendants are slated to stand trial starting October 23rd. They're facing multiple racketeering charges over their efforts in the state of Georgia to subvert and overturn the results of the 2020 presidential election, which Fulton County DA Fani Willis describes as "a criminal enterprise" to unconstitutionally keep the disgraced politician in power. Trump has pleaded not guilty to all charges.

While recording court proceedings is uncommon in some jurisdictions, the state of Georgia takes a far more permissive approach to the practice.

“Georgia courts traditionally have allowed the media and the public in so that everyone can scrutinize how our process actually works,” Atlanta-based attorney Josh Schiffer told Atlanta First News. “Unlike a lot of states with very strict rules, courts in Georgia are going to basically leave it up to the judges.”

For example, when Trump was arraigned in New York on alleged financial crimes, only still photography was allowed; for his Miami charges, photography wasn't allowed at all. Federal courts bar cameras entirely, which means the public will not be privy to the in-court proceedings of Trump's federal election interference case, only the Georgia state prosecution.

This article originally appeared on Engadget at https://www.engadget.com/trumps-georgia-election-interference-trial-will-be-livestreamed-on-youtube-193146662.html?src=rss

Trump’s first post since he was reinstated on X is his mug shot

Former President Donald Trump is back on Twitter (now X) more than two years after he was banned from the platform in the aftermath of the January 6th Capitol riot. On August 24th, 2023, Trump tweeted for the first time since the website reinstated his account on November 19th, 2022. His first post? An image of the mug shot taken when he was booked at the Fulton County jail in Georgia on charges that he conspired to overturn the results of the 2020 presidential election.

The image also says "Election Interference" and "Never Surrender!," along with the URL of his website. Trump linked to his website in the tweet, as well, where his mug shot is also prominently featured with a lengthy note that starts with: "Today, at the notoriously violent jail in Fulton County, Georgia, I was ARRESTED despite having committed NO CRIME."

In November last year, Musk appeared to make the decision to reinstate Trump’s account based on the results of a Twitter poll. He asked people to vote on whether Trump should have access to his account restored. At the end of 24 hours, the option to reinstate the former president won with 51.8 percent of the more than 15 million votes cast. Musk admitted at the time that some of the action on the poll came from “bot and troll armies.” Prior to the poll, Musk also said the decision on whether to reinstate Trump would come from a newly formed moderation council, but he never followed through on that pledge.

The website then known as Twitter banned Trump in early 2021 after he broke the company’s rules against inciting violence. The initial suspension saw Trump lose access to his account for 12 hours, but days later, the company made the decision permanent. At first, Trump tried to skirt the ban, even going so far as to file a lawsuit against Twitter that ultimately failed. Following his de-platforming from Twitter, Facebook and other social media websites, Trump went on to create Truth Social. Following his reinstatement, Trump said he didn’t “see any reason” to return to the platform. That said, the promise of reaching a huge audience with something as dramatic as a mug shot was obviously too good for Trump to pass up, particularly with what is likely to be a messy Republican primary on the horizon.

This article originally appeared on Engadget at https://www.engadget.com/trumps-first-post-since-he-was-reinstated-on-x-is-his-mug-shot-025650320.html?src=rss

Hack left majority of UK voters’ data exposed for over a year

The UK's Electoral Commission has revealed that some personal information of around 40 million voters was left exposed for over a year. The agency — which regulates party and election finance and elections in the country — said it was the target of a “complex cyberattack.” It first detected suspicious activity on its network in October 2022, but said the intruders first gained access to its systems in August 2021.

The perpetrators found a way onto the Electoral Commission's servers, which hosted the agency's email and control systems, as well as copies of the electoral registers. Details of donations and loans to registered political parties and non-party campaigners were not affected, as those are stored on a separate system. The agency doesn't hold the details of anonymous voters or the addresses of overseas electors registered outside of the UK.

The data that was exposed included the names and addresses of UK residents who registered to vote between 2014 and 2022, along with those who are registered as overseas voters. Information provided to the commission through email and web forms was exposed too. 

"We know that this data was accessible, but we have been unable to ascertain whether the attackers read or copied personal data held on our systems," the commission said. The agency confirmed to TechCrunch that the attack could have affected around 40 million voters. According to UK census data, there were 46.6 million parliamentary electoral registrations and 48.8 million local government electoral registrations in December 2021.

The Electoral Commission says it had to adopt several measures before disclosing the hack. It had to lock out the "hostile actors," analyze the possible extent of the breach and put more security measures in place to stop a similar situation from happening in the future.

Data in the electoral registers is limited and much of it is in the public domain already, the agency said. As such, officials don't believe the data by itself represents a major risk to individuals. However, the agency warned, it's possible that the information "could be combined with other data in the public domain, such as that which individuals choose to share themselves, to infer patterns of behavior or to identify and profile individuals."

The Electoral Commission also noted that there was no impact on UK election security as a result of the attack. "The data accessed does not impact how people register, vote, or participate in democratic processes," it said. "It has no impact on the management of the electoral registers or on the running of elections. The UK’s democratic process is significantly dispersed and key aspects of it remain based on paper documentation and counting. This means it would be very hard to use a cyber-attack to influence the process."

This article originally appeared on Engadget at https://www.engadget.com/hack-left-majority-of-uk-voters-data-exposed-for-over-a-year-150045052.html?src=rss