Gemini, Google's AI chatbot, won't answer questions about India’s upcoming national elections, the company wrote in a blog post today. “Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” the company wrote. The restrictions are similar to the ones Google announced in December ahead of this year's elections in the US and the EU.
“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” a Google spokesperson wrote to Engadget.
The guardrails are already in place in the US. When I asked Gemini for interesting facts about the 2024 US presidential election, it replied, “I’m still learning how to answer this question. In the meantime, try Google Search.” In addition to America’s Biden-Trump rematch (and down-ballot races that will determine control of Congress), at least 64 countries, representing about 49 percent of the world’s population, will hold national elections this year.
When I prompted OpenAI’s ChatGPT with the same question, it provided a long list of factoids. These included remarks about the presidential rematch, early primaries and Super Tuesday, voting demographics and more.
OpenAI outlined its plans to fight election-related misinformation in January. Its strategy focuses more on countering false information than on withholding answers altogether. Its approach includes stricter guidelines for DALL-E 3 image generation, a ban on applications that discourage people from voting and a prohibition on chatbots that pretend to be candidates or institutions.
It’s understandable why Google would err on the side of caution with its AI bot. Gemini got the company in hot water last month when social media users posted examples of the chatbot applying diversity filters to historical images, including presenting Nazis and America’s Founding Fathers as people of color. After a backlash (mainly from the internet’s “anti-woke” brigade), Google paused Gemini’s ability to generate people until it could iron out the kinks. It hasn’t yet lifted that block, and prompts for images of people now return, “Sorry, I wasn’t able to generate the images you requested.”
This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-will-steer-clear-of-election-talk-205135492.html?src=rss
A coalition of 20 tech companies signed an agreement Friday to help prevent AI deepfakes in the critical 2024 elections taking place in more than 40 countries. OpenAI, Google, Meta, Amazon, Adobe and X are among the businesses joining the pact to prevent and combat AI-generated content that could influence voters. However, the agreement’s vague language and lack of binding enforcement call into question whether it goes far enough.
The list of companies signing the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” includes those that create and distribute AI models, as well as social platforms where the deepfakes are most likely to pop up. The signees are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly Twitter).
The group describes the agreement as “a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.” The signees have agreed to the following eight commitments:
Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate
Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content
Seeking to detect the distribution of this content on their platforms
Seeking to appropriately address this content detected on their platforms
Fostering cross-industry resilience to deceptive AI election content
Providing transparency to the public regarding how the company addresses it
Continuing to engage with a diverse set of global civil society organizations, academics
Supporting efforts to foster public awareness, media literacy, and all-of-society resilience
The accord will apply to AI-generated audio, video and images that “deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote.”
The signees say they will work together to create and share tools to detect and address the online distribution of deepfakes. In addition, they plan to drive educational campaigns and “provide transparency” to users.
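The accord doesn't say what those shared tools will look like, but cross-platform takedowns of known fakes typically rely on hash sharing: once one service flags a clip, others can match re-uploads against a shared fingerprint list. Here's a minimal sketch of that idea, with every name below hypothetical; production systems use perceptual hashes that survive re-encoding, not the exact-match hash shown here.

```python
import hashlib

# Hypothetical sketch of cross-platform hash sharing for known deepfakes;
# the accord does not specify the signees' actual tooling. Real systems
# use perceptual hashes that survive re-encoding, whereas the exact
# SHA-256 match below only keeps the example self-contained.
shared_blocklist: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Derive a fingerprint for a media file."""
    return hashlib.sha256(media_bytes).hexdigest()

def flag_deepfake(media_bytes: bytes) -> None:
    """Called by the platform that first identifies a deepfake."""
    shared_blocklist.add(fingerprint(media_bytes))

def is_known_deepfake(media_bytes: bytes) -> bool:
    """Called by other platforms when new media is uploaded."""
    return fingerprint(media_bytes) in shared_blocklist
```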
[Image: OpenAI CEO Sam Altman (FABRICE COFFRINI via Getty Images)]
OpenAI, one of the signees, already said last month it plans to suppress election-related misinformation worldwide. Images generated with the company’s DALL-E 3 tool will be encoded with provenance data identifying them as AI-generated, and OpenAI is building a provenance classifier to detect DALL-E images. The ChatGPT maker said it would work with journalists, researchers and platforms for feedback on that classifier. It also plans to prevent its chatbots from impersonating candidates.
“We’re committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” Anna Makanju, Vice President of Global Affairs at OpenAI, wrote in the group’s joint press release. “We look forward to working with industry partners, civil society leaders and governments around the world to help safeguard elections from deceptive AI use.”
Notably absent from the list is Midjourney, maker of the eponymous AI image generator that currently produces some of the most convincing fake photos. However, the company said earlier this month it would consider banning political image generation altogether during election season. Last year, Midjourney was used to create a viral fake image of Pope Francis strutting down the street in a puffy white jacket. One of Midjourney’s closest competitors, Stability AI (maker of the open-source Stable Diffusion), did sign. Engadget contacted Midjourney for comment about its absence, and we’ll update this article if we hear back.
Only Apple is absent among Silicon Valley’s “Big Five.” However, that may be explained by the fact that the iPhone maker hasn’t yet launched any generative AI products, nor does it host a social media platform where deepfakes could be distributed. Regardless, we contacted Apple PR for clarification but hadn’t heard back at the time of publication.
Although the general principles the 20 companies agreed to sound like a promising start, it remains to be seen whether a loose set of agreements without binding enforcement will be enough to combat a nightmare scenario where the world’s bad actors use generative AI to sway public opinion and elect aggressively anti-democratic candidates — in the US and elsewhere.
“The language isn’t quite as strong as one might have expected,” Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, told The Associated Press on Friday. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”
[Image: In January, New Hampshire voters were greeted with a robocall featuring an AI-generated impersonation of President Biden’s voice, urging them not to vote (Anadolu via Getty Images)]
In January, an AI-generated deepfake of President Biden’s voice was used by two Texas-based companies to robocall New Hampshire voters, urging them not to vote in the state’s primary on January 23. The clip, generated with ElevenLabs’ voice-cloning tool, reached as many as 25,000 New Hampshire voters, according to the state’s attorney general. ElevenLabs is among the pact’s signees.
The Federal Communications Commission (FCC) acted quickly to prevent further abuses of voice-cloning tech in fake campaign calls. Earlier this month, it voted unanimously to ban AI-generated robocalls. The (seemingly eternally deadlocked) US Congress hasn’t passed any AI legislation. In December, the European Union (EU) reached agreement on its expansive AI Act, which could influence other nations’ regulatory efforts.
“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” Microsoft Vice Chair and President Brad Smith wrote in a press release. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”
This article originally appeared on Engadget at https://www.engadget.com/microsoft-openai-google-and-others-agree-to-combat-election-related-deepfakes-203942157.html?src=rss
As the US gears up for the 2024 presidential election, OpenAI has shared its plans for suppressing election-related misinformation worldwide, with a focus on transparency around the origin of information. One highlight is the use of cryptography, as standardized by the Coalition for Content Provenance and Authenticity (C2PA), to encode the provenance of images generated by DALL-E 3. OpenAI is also testing a provenance classifier for detecting AI-generated images, which should help voters assess the reliability of the content they encounter.
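In essence, C2PA provenance means attaching a cryptographically signed manifest to an image so anyone can verify who generated it and that it hasn't been tampered with. Below is a loose, self-contained sketch of that idea; note that real C2PA manifests are embedded in the file itself and signed with certificate-based credentials, not the shared-key HMAC used here to keep the example runnable.

```python
import hashlib
import hmac
import json

# Toy illustration in the spirit of C2PA provenance. Real C2PA manifests
# are embedded in the image file and signed with certificate-based
# (public-key) signatures; the shared-key HMAC below is a stand-in that
# keeps this sketch runnable with only the standard library.
SIGNING_KEY = b"demo-key-not-a-real-credential"

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Build and sign a provenance record for a generated image."""
    manifest = {
        "generator": generator,  # e.g. "DALL-E 3"
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is untampered and matches the image."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())
```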
OpenAI's approach is similar to (if not more robust than) DeepMind's SynthID, which digitally watermarks AI-generated images and audio as part of Google's own election content strategy published last month. Meta's AI image generator also adds an invisible watermark to its output, though the company has yet to detail how it will tackle election-related misinformation.
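SynthID's actual watermark is a learned signal designed to survive cropping and compression, and Google hasn't published its details. As a deliberately naive stand-in, the classic least-significant-bit technique shows the general idea of hiding a machine-detectable pattern in pixel data:

```python
# Naive least-significant-bit watermark, for illustration only. SynthID's
# real watermark is learned and robust to edits; an LSB scheme like this
# would not survive re-encoding, but it shows the basic concept of
# embedding a detectable bit pattern in pixel values.
WATERMARK_BITS = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit tag

def embed(pixels: list[int]) -> list[int]:
    """Overwrite the low bit of the first len(WATERMARK_BITS) pixels."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK_BITS):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(pixels: list[int]) -> bool:
    """Report whether the leading pixels carry the watermark pattern."""
    return [p & 1 for p in pixels[:len(WATERMARK_BITS)]] == WATERMARK_BITS
```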
OpenAI says it will soon work with journalists, researchers and platforms for feedback on its provenance classifier. Along the same theme, ChatGPT users will start to see real-time news from around the world complete with attribution and links. They'll also be directed to CanIVote.org, the official online source on US voting, when they ask procedural questions like where to vote or how to vote.
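OpenAI hasn't described how ChatGPT decides that a question is procedural. A toy sketch of that kind of routing might look like the following, where every identifier is hypothetical:

```python
# Hypothetical sketch of routing voting-procedure questions to an
# authoritative source; OpenAI has not published its implementation.
PROCEDURAL_PHRASES = ("where to vote", "how to vote",
                      "where do i vote", "how do i register to vote")

def route(question: str, model) -> str:
    """Redirect procedural voting questions; answer everything else."""
    if any(phrase in question.lower() for phrase in PROCEDURAL_PHRASES):
        return ("For up-to-date details on when, where and how to vote, "
                "see CanIVote.org.")
    return model.generate(question)
```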
Additionally, OpenAI reiterated its current policies against impersonation attempts in the form of deepfakes and chatbots, as well as against content made to distort the voting process or discourage people from voting. The company also forbids applications built for political campaigning, and its new GPTs let users report potential violations.
OpenAI says learnings from these early measures, if successful at all (and that's a very big "if"), will help it roll out similar strategies across the globe. The firm will have more related announcements in the coming months.
This article originally appeared on Engadget at https://www.engadget.com/openai-lays-out-its-misinformation-strategy-ahead-of-2024-elections-022549912.html?src=rss