Intuitive Machines’ moon lander sent home its first images and they’re breathtaking

Intuitive Machines’ lunar lander is well on its way to the moon after launching without a hitch on Thursday, but it managed to snap a few incredible images of Earth while it was still close to home. The company shared the first batch of images from the IM-1 mission on X today after confirming in an earlier post that the spacecraft is “in excellent health.” Along with a view of Earth and some partial selfies of the Nova-C lander, nicknamed Odysseus, you can even see the SpaceX Falcon 9 second stage falling away in the distance after separation.

Odysseus is on track to make its moon landing attempt on February 22, and so far appears to be performing well. The team posted a series of updates on X at the end of the week confirming the lander has passed some key milestones ahead of its touchdown, including firing its main engine for the first time. That burn marked “the first-ever in-space ignition of a liquid methane and liquid oxygen engine,” according to Intuitive Machines.

This article originally appeared on Engadget at https://www.engadget.com/intuitive-machines-moon-lander-sent-home-its-first-images-and-theyre-breathtaking-194208799.html?src=rss

NASA is looking for volunteers to live in its Mars simulation for a year

If extreme challenges are your cup of tea, NASA has the perfect opportunity for you. The space agency put out a call on Friday for volunteers to participate in its second yearlong simulated Mars mission, the Crew Health and Performance Exploration Analog (CHAPEA 2). For the duration of the mission, which will start in spring 2025, the four selected crew members will be housed in a 1,700-square-foot 3D-printed habitat in Houston. NASA is accepting applications on the CHAPEA website from now through April 2. It’s a paid gig, but NASA hasn’t publicly said how much participants will be compensated.

The Mars Dune Alpha habitat at NASA’s Johnson Space Center is designed to simulate what life might be like for future explorers on the red planet, where the environment is harsh and resources will be limited. There’s a crew currently living and working there as part of the first CHAPEA mission, which is now more than halfway through its 378-day assignment. During their stay, volunteers will perform habitat maintenance and grow crops, among other tasks. The habitat also has a 1,200-square-foot sandbox attached to it for simulated spacewalks.

To be considered, applicants must be US citizens aged 30 to 55 who speak English proficiently and have either a master’s degree in a STEM field plus at least two years of professional experience, a minimum of 1,000 hours piloting an aircraft or two years of work toward a STEM doctoral program. Certain types of professional experience may allow applicants without a master’s degree to qualify too. CHAPEA 2 is the second of three missions NASA has planned for the program, the first of which began on June 25, 2023.

This article originally appeared on Engadget at https://www.engadget.com/nasa-is-looking-for-volunteers-to-live-in-its-mars-simulation-for-a-year-172926396.html?src=rss

The Morning After: Zuckerberg’s Vision Pro review, and robotaxis crashing twice into the same truck

Sometimes, timing ruins things. Instead of detailing the disgust I feel toward this 'meaty' rice, this week's Morning After sets its sights on Mark Zuckerberg, the billionaire who's decided to review technology now. Does he know that's my gig?

The Meta boss unfavorably compared Apple's new Vision Pro to his company's Meta Quest 3 headset, which is a delightfully hollow and petty reason to 'review' something. But hey, I had to watch it. And now, maybe, you'll watch me?

We also look closer at Waymo's disastrous December, where two of its robotaxis collided with a truck. The ... same truck.

This week:

🥽🥽: Zuckerberg thinks the Quest 3 is a 'better product' than the Vision Pro

🤖🚙💥💥: Waymo robotaxis crash into the same pickup truck, twice

🚭🛫🚫: United Airlines grounds new Airbus fleet over no smoking sign law

Read this:

GLAAD, the world's largest LGBTQ media advocacy group, has published its first annual report on the video game industry. It found that nearly 20 percent of all players in the United States identify as LGBTQ, yet just 2 percent of games contain characters and storylines relevant to this community. And half of those might be Baldur's Gate 3 alone. (I half-joke.) The report notes that representation matters to many LGBTQ players, and that new generations of gamers are becoming more open to queer content regardless of their sexual orientation. We break down the full report here.

Like email more than video? Subscribe right here for daily reports, direct to your inbox.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-zuckerbergs-vision-pro-review-and-robotaxis-crashing-twice-into-same-truck-150021958.html?src=rss

Wyze camera security issue showed 13,000 users other owners’ homes

Some Wyze camera owners have reported that they were suddenly given access to cameras that weren't theirs and even got notifications for events inside other people's homes. Wyze cofounder David Crosby confirmed the issue to The Verge, telling the publication that "some users were able to see thumbnails of cameras that were not their own in the Events tab." Users started seeing strangers' camera feeds in their accounts after an outage that Wyze said was caused by an Amazon Web Services problem.

Crosby wrote in a post on the Wyze forum that the company's servers got overloaded after the outage, which corrupted some user data. The resulting security issue allowed users to "see thumbnails of cameras that were not their own in the Events tab." He clarified that users could only see those thumbnails, not the videos themselves, and that they were not able to view live streams from other people's cameras. Wyze identified 14 incidents before taking down the Events tab altogether.

The company said it's going to notify all affected users and that it has forcibly logged out everyone who recently used the Wyze app in order to reset tokens. "We will explain in more detail once we finish investigating exactly how this happened and further steps we will take to make sure it doesn’t happen again," Crosby added.

While the company doesn't have a detailed explanation for what happened yet, its swift confirmation of the incident is a huge departure from how it previously dealt with a security flaw. Back in 2022, cybersecurity firm Bitdefender revealed that in March 2019, it informed Wyze of a major security vulnerability in the Wyze Cam v1 model. The company didn't inform customers about the flaw, however, and didn't even issue a fix until three years later.

Update, February 20, 2024, 9:08PM ET: In an email sent to affected users and seen by Engadget, Wyze admitted that "about 13,000 Wyze users received thumbnails from cameras that were not their own and 1,504 users tapped on them. Most taps enlarged the thumbnail, but in some cases an Event Video was able to be viewed."

The company went on to explain that the glitch was caused by a mix-up of device ID and user ID mapping, after a new third-party caching client library struggled to cope with the "unprecedented" load created by client devices reconnecting all at once. Wyze promised to prevent a recurrence by adding "a new layer of verification" for connections, and said it will look for more reliable client libraries that can cope with such events.
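To make that failure mode concrete, here is a toy sketch (emphatically not Wyze's actual code; every name and ID below is hypothetical) of how a cache that maps connection IDs to users can hand one customer another customer's thumbnails when IDs are recycled during a reconnection storm.

```python
# Toy illustration of stale cache entries routing one user's camera events
# to another user during a reconnect storm. All names and IDs are hypothetical.
cache: dict[int, str] = {}  # connection_id -> user_id believed to own it

def on_device_reconnect(connection_id: int, user_id: str) -> None:
    # BUG: setdefault never overwrites an existing entry, so if connection
    # IDs are recycled before stale entries are evicted, the old mapping wins.
    cache.setdefault(connection_id, user_id)

def events_tab_owner(connection_id: int) -> str:
    return cache[connection_id]

on_device_reconnect(42, "alice")   # Alice's camera comes up on connection 42
# ...outage and mass reboot; connection 42 is recycled for Bob's camera...
on_device_reconnect(42, "bob")     # the stale "alice" entry is kept
print(events_tab_owner(42))        # "alice" -- Bob's thumbnails land in Alice's Events tab
```

The fix Wyze describes, a verification layer on connections, amounts to checking the claimed owner against an authoritative source before serving anything from the cache.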

This article originally appeared on Engadget at https://www.engadget.com/wyze-camera-security-issue-showed-13000-users-other-owners-homes-140059551.html?src=rss

Reddit reportedly signed a multi-million content licensing deal with an AI company

Ever posted or left a comment on Reddit? Your words will soon be used to train an artificial intelligence company's models, according to Bloomberg. The website signed a deal that's "worth about $60 million on an annualized basis" earlier this year, it reportedly told potential investors ahead of its expected initial public offering (IPO). Bloomberg didn't name the "large AI company" that's paying Reddit millions for access to its content, but their agreement could apparently serve as a model for future contracts, which could mean more multi-million-dollar deals for the firm.

Reddit first announced that it was going to start charging companies for API access in April last year. It said at the time that pricing would be split into tiers so that even smaller clients could afford to pay. Companies need that API access to be able to train their chatbots on posts and comments — a lot of which have been written by real people over the past 18 years — from subreddits on a wide variety of topics. However, that API is also used by other developers, including those providing users with third-party clients that are arguably better than Reddit's official app. Thousands of communities shut down last year in protest, which even caused stability issues that affected the whole website.
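For a sense of what that access looks like at the smallest scale, the sketch below pulls recent posts from a single subreddit through Reddit's public JSON listing endpoint using the requests library. The licensing deal presumably covers bulk, authenticated access on very different terms; the endpoint and fields shown here are simply the publicly documented listing format.

```python
import requests

def fetch_posts(subreddit: str, limit: int = 10) -> list[dict]:
    """Fetch recent posts from a subreddit via Reddit's public JSON listing."""
    url = f"https://www.reddit.com/r/{subreddit}/new.json"
    resp = requests.get(
        url,
        params={"limit": limit},
        # Reddit rejects requests without a descriptive User-Agent.
        headers={"User-Agent": "research-script/0.1 (example)"},
        timeout=10,
    )
    resp.raise_for_status()
    listing = resp.json()["data"]["children"]
    return [
        {"title": p["data"]["title"], "text": p["data"].get("selftext", "")}
        for p in listing
    ]

# for post in fetch_posts("space", limit=5):
#     print(post["title"])
```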

Reddit could go public as soon as next month with a $5 billion valuation. As Bloomberg notes, the website could convince investors still on the fence to take the leap by showing them that it can make big money and grow its revenue through deals with AI companies. The firms behind generative AI technologies are working to update their large language models, or LLMs, through various partnerships, after all. OpenAI, for instance, already inked an agreement that would give it the right to use Business Insider and Politico articles to train its AI models. It's also in talks with several publishers, including CNN, Fox Corp and Time, Bloomberg says.

OpenAI is facing several lawsuits that accuse it of using content without the express permission of copyright holders, though, including one filed by The New York Times in December. The AI company previously told Engadget that the lawsuit was unexpected, because it had ongoing "productive conversations" with the publication for a "high-value partnership."

This article originally appeared on Engadget at https://www.engadget.com/reddit-reportedly-signed-a-multi-million-content-licensing-deal-with-an-ai-company-124516009.html?src=rss

Amazon, one of the world’s largest employers, has called the National Labor Relations Board ‘unconstitutional’

Amazon, a company that employs more than 1.54 million people, has claimed that the National Labor Relations Board (NLRB), the federal agency responsible for protecting the rights of workers, is unconstitutional. Amazon made the claim in a legal document filed on Thursday as part of a case in which prosecutors from the Board have accused the e-commerce giant of discriminating against workers at an Amazon warehouse in Staten Island who had voted to unionize, according to The New York Times.

Amazon is not the first company to challenge the Board’s constitutionality. Last month, Elon Musk’s SpaceX sued the NLRB after the agency accused the company of unlawfully firing eight employees, calling the agency “unconstitutional” in the lawsuit. Weeks later, grocery chain Trader Joe’s, which the NLRB accused of union-busting, said that the NLRB’s structure and organization were “unconstitutional,” Bloomberg reported. And in separate lawsuits, two Starbucks baristas have independently challenged the agency’s structure as they sought to dissolve their unions.

Amazon’s claim is similar to the existing claims filed by SpaceX and Trader Joe’s. In the filing, the company’s lawyers argued that “the structure of the N.L.R.B. violates the separation of powers” by “impeding the executive power provided for in Article II of the United States Constitution.” In addition, Amazon claimed that through its hearings, the NLRB “can seek legal remedies beyond what’s allowed without a trial by jury.”

Seth Goldstein, a lawyer who represents unions in the Amazon and Trader Joe’s cases, told Reuters that these challenges to the NLRB increase the chances of the issue reaching the Supreme Court. They might also cause employers to stop bargaining with unions in the hope that courts will finally strip the federal agency of its powers, Goldstein said. Amazon has a contentious history with the NLRB, which said the company broke federal labor laws last year.

This article originally appeared on Engadget at https://www.engadget.com/amazon-one-of-the-worlds-largest-employers-has-called-the-national-labor-relations-board-unconstitutional-011519013.html?src=rss

Microsoft, OpenAI, Google and others agree to combat election-related deepfakes

A coalition of 20 tech companies signed an agreement Friday to help prevent AI deepfakes in the critical 2024 elections taking place in more than 40 countries. OpenAI, Google, Meta, Amazon, Adobe and X are among the businesses joining the pact to prevent and combat AI-generated content that could influence voters. However, the agreement’s vague language and lack of binding enforcement call into question whether it goes far enough.

The list of companies signing the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” includes those that create and distribute AI models, as well as social platforms where the deepfakes are most likely to pop up. The signees are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly Twitter).

The group describes the agreement as “a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.” The signees have agreed to the following eight commitments:

  • Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate

  • Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content

  • Seeking to detect the distribution of this content on their platforms

  • Seeking to appropriately address this content detected on their platforms

  • Fostering cross-industry resilience to deceptive AI election content

  • Providing transparency to the public regarding how the company addresses it

  • Continuing to engage with a diverse set of global civil society organizations, academics

  • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

The accord will apply to AI-generated audio, video and images that “deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote.”

The signees say they will work together to create and share tools to detect and address the online distribution of deepfakes. In addition, they plan to drive educational campaigns and “provide transparency” to users.
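The accord doesn’t prescribe particular detection techniques, but one common building block for sharing fingerprints of known fake media across platforms is perceptual hashing, which survives re-encoding and resizing far better than a cryptographic hash. Here is a minimal average-hash sketch in Python, offered purely as an illustration of the general technique, not as anything the signees have committed to.

```python
# Minimal "average hash": downscale to an 8x8 grayscale grid, then set one
# bit per pixel depending on whether it is brighter than the mean.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Platforms can compare hashes without sharing the images themselves;
# a small Hamming distance suggests the same (possibly re-encoded) picture.
# known_fakes = {average_hash("flagged_deepfake.jpg")}
# candidate = average_hash("uploaded_image.jpg")
# match = any(hamming_distance(candidate, h) <= 5 for h in known_fakes)
```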

[Image: OpenAI CEO Sam Altman at a session of the World Economic Forum in Davos on January 18, 2024. Photo: Fabrice Coffrini/AFP via Getty Images]

OpenAI, one of the signees, already said last month that it plans to suppress election-related misinformation worldwide. Images generated with the company’s DALL-E 3 tool will be encoded with provenance metadata that serves as a digital watermark, clarifying their origin as AI-generated pictures. The ChatGPT maker said it would also work with journalists, researchers and platforms for feedback on its provenance classifier, a separate detection tool. It also plans to prevent chatbots from impersonating candidates.
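OpenAI has said the DALL-E 3 metadata will follow the C2PA standard. The sketch below illustrates the underlying idea of a verifiable provenance manifest using a simple HMAC tag; real C2PA manifests use certificate-based signatures and a standardized container format, so treat this as a conceptual stand-in rather than OpenAI’s implementation.

```python
# Conceptual stand-in for a provenance manifest: record what generated an
# image and make the record tamper-evident. Not the C2PA format.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # a real system would use asymmetric signatures

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "generator": generator})
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    expected = hmac.new(SIGNING_KEY, manifest["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["tag"]):
        return False  # manifest was forged or altered
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(image_bytes).hexdigest()
```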

“We’re committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” Anna Makanju, Vice President of Global Affairs at OpenAI, wrote in the group’s joint press release. “We look forward to working with industry partners, civil society leaders and governments around the world to help safeguard elections from deceptive AI use.”

Notably absent from the list is Midjourney, the company behind the AI image generator of the same name, which currently produces some of the most convincing fake photos. However, the company said earlier this month it would consider banning political generations altogether during election season. Last year, Midjourney was used to create a viral fake image of Pope Francis unexpectedly strutting down the street in a puffy white jacket. One of Midjourney’s closest competitors, Stability AI (maker of the open-source Stable Diffusion), did participate. Engadget contacted Midjourney for comment about its absence, and we’ll update this article if we hear back.

Only Apple is absent among Silicon Valley’s “Big Five.” However, that may be explained by the fact that the iPhone maker hasn’t yet launched any generative AI products, nor does it host a social media platform where deepfakes could be distributed. Regardless, we contacted Apple PR for clarification but hadn’t heard back at the time of publication.

Although the general principles the 20 companies agreed to sound like a promising start, it remains to be seen whether a loose set of agreements without binding enforcement will be enough to combat a nightmare scenario where the world’s bad actors use generative AI to sway public opinion and elect aggressively anti-democratic candidates — in the US and elsewhere.

“The language isn’t quite as strong as one might have expected,” Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, told The Associated Press on Friday. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

AI-generated deepfakes have already been used in the US Presidential Election. As early as April 2023, the Republican National Committee (RNC) ran an ad using AI-generated images of President Joe Biden and Vice President Kamala Harris. The campaign for Ron DeSantis, who has since dropped out of the GOP primary, followed with AI-generated images of rival and likely nominee Donald Trump in June 2023. Both included easy-to-miss disclaimers that the images were AI-generated.

[Image: In January, New Hampshire voters were greeted with a robocall of an AI-generated impersonation of President Biden’s voice, urging them not to vote. Photo: Nathan Posner/Anadolu via Getty Images]

In January, an AI-generated deepfake of President Biden’s voice was used by two Texas-based companies to robocall New Hampshire voters, urging them not to vote in the state’s primary on January 23. The clip, generated using ElevenLabs’ voice cloning tool, reached up to 25,000 NH voters, according to the state’s attorney general. ElevenLabs is among the pact’s signees.

The Federal Communications Commission (FCC) acted quickly to prevent further abuses of voice-cloning tech in fake campaign calls. Earlier this month, it voted unanimously to ban AI-generated robocalls. The (seemingly eternally deadlocked) US Congress hasn’t passed any AI legislation. In December, the European Union (EU) reached agreement on its expansive AI Act, legislation that could influence other nations’ regulatory efforts.

“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” Microsoft Vice Chair and President Brad Smith wrote in a press release. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”

This article originally appeared on Engadget at https://www.engadget.com/microsoft-openai-google-and-others-agree-to-combat-election-related-deepfakes-203942157.html?src=rss