White House gets voluntary commitments from AI companies to curb deepfake porn

The White House released a statement today outlining commitments that several AI companies are making to curb the creation and distribution of image-based sexual abuse. The participating businesses have laid out the steps they are taking to prevent their platforms from being used to generate non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM).

Specifically, Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI said they'll be:

  • "responsibly sourcing their datasets and safeguarding them from image-based sexual abuse"

All of the aforementioned except Common Crawl also agreed they'd be:

  • "incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse"

  • And "removing nude images from AI training datasets" when appropriate.

It's a voluntary commitment, so today's announcement doesn't create any new actionable steps or consequences for failing to follow through on those promises. But it's still worth applauding a good faith effort to tackle this serious problem. The notable absences from today's White House release are Apple, Amazon, Google and Meta.

Many big tech and AI companies have been making strides to make it easier for victims of NCII to stop the spread of deepfake images and videos separately from this federal effort. StopNCII has partnered with several companies for a comprehensive approach to scrubbing this content, while other businesses are rolling out proprietary tools for reporting AI-generated image-based sexual abuse on their platforms.

If you believe you've been the victim of non-consensual intimate image-sharing, you can open a case with StopNCII here; if you're below the age of 18, you can file a report with NCMEC here.

This article originally appeared on Engadget at https://www.engadget.com/ai/white-house-gets-voluntary-commitments-from-ai-companies-to-curb-deepfake-porn-191536233.html?src=rss

Google searches now link to the Internet Archive

Earlier this year, Google said goodbye to its cached web page feature, saying it’s no longer needed. While many were sad to see it go, we can now rejoice as Google is partnering with the Internet Archive to bring something substantially similar back. Thanks to the Internet Archive’s Wayback Machine, you can now look at archived web pages easily.

To view an archived copy, click the three dots beside any search result, open the “About this Result” panel and then click “More About This Page.” That takes you to the Wayback Machine, where anyone can browse snapshots of the page captured at various points in time.

Director of the Wayback Machine Mark Graham said some archived web pages won’t be available because their rights holders have opted out of having their sites archived by the Internet Archive.

This article originally appeared on Engadget at https://www.engadget.com/computing/google-searches-now-link-to-the-internet-archive-164814487.html?src=rss

Meta scraped every Australian user’s account to train its AI

In a government inquiry about AI adoption in Australia, Meta's global privacy director Melinda Claybaugh was asked whether her company has been collecting Australians' data to train its generative AI technology. According to ABC News, Claybaugh initially denied the claim, but upon being pressed, she ultimately admitted that Meta scrapes the photos and text from all Facebook and Instagram posts going as far back as 2007, unless the user had set their posts to private. Further, she admitted that the company isn't offering Australians an opt-out option like it does for users in the European Union.

Claybaugh said that Meta doesn't scrape the accounts of users under 18 years old, but she admitted that the company still collects their photos and other information if they're posted on their parents' or guardians' accounts. She couldn't answer, however, if the company collects data from previous years once a user turns 18. Upon being asked why Meta doesn't offer Australians the option not to consent to data collection, Claybaugh said that it exists in the EU "in response to a very specific legal frame," which most likely pertains to the bloc's General Data Protection Regulation (GDPR).

Meta had notified users in the EU that it would collect their data for AI training unless they opted out. "I will say that the ongoing conversation in Europe is the direct result of the existing regulatory landscape," Claybaugh explained during the inquiry. But even in the region, Claybaugh said that there's an "ongoing legal question around what is the interpretation of existing privacy law with respect to AI training." Meta decided not to offer its multimodal AI model and future versions in the bloc due to what it says is a lack of clarity from European regulators. Most of its concerns centered around the difficulties of training AI models with data from European users while complying with GDPR rules.

Despite those legal questions around AI adoption in Europe, the bottom line is that Meta is giving users in the bloc the power to block data collection. "Meta made it clear today that if Australia had these same laws Australians' data would also have been protected," Australian Senator David Shoebridge told ABC News. "The government's failure to act on privacy means companies like Meta are continuing to monetise and exploit pictures and videos of children on Facebook."

This article originally appeared on Engadget at https://www.engadget.com/apps/meta-scraped-every-australian-users-account-to-train-its-ai-120026200.html?src=rss

Microsoft joins coalition to scrub revenge and deepfake porn from Bing

Microsoft announced it has partnered with StopNCII to help remove non-consensual intimate images — including deepfakes — from its Bing search engine.

When a victim opens a "case" with StopNCII, the database creates a digital fingerprint, also called a "hash," of an intimate image or video stored on that individual's device without their needing to upload the file. The hash is then sent to participating industry partners, who can seek out matches for the original and remove them from their platform if it breaks their content policies. The process also applies to AI-generated deepfakes of a real person.
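The hash-and-match flow described above can be sketched in a few lines. This is an illustration, not StopNCII's actual implementation: the real service uses a perceptual hash, so that resized or re-encoded copies of an image still match, whereas the SHA-256 stand-in below only matches byte-identical files.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    # Stand-in for the on-device hash. Crucially, only this digest, never
    # the image or video itself, leaves the victim's device.
    return hashlib.sha256(media_bytes).hexdigest()

class PartnerPlatform:
    """Illustrative participating platform holding the shared case-hash list."""

    def __init__(self) -> None:
        self.case_hashes: set[str] = set()

    def receive_case_hash(self, digest: str) -> None:
        # Hashes arrive from the central database when a victim opens a case.
        self.case_hashes.add(digest)

    def should_review(self, upload: bytes) -> bool:
        # A match flags the upload for removal under the platform's
        # own content policies.
        return fingerprint(upload) in self.case_hashes
```

In practice a perceptual hash comparison uses a distance threshold rather than exact set membership, which is what lets partners catch near-duplicate copies.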

Several other tech companies have agreed to work with StopNCII to scrub intimate images shared without permission. Meta helped build the tool, and uses it on its Facebook, Instagram and Threads platforms; other services that have partnered with the effort include TikTok, Bumble, Reddit, Snap, Niantic, OnlyFans, PornHub, Playhouse and Redgifs.

Absent from that list is, strangely, Google. The tech giant has its own set of tools for reporting non-consensual images, including AI-generated deepfakes. However, failing to participate in one of the few centralized places for scrubbing revenge porn and other private images arguably places an additional burden on victims to take a piecemeal approach to recovering their privacy.

In addition to efforts like StopNCII, the US government has taken some steps this year to specifically address the harms done by the deepfake side of non-consensual images. The US Copyright Office called for new legislation on the subject, and a group of Senators moved to protect victims with the NO FAKES Act, introduced in July.

If you believe you've been the victim of non-consensual intimate image-sharing, you can open a case with StopNCII here and Google here; if you're below the age of 18, you can file a report with NCMEC here.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/microsoft-joins-coalition-to-scrub-revenge-and-deepfake-porn-from-bing-195316677.html?src=rss

X won’t train Grok on EU users’ public posts

X will permanently avoid training its AI chatbot Grok on the public posts of users in the European Union and European Economic Area following pressure from a regulator in the region. Last month, the company temporarily suspended the practice after Ireland’s Data Protection Commission (DPC) opened High Court proceedings against it. X has now made that commitment a permanent one, which prompted the DPC to end its legal action.

The DPC, which is the chief EU regulator for X, raised concerns that X may have been violating data protection rules and users' rights. Since May, X had offered users the option to opt out of having their public posts used to train Grok, implying that the company had enabled that setting for public accounts by default. Under the EU's General Data Protection Regulation (GDPR), however, companies are typically required to obtain explicit consent from users before processing their data. X does not have a media relations department that can be reached for comment.

Meanwhile, the DPC has urged the European Data Protection Board to weigh in "on some of the core issues that arise in the context of processing for the purpose of developing and training an AI model," including how personal data is processed for such purposes. "The DPC hopes that the resulting opinion will enable proactive, effective and consistent Europe-wide regulation of this area more broadly,” DPC commissioner Dale Sunderland said in a statement. “It will also support the handling of a number of complaints that have been lodged with/transmitted to the DPC” about such practices.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/x-wont-train-grok-on-eu-users-public-posts-155438606.html?src=rss

NVIDIA is reportedly in the spotlight of the DoJ’s AI antitrust probe

Update, September 4, 5:15PM ET: NVIDIA has denied Bloomberg's report. Speaking to CNBC, the chipmaker said that it had inquired with the US Department of Justice and has not been subpoenaed. It added that it was "happy to answer any questions regulators" have about its business. The headline of this story has been changed to reflect this denial. The original story follows unedited.


The DOJ has sent subpoenas to NVIDIA and other companies as part of an antitrust probe, as reported by Bloomberg. The federal government is seeking evidence that the company violated antitrust laws with regard to its AI processors. The presence of these subpoenas means the DOJ is one step closer to launching a formal complaint.

Officials speculate that NVIDIA is making it difficult for other companies to switch hardware suppliers and that it “penalizes buyers that don’t exclusively use its artificial intelligence chips.” This probe started in June, but recently escalated to include legally binding requests for information.

At the root of the DOJ probe is NVIDIA’s recent acquisition of RunAI, a company that makes software for managing AI computing tasks. The concern is that this purchase will make it harder for business customers to switch away from NVIDIA chips, as it would also necessitate a change in software.

However, that’s not the only reason behind this investigation. Regulators are also looking into whether NVIDIA gives preferential treatment to customers who exclusively use its technology or buy its complete systems. This special treatment allegedly includes first dibs on hardware and related supplies and unique pricing models.

NVIDIA has offered a terse response, telling Bloomberg that it “wins on merit, as reflected in our benchmark results and value to customers, who can choose whatever solution is best for them.” The implication is that the company’s market dominance comes down to hard work rather than sweetheart deals.

The investigation is still in its early days, as it hasn’t yet blossomed into a formal complaint. The company’s stock took a hit ahead of the DOJ announcement, but that was likely due to continuing delays for its Blackwell AI chip. The stock has still more than doubled this year as the AI boom continues to do its thing.

This article originally appeared on Engadget at https://www.engadget.com/ai/doj-subpoenas-nvidia-as-part-of-antitrust-probe-regarding-ai-processors-153435877.html?src=rss

Microsoft is sharing Copilot’s ‘next phase’ in a September 16 livestream

According to Microsoft, it's time for the "next phase of Copilot innovation." On September 16, the company is livestreaming an event called Microsoft 365 Copilot: Wave 2. Microsoft CEO Satya Nadella and corporate vice president of AI at work Jared Spataro will host the event on LinkedIn (it is "your AI assistant for work," so it's a fitting platform). The stream starts at 8 AM PT/11 AM ET and is available here.

Spataro first announced Microsoft 365 Copilot in early 2023 as a tool to create responses, draft presentations and break down data, to name a few of its uses. In the year and a half since, Copilot has folded in Microsoft's Bing chatbot and expanded to serve entire teams, generate images and reference multiple documents when it writes. It currently costs $360 annually per user.

This article originally appeared on Engadget at https://www.engadget.com/ai/microsoft-is-sharing-copilots-next-phase-in-a-september-16-livestream-134451868.html?src=rss

Lyft’s new price lock feature caps the cost of rides, even during peak hours

Lyft is rolling out a new price lock feature that caps the cost of rides, in an attempt to solve the problem of cost unpredictability for those who rely on the platform for daily commutes. The company says this tool will even work during peak hours, when rides are usually at their most expensive. There are, however, some caveats.

First of all, using the service requires a monthly subscription, though it’s only $3 per month. There’s also a curious lack of detail on how exactly the cap works. Does it just average past rides and exclude peak pricing? Is there a limit to how much can be capped? We reached out to Lyft and will update this post if we hear anything.
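Pending Lyft's answer, one plausible reading, and it is entirely hypothetical since the company hasn't published the mechanics, is that a rider locks a price for a saved route and then pays the lower of the live quote and the locked price. The function names and the break-even logic below are illustrative assumptions, not Lyft's actual rules:

```python
def capped_fare(live_quote: float, locked_price: float) -> float:
    """Hypothetical cap: the rider pays whichever is lower,
    the live (possibly surge-inflated) quote or the locked price."""
    return min(live_quote, locked_price)

def lock_worth_it(monthly_quotes: list[float], locked_price: float,
                  monthly_fee: float = 3.00) -> bool:
    """Hypothetical break-even check: the $3 subscription pays for
    itself once the surge savings it captures exceed the fee."""
    savings = sum(max(q - locked_price, 0.0) for q in monthly_quotes)
    return savings > monthly_fee
```

On this reading, a single surge-priced commute could cover the fee, which would explain Lyft's optimism about daily commuters.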

One thing is certain: Lyft is betting this feature will be a hit. It has suggested that commuters will take 40 percent more rides once the price lock tool becomes commonplace. However, it's worth noting that Lyft sets the prices in the first place, so it created the very instability this tool sets out to solve.

There’s also a promotion to advertise the price lock mechanism: 100 customers who are starting new jobs will receive free “first day” rides. This will be handled via LinkedIn. Just 100 rides? That seems pretty stingy for a company as large as Lyft, but what do I know?

This isn’t the first time Lyft has tried its hand at a subscription-based service. The company’s Pink subscription service has been an on-again/off-again thing for years. This is more or less a bundle of add-ons at this point. Pink stopped offering ride discounts but began offering perks like free priority pickups and three free cancellations per month. This program is still live, at $10 per month or $100 per year.

This article originally appeared on Engadget at https://www.engadget.com/transportation/lyfts-new-price-lock-feature-caps-the-cost-of-rides-even-during-peak-hours-100014522.html?src=rss

Clearview faces a €30.5 million fine for violating the GDPR

Clearview AI is back in hot — and expensive — water, with the Dutch Data Protection Authority (DPA) fining the company €30.5 million ($33.6 million) for violating the General Data Protection Regulation (GDPR). The release explains that Clearview created "an illegal database with billions of photos of faces," including Dutch individuals, and has failed to properly inform people that it's using their data. In early 2023, Clearview's CEO claimed the company had 30 billion images.

Clearview must immediately stop all violations or face up to €5.1 million ($5.6 million) in non-compliance penalties. "Facial recognition is a highly intrusive technology, that you cannot simply unleash on anyone in the world," Dutch DPA chairman Aleid Wolfsen stated. "If there is a photo of you on the Internet — and doesn't that apply to all of us? — then you can end up in the database of Clearview and be tracked." He added that facial recognition can help with safety but that "competent authorities" who are "subject to strict conditions" should handle it rather than a commercial company.

The Dutch DPA further stated that since Clearview is breaking the law, using its services is also illegal. Wolfsen warned that Dutch companies using Clearview could also be subject to "hefty fines." Clearview didn't issue an objection to the Dutch DPA's fine, so it is unable to launch an appeal.

This fine is far from the first time an entity has stood up against Clearview. In 2020, the LAPD banned its use, and the American Civil Liberties Union (ACLU) sued Clearview, with the settlement ending sales of the biometric database to any private companies. Italy and the UK have previously fined Clearview €20 million ($22 million) and £7.55 million ($10 million), respectively, and instructed the company to delete any data of its residents. Earlier this year, the EU also barred Clearview from untargeted face scraping on the internet. 

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/clearview-faces-a-%E2%82%AC305-million-for-violating-the-gdpr-124549856.html?src=rss