Here’s how Google will start helping you figure out which images are AI generated

Google is trying to be more transparent about whether a piece of content was created or modified using generative AI (GAI) tools. After joining the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member earlier this year, Google has revealed how it will start implementing the group’s digital watermarking standard.

Alongside partners including Amazon, Meta, and OpenAI, Google has spent the past several months figuring out how to improve the tech used for watermarking GAI-created or modified content. The company says it helped to develop the latest version of Content Credentials, a technical standard used to protect metadata detailing how an asset was created, as well as information about what has been modified and how. Google says the current version of Content Credentials is more secure and tamperproof due to stricter validation methods.

In the coming months, Google will start to incorporate the current version of Content Credentials into some of its main products, meaning it should soon be easier to tell in Google Search results whether an image was created or modified using GAI. If an image that pops up has C2PA metadata, you should be able to find out what impact GAI had on it via the About this image tool, which is also available in Google Images, Lens and Circle to Search.
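For the technically curious, Content Credentials travel inside the image file itself: in JPEGs, the C2PA manifest is carried as JUMBF data in APP11 segments. Below is a minimal Python sketch that merely checks whether such a segment appears to be present; the byte-pattern test is an illustrative assumption, and actually verifying the credentials (signatures, hash bindings) requires a full C2PA SDK.

```python
# Heuristic check for an embedded C2PA manifest in a JPEG.
# Content Credentials are carried as JUMBF boxes inside APP11
# (0xFFEB) segments. This only detects that such a segment seems
# to exist -- it does NOT verify anything. The b"jumb"/b"c2pa"
# byte-pattern test is an illustrative guess, not a spec-level parse.
import struct
import sys

def looks_like_c2pa(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":                # not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                               # lost segment sync; bail out
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):              # end of image / start of scan
            break                               # metadata segments sit before these
        if 0xD0 <= marker <= 0xD7 or marker in (0x01, 0xD8):
            i += 2                              # standalone marker, no payload
            continue
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True
        i += 2 + seg_len
    return False

if __name__ == "__main__":
    print(looks_like_c2pa(sys.argv[1]))
```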

The company is looking into how to use C2PA to tell YouTube viewers when footage was captured with a camera. Expect to learn more about that later this year.

Google also plans to use C2PA metadata in its ads systems. It didn't reveal many details about its plans there, other than to say it will use "C2PA signals to inform how we enforce key policies" and will do so gradually.

Of course, the effectiveness of all this depends on whether companies such as camera makers and the developers of GAI tools actually use the C2PA watermarking system. The approach isn't going to stop someone from stripping out an image's metadata either, which could make it harder for systems such as Google's to detect any GAI usage.

Meanwhile, throughout this year, we've seen Meta wrangle over how to disclose whether images were created with GAI across Facebook, Instagram and Threads. The company just changed its policy to make labels less visible on images that were edited with AI tools. Starting this week, if C2PA metadata indicates that someone (for instance) used Photoshop's GAI tools to tweak a genuine photo, the "AI info" label no longer appears front and center. Instead, it's buried in the post's menu.

This article originally appeared on Engadget at https://www.engadget.com/ai/heres-how-google-will-start-helping-you-figure-out-which-images-are-ai-generated-150219272.html?src=rss

OpenAI’s new safety board has more power and no Sam Altman

OpenAI has announced significant changes to its safety and security practices, including the establishment of a new independent board oversight committee. This move comes with a notable shift: CEO Sam Altman is no longer part of the safety committee, marking a departure from the previous structure.

The newly formed Safety and Security Committee (SSC) will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other key members include Quora CEO Adam D'Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, former EVP and General Counsel of Sony Corporation. 

This new committee replaces the previous Safety and Security Committee that was formed in June 2024, which included Altman among its members. The original committee was tasked with making recommendations on critical safety and security decisions for OpenAI projects and operations.

The SSC's responsibilities now extend beyond making recommendations. It will have the authority to oversee safety evaluations for major model releases and to supervise model launches. Crucially, the committee will have the power to delay a release until safety concerns are adequately addressed.

This restructuring follows a period of scrutiny regarding OpenAI's commitment to AI safety. The company has faced criticism in the past for disbanding its Superalignment team and the departures of key safety-focused personnel. The removal of Altman from the safety committee appears to be an attempt to address concerns about potential conflicts of interest in the company's safety oversight.

OpenAI's latest safety initiative also includes plans to enhance security measures, increase transparency about its work, and collaborate with external organizations. The company has already reached agreements with the US and UK AI Safety Institutes to research emerging AI safety risks and standards for trustworthy AI.

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-new-safety-board-has-more-power-and-no-sam-altman-230113547.html?src=rss

White House gets voluntary commitments from AI companies to curb deepfake porn

The White House released a statement today outlining commitments that several AI companies are making to curb the creation and distribution of image-based sexual abuse. The participating businesses have laid out the steps they are taking to prevent their platforms from being used to generate non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM).

Specifically, Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI said they'll be:

  • "responsibly sourcing their datasets and safeguarding them from image-based sexual abuse"

All of the aforementioned except Common Crawl also agreed they'd be:

  • "incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse"

  • And "removing nude images from AI training datasets" when appropriate.

It's a voluntary commitment, so today's announcement doesn't create any new actionable steps or consequences for failing to follow through on those promises. But it's still worth applauding a good-faith effort to tackle this serious problem. The notable absences from today's White House release are Apple, Amazon, Google and Meta.

Separately from this federal effort, many big tech and AI companies have been making strides to help victims of NCII stop the spread of deepfake images and videos. StopNCII has partnered with several companies on a comprehensive approach to scrubbing this content, while other businesses are rolling out proprietary tools for reporting AI-generated image-based sexual abuse on their platforms.

If you believe you've been the victim of non-consensual intimate image-sharing, you can open a case with StopNCII here; if you're below the age of 18, you can file a report with NCMEC here.

This article originally appeared on Engadget at https://www.engadget.com/ai/white-house-gets-voluntary-commitments-from-ai-companies-to-curb-deepfake-porn-191536233.html?src=rss

Google searches now link to the Internet Archive

Earlier this year, Google said goodbye to its cached web page feature, saying it's no longer needed. While many were sad to see it go, we can now rejoice, as Google is partnering with the Internet Archive to bring back something substantially similar. Thanks to the Internet Archive's Wayback Machine, you can now easily view archived web pages.

To access an archived page, click the three dots beside any search result to open the "About this result" panel, then click "More about this page." Doing so will lead you to the Wayback Machine, where anyone can see snapshots of the webpage from various points in time.
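If you'd rather skip the menu-diving, the Wayback Machine also exposes a public "availability" API that returns the snapshot closest to a given timestamp. Here's a minimal Python sketch against that endpoint (the error handling is deliberately thin):

```python
# Query the Internet Archive's public "availability" API for the
# snapshot of a URL closest to a given timestamp (YYYYMMDD...).
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str, timestamp: str = "") -> str | None:
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    api = f"https://archive.org/wayback/available?{query}"
    with urllib.request.urlopen(api, timeout=10) as resp:
        payload = json.load(resp)
    closest = payload.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

if __name__ == "__main__":
    # The archived copy of example.com nearest to January 1, 2020
    print(closest_snapshot("example.com", "20200101"))
```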

Mark Graham, director of the Wayback Machine, said some archived web pages won't be available because their rights holders have opted out of having their sites archived by the Internet Archive.

This article originally appeared on Engadget at https://www.engadget.com/computing/google-searches-now-link-to-the-internet-archive-164814487.html?src=rss

Meta scraped every Australian user’s account to train its AI

In a government inquiry about AI adoption in Australia, Meta's global privacy director Melinda Claybaugh was asked whether her company has been collecting Australians' data to train its generative AI technology. According to ABC News, Claybaugh initially denied the claim, but upon being pressed, she ultimately admitted that Meta scrapes all the photos and text in all Facebook and Instagram posts from as far back as 2007, unless the user has set their posts to private. Further, she admitted that the company isn't offering Australians an opt-out option like it does for users in the European Union.

Claybaugh said that Meta doesn't scrape the accounts of users under 18 years old, but she admitted that the company still collects their photos and other information if they're posted on their parents' or guardians' accounts. She couldn't answer, however, if the company collects data from previous years once a user turns 18. Upon being asked why Meta doesn't offer Australians the option not to consent to data collection, Claybaugh said that it exists in the EU "in response to a very specific legal frame," which most likely pertains to the bloc's General Data Protection Regulation (GDPR).

Meta had notified users in the EU that it would collect their data for AI training unless they opt out. "I will say that the ongoing conversation in Europe is the direct result of the existing regulatory landscape," Claybaugh explained during the inquiry. But even in the region, Claybaugh said that there's an "ongoing legal question around what is the interpretation of existing privacy law with respect to AI training." Meta decided not to offer its multimodal AI model and future versions in the bloc due to what it says is a lack of clarity from European regulators. Most of its concerns centered on the difficulties of training AI models with data from European users while complying with GDPR rules.

Despite those legal questions around AI adoption in Europe, the bottom line is that Meta gives users in the bloc the power to block data collection. "Meta made it clear today that if Australia had these same laws Australians' data would also have been protected," Australian Senator David Shoebridge told ABC News. "The government's failure to act on privacy means companies like Meta are continuing to monetise and exploit pictures and videos of children on Facebook."

This article originally appeared on Engadget at https://www.engadget.com/apps/meta-scraped-every-australian-users-account-to-train-its-ai-120026200.html?src=rss

Microsoft joins coalition to scrub revenge and deepfake porn from Bing

Microsoft announced it has partnered with StopNCII to help remove non-consensual intimate images — including deepfakes — from its Bing search engine.

When a victim opens a "case" with StopNCII, the database creates a digital fingerprint, also called a "hash," of an intimate image or video stored on that individual's device without their needing to upload the file. The hash is then sent to participating industry partners, who can seek out matches for the original and remove them from their platform if it breaks their content policies. The process also applies to AI-generated deepfakes of a real person.
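To make the privacy model concrete, here's a toy Python sketch of perceptual hashing, the general family of techniques behind this kind of on-device fingerprinting. StopNCII's actual algorithm isn't detailed here, so this simple "difference hash" is purely a stand-in: the point is that only a compact, irreversible hash ever leaves the device.

```python
# A toy "difference hash" (dHash) to illustrate on-device
# fingerprinting: only this compact hash leaves the device, never
# the image itself. StopNCII's production hashing is more robust
# than this illustrative stand-in.
from PIL import Image  # pip install pillow

def dhash(path: str, size: int = 8) -> int:
    # Shrink to a (size+1) x size grayscale grid, then record whether
    # each pixel is brighter than its right-hand neighbor: 64 bits.
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits; a small distance (say <= 10 of 64)
    # suggests the same image, even after mild re-encoding or resizing.
    return bin(a ^ b).count("1")
```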

Several other tech companies have agreed to work with StopNCII to scrub intimate images shared without permission. Meta helped build the tool, and uses it on its Facebook, Instagram and Threads platforms; other services that have partnered with the effort include TikTok, Bumble, Reddit, Snap, Niantic, OnlyFans, PornHub, Playhouse and Redgifs.

Absent from that list is, strangely, Google. The tech giant has its own set of tools for reporting non-consensual images, including AI-generated deepfakes. However, failing to participate in one of the few centralized places for scrubbing revenge porn and other private images arguably places an additional burden on victims to take a piecemeal approach to recovering their privacy.

In addition to efforts like StopNCII, the US government has taken some steps this year to specifically address the harms done by the deepfake side of non-consensual images. The US Copyright Office called for new legislation on the subject, and a group of Senators moved to protect victims with the NO FAKES Act, introduced in July.

If you believe you've been the victim of non-consensual intimate image-sharing, you can open a case with StopNCII here and Google here; if you're below the age of 18, you can file a report with NCMEC here.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/microsoft-joins-coalition-to-scrub-revenge-and-deepfake-porn-from-bing-195316677.html?src=rss

X won’t train Grok on EU users’ public posts

X will permanently avoid training its AI chatbot Grok on the public posts of users in the European Union and European Economic Area following pressure from a regulator in the region. Last month, the company temporarily suspended the practice after Ireland's Data Protection Commission (DPC) opened High Court proceedings against it. X has now made that suspension permanent, prompting the DPC to end its legal action.

The DPC, which is the chief EU regulator for X, raised concerns that X may have been violating data protection rules and users' rights. Since May, X had offered users the option to opt out of having their public posts used to train Grok, implying that the company had enabled that setting by default for public accounts. Under the EU's General Data Protection Regulation (GDPR), however, companies are typically required to obtain explicit consent from users before processing their data. X does not have a media relations department that can be reached for comment.

Meanwhile, the DPC has urged the European Data Protection Board to weigh in "on some of the core issues that arise in the context of processing for the purpose of developing and training an AI model," including how personal data is processed for such purposes. "The DPC hopes that the resulting opinion will enable proactive, effective and consistent Europe-wide regulation of this area more broadly,” DPC commissioner Dale Sunderland said in a statement. “It will also support the handling of a number of complaints that have been lodged with/transmitted to the DPC” about such practices.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/x-wont-train-grok-on-eu-users-public-posts-155438606.html?src=rss

NVIDIA is reportedly in the spotlight of the DoJ’s AI antitrust probe

Update, September 4, 5:15PM ET: NVIDIA has denied Bloomberg's report. Speaking to CNBC, the chipmaker said that it had inquired with the US Department of Justice and has not been subpoenaed. It added that it was "happy to answer any questions regulators" have about its business. The headline of this story has been changed to reflect this denial. The original story follows unedited.


The DOJ has sent subpoenas to NVIDIA and other companies as part of an antitrust probe, as reported by Bloomberg. The federal government is seeking evidence that the company violated antitrust laws with regard to its AI processors. The presence of these subpoenas means the DOJ is one step closer to launching a formal complaint.

Officials are concerned that NVIDIA makes it difficult for other companies to switch hardware suppliers and that it "penalizes buyers that don't exclusively use its artificial intelligence chips." The probe started in June but recently escalated to include legally binding requests for information.

At the root of the DOJ probe is NVIDIA’s recent acquisition of RunAI, a company that makes software for managing AI computing tasks. The concern is that this purchase will make it harder for business customers to switch away from NVIDIA chips, as it would also necessitate a change in software.

However, that’s not the only reason behind this investigation. Regulators are also looking into whether NVIDIA gives preferential treatment to customers who exclusively use its technology or buy its complete systems. This special treatment allegedly includes first dibs on hardware and related supplies and unique pricing models.

NVIDIA has offered a terse response, telling Bloomberg that it "wins on merit, as reflected in our benchmark results and value to customers, who can choose whatever solution is best for them." The implication is that the company's market dominance comes down to hard work, not sweetheart deals.

The investigation is still in its early days, as it hasn't yet blossomed into a formal complaint. The company's stock took a hit ahead of the DOJ announcement, but that was likely due to continuing delays for its Blackwell AI chip. However, the stock has still more than doubled this year as the AI boom continues to do its thing.

This article originally appeared on Engadget at https://www.engadget.com/ai/doj-subpoenas-nvidia-as-part-of-antitrust-probe-regarding-ai-processors-153435877.html?src=rss

Microsoft is sharing Copilot’s ‘next phase’ in a September 16 livestream

According to Microsoft, it's time for the "next phase of Copilot innovation." On September 16, the company is livestreaming an event called Microsoft 365 Copilot: Wave 2. Microsoft CEO Satya Nadella and Jared Spataro, corporate vice president of AI at work, will host the event on LinkedIn (it is "your AI assistant for work," so it's a fitting platform). The stream starts at 8 AM PT/11 AM ET and is available here.

Spataro first announced Microsoft 365 Copilot in early 2023 as a tool to create responses, draft presentations, and break down data, to name a few of its uses. In the year and a half since, Copilot has folded in Microsoft's Bing chatbot and expanded to serve entire teams, generate images, and reference multiple documents when it writes. It currently costs $360 annually per user.

This article originally appeared on Engadget at https://www.engadget.com/ai/microsoft-is-sharing-copilots-next-phase-in-a-september-16-livestream-134451868.html?src=rss