California’s ‘click to cancel’ subscription bill is signed into law

Governor Gavin Newsom has signed California's "click to cancel" Assembly Bill 2863 into law to make it easier for consumers to opt out of subscriptions. The bill, introduced in April 2024, requires companies that allow online or in-app sign-ups to offer online or in-app cancellation as well.

"AB 2863 is the most comprehensive ‘Click to Cancel’ legislation in the nation, ensuring Californians can cancel unwanted automatic subscription renewals just as easily as they signed up — with just a click or two,” said California Assemblymember Pilar Schiavo.

Like many, you may have signed up for a service online only to be presented with a phone number when you go to cancel it. You then spend an hour on hold before trying to convince the person on the other end of the line to cancel a subscription that took five seconds to sign up for. California's new law is designed to put the kibosh on that sort of behavior, though companies have until mid-2025 to comply.

Adobe is one of the more notable offenders here, particularly since its subscriptions can cost $60 per month. Earlier this year, the FTC sued the company over early termination fees and roadblocks to unsubscribing, calling the practices "illegal."

The FTC proposed a similar rule last year that would apply across the US, but the finalized version is still to come. Meanwhile, if you're having trouble canceling a subscription, Engadget has a guide on how to do so for commonly used plans.

X just released its first full transparency report since Elon Musk took over

X has published its most detailed accounting of its content moderation practices since Elon Musk’s takeover of the company. The report, X’s first in more than a year, provides new insight into how X is enforcing its rules as it struggles to hang on to advertisers who have raised concerns about toxicity on the platform.

The report, which details content takedowns and account suspensions from the first half of 2024, shows that suspensions have more than tripled since the last time the company shared data. X suspended just under 5.3 million accounts during the period, compared with 1.6 million suspensions during the first six months of 2022.

In addition to the suspensions, X says it “removed or labeled” more than 10.6 million posts for violating its rules. Violations of the company’s hateful conduct policy accounted for nearly half of that number, with X taking action on 4.9 million such posts. Posts containing abuse and harassment (2.6 million) and violent content (2.2 million) also accounted for a significant percentage of the takedowns and labels.

While these numbers don’t tell a complete story about the state of content on X — the company doesn’t distinguish between posts it removes and those it labels, for example — they show that hateful, abusive and violent content are among the biggest issues facing the platform. Those are also the same issues numerous advertisers and civil rights groups have raised concerns about since Musk’s takeover of the company. In the report, X claims that rule-breaking content accounted for less than 1 percent of all posts shared on the platform.

[Image: Numbers shared by X]

The numbers also suggest there have been significant increases in this type of content since Twitter last shared data prior to Musk’s takeover. In the second half of 2021, the final period for which Twitter published such figures, the company reported that it suspended about 1.3 million accounts for terms of service violations and “actioned” about 4.3 million more.

X previously published an abbreviated report in a 383-word blog post last April, which shared some stats on content takedowns, but offered almost no details on government requests for information or post removals. The new report is a significant improvement on that front. It says that X received 18,737 government requests for information, with the majority of the requests coming from within the EU and a reported disclosure rate of 53 percent. X also received 72,703 requests from governments to remove content from its platform. The company says it took action in just over 70 percent of cases. Japan accounted for the vast majority of those requests (46,648), followed by Turkey (9,364).


Brazil threatens daily fines for X and Starlink for ‘non-compliance’ with ban

One day after X started to come back online for some people in Brazil, the country’s Supreme Court is threatening the social media company and Elon Musk-owned Starlink with hefty daily fines. In a new order posted online, Supreme Court judge Alexandre de Moraes ordered regulators to “reactivate” the blocking of X and said that the two companies could be hit with close to $1 million a day in fines for not complying.

The latest order from Moraes, who has been publicly sparring with Musk for months, comes after X became accessible again in Brazil for many users on Wednesday. The company said in an earlier statement the change was "an inadvertent and temporary service restoration" that happened as a result of changing network providers.

Following Brazil’s ban last month, X reportedly shifted to using Cloudflare’s servers in the region, which made it more difficult for Brazilian ISPs to carry out the block. The company said Wednesday it made the change in network providers in order “to provide service to Latin America” and that it expected its service in Brazil to go offline again “soon.”

Now, Moraes says that X could be fined the equivalent of $921,000 a day, beginning September 19, for each day of “non-compliance” with the ban. Starlink, which previously saw its Brazilian bank accounts frozen amid the dispute, faces “joint liability” if X doesn't pay, according to the order. Moraes also ordered the country’s internet regulator to “take immediate measures to prevent access to the platform by blocking the ‘CDN Cloudflare, Fastly and EdgeUno’ servers, and other similar ones, created to circumvent the court order that suspended the operation of the old Twitter in Brazil.”

X didn’t immediately respond to a request for comment.

Meta could face massive EU fines over Marketplace competition

Meta is once again at risk of a hefty fine from the European Commission. The bloc's regulatory arm is preparing formal findings that Meta tied its Marketplace service to Facebook to undermine competitors, the Financial Times reports, citing sources familiar with the case.

If found to have broken the rules, Meta could be on the hook for up to 10 percent of its global annual revenue — a figure that reached almost $135 billion last year. However, the fine could end up much smaller, and Meta will almost certainly appeal it.

The Commission launched its initial probe in 2019 and announced its preliminary findings three years later. "Meta ties its dominant social network Facebook to its online classified ad services called Facebook Marketplace," Margrethe Vestager, executive vice-president in charge of competition policy, stated at the time. "Furthermore, we are concerned that Meta imposed unfair trading conditions, allowing it to use data on competing online classified ad services. If confirmed, Meta's practices would be illegal under our competition rules." Meta also faces other investigations from the Commission into its election policies, addiction and safety concerns for minors, and its consent or pay model.

The news comes at a transitional time for the European Commission, with President Ursula von der Leyen announcing her new team just yesterday. The shakeup for her second term will see Margrethe Vestager, head of competition for the last decade, replaced by Teresa Ribera. Reports that Vestager would be stepping down this year first surfaced in August.

Here’s how Google will start helping you figure out which images are AI generated

Google is trying to be more transparent about whether a piece of content was created or modified using generative AI (GAI) tools. After joining the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member earlier this year, Google has revealed how it will start implementing the group’s digital watermarking standard.

Alongside partners including Amazon, Meta, and OpenAI, Google has spent the past several months figuring out how to improve the tech used for watermarking GAI-created or modified content. The company says it helped to develop the latest version of Content Credentials, a technical standard used to protect metadata detailing how an asset was created, as well as information about what has been modified and how. Google says the current version of Content Credentials is more secure and tamperproof due to stricter validation methods.

In the coming months, Google will start to incorporate the current version of Content Credentials into some of its main products. In other words, it should soon be easier to tell whether an image was created or modified using GAI in Google Search results. If an image that pops up has C2PA metadata, you should be able to find out what impact GAI had on it via the About this image tool, which is also available in Google Images, Lens and Circle to Search.
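For a sense of how that works at the file level: in JPEGs, C2PA manifests are embedded in APP11 marker segments as JUMBF boxes. Here's a minimal Python sketch, not Google's implementation, that walks a JPEG's marker segments and reports whether an embedded manifest appears to be present ("photo.jpg" is a placeholder file name):

    import struct

    def has_c2pa_manifest(path: str) -> bool:
        """Report whether a JPEG appears to carry an embedded C2PA manifest."""
        with open(path, "rb") as f:
            data = f.read()
        if data[:2] != b"\xff\xd8":  # no SOI marker: not a JPEG
            return False
        i = 2
        while i + 4 <= len(data):
            if data[i] != 0xFF:  # lost sync with the marker stream
                break
            marker = data[i + 1]
            if marker == 0xDA:  # SOS: entropy-coded image data begins
                break
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            segment = data[i + 4:i + 2 + length]
            # C2PA stores its manifest in APP11 (0xEB) segments holding
            # JUMBF boxes, which open with the common identifier "JP"
            if marker == 0xEB and segment[:2] == b"JP":
                return True
            i += 2 + length  # the length field counts its own two bytes
        return False

    print(has_c2pa_manifest("photo.jpg"))

A check like this only proves presence, not authenticity: verifying who signed the manifest and whether it was tampered with is the job of the full standard.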

The company is looking into how to use C2PA to tell YouTube viewers when footage was captured with a camera. Expect to learn more about that later this year.

Google also plans to use C2PA metadata in its ads systems. It didn't reveal many details about its plans there, other than to say it will use "C2PA signals to inform how we enforce key policies" and will do so gradually.

Of course, the effectiveness of all this depends on whether companies such as camera makers and the developers of GAI tools actually use the C2PA watermarking system. The approach also won't stop someone from simply stripping out an image's metadata, which could make it harder for systems such as Google's to detect any GAI usage.

Meanwhile, throughout this year, we've seen Meta wrangle over how to disclose whether images were created with GAI across Facebook, Instagram and Threads. The company just changed its policy to make labels less visible on images that were edited with AI tools. Starting this week, if C2PA metadata indicates that someone (for instance) used Photoshop's GAI tools to tweak a genuine photo, the "AI info" label no longer appears front and center. Instead, it's buried in the post's menu.

OpenAI’s new safety board has more power and no Sam Altman

OpenAI has announced significant changes to its safety and security practices, including the establishment of a new independent board oversight committee. The move comes with a notable shift: CEO Sam Altman is no longer part of the safety committee, a departure from the previous structure.

The newly formed Safety and Security Committee (SSC) will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other key members include Quora CEO Adam D'Angelo, retired US Army General Paul Nakasone, and Nicole Seligman, former EVP and General Counsel of Sony Corporation. 

This new committee replaces the previous Safety and Security Committee that was formed in June 2024, which included Altman among its members. The original committee was tasked with making recommendations on critical safety and security decisions for OpenAI projects and operations.

The SSC's responsibilities now extend beyond recommendations. It will have the authority to oversee safety evaluations for major model releases and to supervise model launches; crucially, the committee will have the power to delay a release until safety concerns are adequately addressed.

This restructuring follows a period of scrutiny regarding OpenAI's commitment to AI safety. The company has faced criticism in the past for disbanding its Superalignment team and the departures of key safety-focused personnel. The removal of Altman from the safety committee appears to be an attempt to address concerns about potential conflicts of interest in the company's safety oversight.

OpenAI's latest safety initiative also includes plans to enhance security measures, increase transparency about its work and collaborate with external organizations. The company has already reached agreements with the US and UK AI Safety Institutes to collaborate on researching emerging AI safety risks and standards for trustworthy AI.

White House gets voluntary commitments from AI companies to curb deepfake porn

The White House released a statement today outlining commitments that several AI companies are making to curb the creation and distribution of image-based sexual abuse. The participating businesses have laid out the steps they are taking to prevent their platforms from being used to generate non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM).

Specifically, Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI said they'll be:

  • "responsibly sourcing their datasets and safeguarding them from image-based sexual abuse"

All of the aforementioned except Common Crawl also agreed they'd be:

  • "incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse"

  • And "removing nude images from AI training datasets" when appropriate.

It's a voluntary commitment, so today's announcement doesn't create any new actionable steps or consequences for failing to follow through on those promises. But it's still worth applauding a good faith effort to tackle this serious problem. The notable absences from today's White House release are Apple, Amazon, Google and Meta.

Many big tech and AI companies have been making strides to make it easier for victims of NCII to stop the spread of deepfake images and videos separately from this federal effort. StopNCII has partnered with several companies for a comprehensive approach to scrubbing this content, while other businesses are rolling out proprietary tools for reporting AI-generated image-based sexual abuse on their platforms.

If you believe you've been the victim of non-consensual intimate image-sharing, you can open a case with StopNCII here; if you're below the age of 18, you can file a report with NCMEC here.

Google searches now link to the Internet Archive

Earlier this year, Google said goodbye to its cached web page feature, saying it was no longer needed. While many were sad to see it go, we can now rejoice, as Google is partnering with the Internet Archive to bring something substantially similar back. Thanks to the Internet Archive's Wayback Machine, you can now look at archived web pages easily.

Clicking the three dots beside any search result opens the “About this Result” panel. From there, click “More About This Page” to reach the Wayback Machine, where anyone can see snapshots of a webpage from various points in time.

Mark Graham, director of the Wayback Machine, said some archived web pages won’t be available because their rights holders have opted out of having their sites archived by the Internet Archive.
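The snapshots Google surfaces can also be queried directly, since the Wayback Machine exposes a public availability API at archive.org/wayback/available. Here's a minimal Python sketch of checking it ("engadget.com" is just an example domain):

    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url: str):
        """Return the URL of the closest archived snapshot, or None."""
        query = urllib.parse.urlencode({"url": url})
        endpoint = "https://archive.org/wayback/available?" + query
        with urllib.request.urlopen(endpoint) as resp:
            payload = json.load(resp)
        # "archived_snapshots" comes back empty when a page has never been
        # captured (or, as Graham notes, when its owner has opted out)
        snapshot = payload.get("archived_snapshots", {}).get("closest")
        return snapshot["url"] if snapshot else None

    print(closest_snapshot("engadget.com"))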

Meta scraped every Australian user’s account to train its AI

During a government inquiry into AI adoption in Australia, Meta's global privacy director Melinda Claybaugh was asked whether her company has been collecting Australians' data to train its generative AI technology. According to ABC News, Claybaugh initially denied the claim but, upon being pressed, ultimately admitted that Meta scrapes all the photos and text in all Facebook and Instagram posts going as far back as 2007, unless the user has set their posts to private. Further, she admitted that the company isn't offering Australians an opt-out option as it does for users in the European Union.

Claybaugh said that Meta doesn't scrape the accounts of users under 18 years old, but she admitted that the company still collects their photos and other information if they're posted on their parents' or guardians' accounts. She couldn't answer, however, whether the company collects data from previous years once a user turns 18. When asked why Meta doesn't offer Australians the option not to consent to data collection, Claybaugh said that the option exists in the EU "in response to a very specific legal frame," most likely a reference to the bloc's General Data Protection Regulation (GDPR).

Meta had notified users in the EU that it will collect their data for AI training unless they opt out. "I will say that the ongoing conversation in Europe is the direct result of the existing regulatory landscape," Claybaugh explained during the inquiry. But even in the region, Claybaugh said there's an "ongoing legal question around what is the interpretation of existing privacy law with respect to AI training." Meta decided not to offer its multimodal AI model and future versions in the bloc due to what it says is a lack of clarity from European regulators, with most of its concerns centered on the difficulties of training AI models on data from European users while complying with GDPR rules.

Despite those legal questions around AI adoption in Europe, the bottom line is that Meta is giving users in the bloc the power to block data collection. "Meta made it clear today that if Australia had these same laws Australians' data would also have been protected," Australian Senator David Shoebridge told ABC News. "The government's failure to act on privacy means companies like Meta are continuing to monetise and exploit pictures and videos of children on Facebook."
