TikTok tightens age verification across Europe

TikTok is bolstering its age-verification measures across Europe. In the coming weeks, the platform will roll out upgraded age-detection tech in the European Economic Area, as well as in the UK and Switzerland.

The systems will assess the likely age of a user based on their profile information and activity. When the tech flags an account that may belong to a user aged under 13 (the minimum age to use TikTok), a specialist moderator will assess whether it should be banned. TikTok will send users in Europe a notification to tell them about these measures and offer them a chance to learn more.

Separately, if a moderator reviewing content for other reasons suspects an account belongs to an underage user, they can flag it to a specialist for further review. Anyone can also report an account they suspect is used by someone under 13. TikTok says it removes about 6 million underage accounts from the platform every month.

Those whose accounts are banned can appeal if they think their access was wrongly terminated. Users can then provide a government-approved ID, a credit card authorization or a selfie for age estimation (the latter process has not gone well for Roblox as of late, as kids found workarounds for its age checks).
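
Pieced together, what the company describes is a layered pipeline: automated flagging, human escalation, a specialist's ban decision and an appeal path. Here's a minimal sketch of that flow; every type and function name is hypothetical, since TikTok hasn't published an API or implementation details:

```typescript
// Hypothetical sketch of the layered age-assurance flow described above.
// All names are illustrative; TikTok has not published any of this.

type FlagSource = "age_model" | "moderator" | "user_report";
type Decision = "ban" | "keep";
type AppealEvidence = "government_id" | "credit_card_auth" | "selfie_age_estimate";

interface AccountSignals {
  accountId: string;
  profileInfo: string;     // e.g. bio text and stated birthday
  activitySummary: string; // e.g. content interactions
}

// Layer 1: automated detection estimates a likely age from profile info and activity.
function estimateAge(signals: AccountSignals): number {
  return 15; // placeholder for the platform's age-detection model
}

// Layer 2: any flag (model, moderator or user report) goes to a specialist
// moderator, who decides whether the account should be banned.
function specialistReview(signals: AccountSignals, source: FlagSource): Decision {
  console.log(`Specialist reviewing ${signals.accountId} (flagged via ${source})`);
  return estimateAge(signals) < 13 ? "ban" : "keep";
}

// Layer 3: a banned user can appeal with one form of evidence; the account is
// reinstated only if verification puts them at 13 or older.
function handleAppeal(evidence: AppealEvidence, verifiedAge: number): Decision {
  console.log(`Appeal evidence provided: ${evidence}`);
  return verifiedAge >= 13 ? "keep" : "ban";
}
```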

TikTok acknowledged that there's no single ideal solution to the issue as things stand. "Despite best efforts, there remains no globally agreed-upon method for effectively confirming a person's age in a way that also preserves their privacy," it stated in a blog post. "At TikTok, we're committed to keeping children under the age of 13 off our platform, providing teens with age-appropriate experiences and continuing to assess and implement a range of solutions. We believe that a multi-layered approach to age assurance — one in which multiple techniques are used — is essential to protecting teens and upholding safety-by-design principles."

TikTok is rolling out these practices after a pilot in Europe over the last year. That project helped the platform to identify and remove thousands more underage accounts. It worked with the Data Protection Commission (its main privacy regulator in the EU) to help ensure it complied with the bloc’s strict data protection standards.

These measures are coming into force amid intensifying calls to keep kids off social media. A social media ban for under-16s in Australia went into effect last month. Affected platforms have collectively closed or restricted millions of accounts as a result, and Reddit has filed a lawsuit over the ban.

A similar ban might be on the cards in the UK amid public pressure and cross-party support. Prime Minister Keir Starmer said "all options are on the table" and that he was watching "what is happening in Australia."

The House of Lords is set to vote on proposals for an under-16 social media ban next week. If an amendment passes, members of parliament will hold a binding vote on the matter in the coming months.

This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktok-tightens-age-verification-across-europe-130000847.html?src=rss

Amazon’s New World: Aeternum MMO will go offline January 31, 2027

Today, Amazon shared more details about the final chapter of its game New World: Aeternum. The company announced in October that it would wind down support for the MMO, with the Nighthaven season to be its last. New World will be delisted and no longer available for purchase starting today, but the game's servers will not be taken offline until January 31, 2027, and the Nighthaven season will run through that end date.

Players who had previously purchased New World: Aeternum will be able to re-download and continue playing up to the shutdown date. In-game currency such as Marks of Fortune will no longer be available to buy starting July 20, 2026, and refunds will not be offered for Marks of Fortune purchases.

This article originally appeared on Engadget at https://www.engadget.com/gaming/amazons-new-world-aeternum-mmo-will-go-offline-january-31-2027-205449407.html?src=rss

YouTube adds more parental controls, including a way to block teens from watching Shorts

YouTube is rolling out some additional parental controls, including a way to set time limits for viewing Shorts on teen accounts. In the near future, parents and guardians will be able to set the Shorts timer to zero on supervised accounts. "This is an industry-first feature that puts parents firmly in control of the amount of short-form content their kids watch," Jennifer Flannery O'Connor, YouTube's vice president of product management, wrote in a blog post. Along with that, take-a-break and bedtime reminders are now enabled by default for users aged 13-17. 

The platform is also bringing in new principles, under which it will recommend more age-appropriate and "enriching" videos to teens. For instance, YouTube will suggest videos from the likes of Khan Academy, CrashCourse and TED-Ed to them more often. It said it developed these principles (and a guide for creators to make teen-friendly videos) with help from its youth advisory committee, the Center for Scholars and Storytellers at UCLA, the American Psychological Association, the Digital Wellness Lab at Boston Children’s Hospital and other organizations.

Moreover, an updated sign-up process for kid accounts will be available in the coming weeks. Kid accounts are tied to parental ones and don't have their own associated email address or password. YouTube says users will be able to switch between accounts in the mobile app with just a few taps. "This makes it easier to ensure that everyone in the family is in the right viewing experience with the content settings and recommendations of age-appropriate content they actually want to watch," O'Connor wrote.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-adds-more-parental-controls-including-a-way-to-block-teens-from-watching-shorts-151329673.html?src=rss

Framework increases Desktop prices by up to $460 due to RAM crisis

Computer brand Framework has hiked RAM prices for its Desktop systems and standalone mainboards in response to rising costs from its suppliers. Compared with when the Desktops were announced, the 32GB and 64GB options each cost $40 more, while the 128GB configuration now costs an extra $460. Current pricing is $1,139 for the 32GB machine, $1,639 for 64GB and $2,459 for 128GB.
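
For reference, working backward from those increases gives the launch prices, simple arithmetic on the figures above:

```typescript
// Back out the launch prices from the current prices and the increases
// the article cites: plain arithmetic on the figures above.
const current = { "32GB": 1139, "64GB": 1639, "128GB": 2459 };
const increase = { "32GB": 40, "64GB": 40, "128GB": 460 };

for (const config of Object.keys(current) as (keyof typeof current)[]) {
  const launch = current[config] - increase[config];
  console.log(`${config}: was $${launch}, now $${current[config]} (+$${increase[config]})`);
}
// 32GB: was $1099, now $1139 (+$40)
// 64GB: was $1599, now $1639 (+$40)
// 128GB: was $1999, now $2459 (+$460)
```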

When it began altering its pricing structure last month, the company committed to remaining transparent with customers about changes to RAM prices, and it said it would reduce prices again once the market calms down. The original prices will be honored for any existing pre-orders.

One of the big takeaways from CES 2026 was that RAM is going to be an expensive commodity this year. The rising costs are largely driven by artificial intelligence projects, such as the rush to build data centers. As a result, buyers who take the modular approach may prefer to upgrade less costly components for better specs rather than make an increasingly hefty investment in memory.

This article originally appeared on Engadget at https://www.engadget.com/computing/framework-increases-desktop-prices-by-up-to-460-due-to-ram-crisis-234827145.html?src=rss

Google’s new commerce framework cranks up the heat on ‘agentic shopping’

To further push the limits of consumerism, Google has launched a new open standard for agentic commerce called the Universal Commerce Protocol (UCP). In brief, it's a framework that connects AI agents with online shopping platforms to help customers buy more things.

Thanks to the introduction of UCP, Google is offering three new online shopping features. To start, Google's AI Mode will have a new checkout feature that allows customers to buy eligible products from certain US retailers within Google Search. Currently, this feature works with Google Pay, but it will soon add PayPal compatibility and incorporate more capabilities, like related product discovery and the use of loyalty points.

On the merchant side, Google also established the Business Agent feature, which the company said will be "a virtual sales associate that can answer product questions in a brand’s voice." The Business Agent will launch tomorrow with early adopters including Lowe’s, Michaels, Poshmark, Reebok and more. Also for retailers, UCP powers the new Direct Offers feature, which lets companies advertising with Google "present exclusive offers for shoppers who are ready to buy, directly in AI Mode." Direct Offers will work in tandem with the ads in AI Mode that Google is testing.

With UCP, Google Search, retailers and payment processors are joining forces to make online shopping even easier, whether that's figuring out what product to buy, completing the purchase or offering "post-purchase support." According to Google, UCP is compatible with existing industry protocols like Agent2Agent, the Agent Payments Protocol and the Model Context Protocol. UCP was even co-developed with industry giants like Shopify, Etsy and Walmart, and has been endorsed by even more companies in the commerce ecosystem, including Macy's, Stripe, Visa and more.
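
Google hasn't published a schema alongside this announcement, but based on the description, an agentic checkout exchange under a UCP-style protocol might be shaped roughly like the sketch below. Every interface and field name here is an illustrative guess, not the real spec:

```typescript
// Hypothetical shape of an agentic checkout exchange under a UCP-style
// protocol. All types and fields are illustrative guesses, not the
// actual UCP schema.

interface CheckoutRequest {
  productId: string;
  quantity: number;
  paymentMethod: "google_pay" | "paypal"; // PayPal compatibility is said to be coming
  loyaltyAccountId?: string;              // loyalty points are a planned capability
}

interface CheckoutResponse {
  status: "confirmed" | "declined";
  orderId?: string;
  directOffer?: { description: string; discountPercent: number }; // cf. Direct Offers
}

// The shopping agent posts a structured request to a participating retailer
// and gets a structured result back, so no human has to drive the storefront.
async function agentCheckout(
  retailerEndpoint: string,
  request: CheckoutRequest
): Promise<CheckoutResponse> {
  const response = await fetch(retailerEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });
  return (await response.json()) as CheckoutResponse;
}
```

The appeal of a shared protocol is that an agent built against one shape like this could, in principle, check out at any participating retailer, which is presumably why Google co-developed the standard with Shopify, Etsy and Walmart.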

This article originally appeared on Engadget at https://www.engadget.com/big-tech/googles-new-commerce-framework-cranks-up-the-heat-on-agentic-shopping-212433122.html?src=rss

Instagram says accounts ‘are secure’ after wave of suspicious password reset requests

If you received a bunch of password reset requests from Instagram recently, you're not alone. Malwarebytes, an antivirus software company, initially reported that there was a data breach revealing the "sensitive information" of 17.5 million Instagram users. Malwarebytes added that the leak included Instagram usernames, physical addresses, phone numbers, email addresses and more. However, Instagram said there was no breach and that user accounts were "secure."

In Malwarebytes' post, the company added that the "data is available for sale on the dark web and can be abused by cybercriminals." Malwarebytes noted in an email to its customers that it discovered the breach during a routine dark web scan and that it's tied to a potential incident related to an Instagram API exposure from 2024.

The reported breach has resulted in users receiving several emails from Instagram about password reset requests. According to Malwarebytes, the leaked information could lead to more serious attacks, like phishing attempts or account takeovers. In response, Instagram posted on X that users can ignore the recent emails requesting password resets.

"We fixed an issue that let an external party request password reset emails for some people," Instagram's post on X read. "There was no breach of our systems and your Instagram accounts are secure."

While Instagram said this isn't a data breach, its parent company has been in hot water for data breaches in the past. If you haven't already, it's always a good idea to turn on two-factor authentication and change your password. Even better, you can review what devices are logged into your Instagram account in Meta's Accounts Center.

Update, January 11, 2026, 11:10AM ET: This story and its headline have been updated with Instagram's statement that was posted on X.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/instagram-says-accounts-are-secure-after-wave-of-suspicious-password-reset-requests-192105188.html?src=rss

Spotify is no longer running ads for ICE

There are no recruitment ads for Immigration and Customs Enforcement (ICE) running on Spotify at the moment, the streaming service has told Variety. A spokesperson confirmed the news after an ICE agent fatally shot Renee Good in Minneapolis, but clarified that the ads had already stopped running in late 2025. “The advertisements mentioned were part of a US government recruitment campaign that ran across all major media and platforms,” they explained.

Spotify caught flak back in October for playing ICE ads, asking people to “join the mission to protect America,” in between songs for users on the ad-supported plan. The advertisements even promised $50,000 signing bonuses for new recruits. Campaigns were launched to urge users to cancel their subscriptions and to boycott the service, and even music labels called on the company to stop serving ICE advertisements. Spotify said back then that the ads don’t violate its policies and that users can simply mark them with a thumbs up or down to let the platform know their preferences.

The company reportedly received $74,000 from Homeland Security for the ICE ads, but that’s a tiny amount compared to what other companies received. According to a report by Rolling Stone, Google and YouTube were paid $3 million for Spanish-language ads that called for self-deportation, while Meta received $2.8 million.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/spotify-is-no-longer-running-ads-for-ice-130000672.html?src=rss

California introduces a one-stop shop to delete your online data footprint

Californians can now put a stop to their personal data being bought and sold online, thanks to a new free tool. On January 1, the state launched its Delete Request and Opt-out Platform, shortened to DROP, which allows residents to request the deletion of personal information that data brokers have harvested about them.

According to the California Privacy Protection Agency (CalPrivacy), which was responsible for DROP's release, it's a "first of its kind" tool that imposes new restrictions on businesses that hoard and sell personal info that consumers didn't provide directly. The process requires verifying your California residency before you can send a "single deletion request to every registered data broker in California."
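
The core mechanic, as described, is a fan-out: one verified request gets relayed to every broker on the state's registry. A rough sketch of that shape, with all names hypothetical since CalPrivacy hasn't published a public API:

```typescript
// Hypothetical sketch of DROP's fan-out: one verified consumer request is
// relayed to every broker on the state's registry. All names are
// illustrative only.

interface DeletionRequest {
  consumerId: string; // issued only after California residency is verified
  submittedAt: Date;
}

interface RegisteredBroker {
  name: string;
  processDeletion(request: DeletionRequest): Promise<void>;
}

// A single request goes to every registered broker, each of which is
// required to process it and is subject to compliance audits.
async function fanOutDeletion(
  registry: RegisteredBroker[],
  request: DeletionRequest
): Promise<void> {
  await Promise.all(registry.map((broker) => broker.processDeletion(request)));
}
```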

On the other end, CalPrivacy will require data brokers to register every year and to process any deletion requests from DROP. Data brokers will also have to report the type of information they collect and share, while also being subject to regular audits that check for compliance. If any data broker is found skirting the requirements, they could face penalties and fines.

Besides being the first in the country to offer this type of comprehensive data-deletion tool, CalPrivacy said California is one of four states, along with Oregon, Texas and Vermont, to require data broker registration. According to the agency, data brokers will start processing the first deletion requests from DROP on August 1, 2026.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/california-introduces-a-one-stop-shop-to-delete-your-online-data-footprint-173102064.html?src=rss

Elon Musk’s Grok AI posted CSAM image following safeguard ‘lapses’

Elon Musk's Grok AI has been allowing users to transform photographs of women and children into sexualized and compromising images, Bloomberg reported. The issue has created an uproar among users on X and prompted an "apology" from the bot itself. "I deeply regret an incident on Dec. 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt," Grok said in a post. An X representative has yet to comment on the matter.

According to the Rape, Abuse & Incest National Network, CSAM includes "AI-generated content that makes it look like a child is being abused," as well as "any content that sexualizes or exploits a child for the viewer’s benefit."

Several days ago, users noticed others on the site asking Grok to digitally manipulate photos of women and children into sexualized and abusive content, according to CNBC. The images were then distributed on X and other sites without consent, in possible violation of the law. "We've identified lapses in safeguards and are urgently fixing them," a response from Grok reads. It added that CSAM is "illegal and prohibited." Grok is supposed to have features to prevent such abuse, but AI guardrails can often be manipulated by users.

It appears X has yet to reinforce whatever guardrails Grok has against this sort of image generation. However, the company has hidden Grok's media feature, which makes it harder to either find the images or document potential abuse. Grok itself acknowledged that "a company could face criminal or civil penalties if it knowingly facilitates or fails to prevent AI-generated CSAM after being alerted."

The Internet Watch Foundation recently revealed that AI-generated CSAM increased by orders of magnitude in 2025 compared with the year before. This is in part because the models behind AI image generation are inadvertently trained on real photos of children scraped from school websites and social media, or even on prior CSAM.

This article originally appeared on Engadget at https://www.engadget.com/ai/elon-musks-grok-ai-posted-csam-image-following-safeguard-lapses-140521454.html?src=rss

Instagram chief: AI is so ubiquitous ‘it will be more practical to fingerprint real media than fake media’

It's no secret that AI-generated content took over our social media feeds in 2025. Now, Instagram's top exec Adam Mosseri has made it clear that he expects AI content to overtake non-AI imagery, a shift with significant implications for the platform's creators and photographers.

Mosseri shared these thoughts in a lengthy post about the broader trends he expects to shape Instagram in 2026, and he offered a notably candid assessment of how AI is upending the platform. "Everything that made creators matter—the ability to be real, to connect, to have a voice that couldn’t be faked—is now suddenly accessible to anyone with the right tools," he wrote. "The feeds are starting to fill up with synthetic everything."

But Mosseri doesn't seem particularly concerned by this shift. He says that there is "a lot of amazing AI content" and that the platform may need to rethink its approach to labeling such imagery by "fingerprinting real media, not just chasing fake."

From Mosseri (emphasis his):

Social media platforms are going to come under increasing pressure to identify and label AI-generated content as such. All the major platforms will do good work identifying AI content, but they will get worse at it over time as AI gets better at imitating reality. There is already a growing number of people who believe, as I do, that it will be more practical to fingerprint real media than fake media. Camera manufacturers could cryptographically sign images at capture, creating a chain of custody.
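
Mosseri doesn't spell out a mechanism, but the idea he's gesturing at resembles existing content-provenance efforts such as C2PA: the capture device signs the image bytes with a private key, and anyone downstream can check the signature against the manufacturer's public key. A minimal Node.js sketch of that concept, illustrative only:

```typescript
// Minimal sketch of "sign at capture, verify downstream" using Ed25519.
// Illustrative only: a real provenance scheme (such as C2PA) also signs
// metadata and records later edits to maintain a chain of custody.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// In practice the private key would live in the camera's secure hardware,
// with the public key published by the manufacturer.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// At capture: the device signs the raw image bytes.
const imageBytes = Buffer.from("...raw sensor data...");
const signature = sign(null, imageBytes, privateKey); // Ed25519 takes no digest algorithm

// Downstream: a platform verifies the bytes against the maker's public key.
const authentic = verify(null, imageBytes, publicKey, signature);
console.log(authentic ? "capture signature verified" : "cannot be verified as original");
```

The hard parts live outside a snippet like this: keeping the private key from being extracted from the device, and deciding what happens to a signature once an image is cropped, edited or re-encoded.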

On some level, it's easy to understand how this seems like a more practical approach for Meta. As we've previously reported, technologies that are meant to identify AI content, like watermarks, have proved unreliable at best. They are easy to remove and even easier to ignore altogether. Meta's own labels are far from clear and the company, which has spent tens of billions of dollars on AI this year alone, has admitted it can't reliably detect AI-generated or manipulated content on its platform.

That Mosseri is so readily admitting defeat on this issue, though, is telling. AI slop has won. And when it comes to helping Instagram's 3 billion users understand what is real, that should largely be someone else's problem, not Meta's. Camera makers — presumably phone makers and actual camera manufacturers — should come up with their own system that sure sounds a lot like watermarking to "to verify authenticity at capture." Mosseri offers few details about how this would work or be implemented at the scale required to make it feasible.

Mosseri also doesn't really address the fact that this is likely to alienate the many photographers and other Instagram creators who have already grown frustrated with the app. The exec regularly fields complaints from the group, who want to know why Instagram's algorithm doesn't consistently surface their posts to their own followers.

But Mosseri suggests those complaints stem from an outdated vision of what Instagram even is. The feed of "polished" square images, he says, "is dead." Camera companies, in his estimation, are "betting on the wrong aesthetic" by trying to "make everyone look like a professional photographer from the past." Instead, he says that more "raw" and "unflattering" images will be how creators can prove they are real, and not AI. In a world where Instagram has more AI content than not, creators should prioritize images and videos that intentionally make them look bad.

This article originally appeared on Engadget at https://www.engadget.com/social-media/instagram-chief-ai-is-so-ubiquitous-it-will-be-more-practical-to-fingerprint-real-media-than-fake-media-202620080.html?src=rss