EU says Pornhub and others failed to stop minors accessing adult content

The European Commission (EC) accused four porn platforms of not doing enough to prevent minors from accessing their content. In its preliminary findings of a 10-month investigation, the European Union's regulatory arm said Pornhub, Stripchat, XNXX and XVideos have breached the Digital Services Act (DSA).

The EC said the platforms rely on an ineffective “self-declaration” measure — users need only a single click to state they are over 18. Nor do efforts like content warnings, page blurring and "restricted to adults" labels "effectively prevent minors from accessing harmful content." As such, the EC said the platforms are failing to protect the wellbeing and rights of minors, and it demanded that they put privacy-preserving age verification systems in place.

Furthermore, the EC said the quartet did not use objective and thorough methodologies to fully assess the risks to minors accessing content on their platforms. The regulator determined Stripchat, XVideos and XNXX either misrepresented or failed to take into account consultations with organizations that specialize in children's rights and age verification systems in their risk assessments. It also suggested that the platforms' risk assessments "disproportionately emphasized business-centric concerns, such as reputational damage, rather than focusing on the societal risks to minors."

The platforms now have the chance to review the EC's preliminary findings and respond to them. They can implement measures to remedy the alleged DSA breaches as well. However, if the Commission confirms that the platforms failed to adhere to the DSA and it decides to issue a non-compliance decision, the porn providers could be on the hook for fines of up to six percent of their global annual turnover.

“In the EU, online platforms have a responsibility. Children are accessing adult content at increasingly younger ages and these platforms must put in place robust, privacy-preserving and effective measures to keep minors off their services,” Henna Virkkunen, the European Union’s executive vice-president for tech sovereignty, security and democracy, said in a statement. “Today, we are taking another action to enforce the DSA — ensuring that children are properly protected online, as they have the right to be.”


This article originally appeared on Engadget at https://www.engadget.com/big-tech/eu-says-pornhub-and-others-failed-to-stop-minors-accessing-adult-content-155632108.html?src=rss

OpenAI drops plans to release an adult chatbot

OpenAI has "indefinitely" abandoned plans to release an erotic chatbot for adults following concerns from employees and investors, the company confirmed to The Financial Times. Plans for such a feature, first announced in October 2025 for release in December last year, had already been delayed while the company debated whether to release it at all. It's the second app OpenAI has decided to shelve this week, after announcing on Tuesday that it was shutting down its Sora video generator.

The adult-oriented chatbot, reportedly called "Citron mode," is now on hold with no planned release date. The company reportedly struggled to train models that previously avoided erotic content to allow it while still blocking illegal material such as bestiality or incest, two people familiar with the matter told the FT.

OpenAI said that it wanted to conduct long-term research on the effects of erotic chats and user attachment to AI, adding that there was not yet enough "empirical evidence" on the subject. The company also said it wanted to focus on its core productivity tools like coding assistants and drop "side quests" like Sora and the erotic chatbot.

The idea for adult features came after OpenAI announced that it would add parental controls and automatic age detection features for ChatGPT. CEO Sam Altman said back in October that the company had always been careful about such issues over concerns around unhealthy AI attachments, but felt comfortable that it could "safely relax the restrictions in most cases."

However, the adult mode had reportedly caused concern among investors, particularly amid the controversy caused by rival xAI's Grok model that generated deepfake nudes of real people and children. Staff also worried about the feature, with one senior employee even leaving the company over the issue. "AI shouldn’t replace your friends or your family; you should have human connections," he told the FT.

Another challenge is OpenAI's age-checking tech, introduced following lawsuits from families who said that ChatGPT harmed their children. The system reportedly has an error rate higher than ten percent, which would still give a large number of young people access to adult features. OpenAI said that figure is within the industry standard range and that it is continuing to work on accuracy.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-drops-plans-to-release-an-adult-chatbot-113121190.html?src=rss

Oversight Board tells Meta expanding Community Notes outside of US poses ‘significant’ risks

Meta didn't consult its Oversight Board last year when it announced sweeping policy changes to content moderation and a rollback of third-party fact checking in the United States in favor of Community Notes. But the company did ask the board for advice on how to expand the crowd-sourced fact checks to other countries.

Now the Oversight Board is publishing its advice to Meta. In a 15,000-word policy advisory opinion, the group urged Meta to be cautious with an international rollout, warning that an expansion of the program could "pose significant human rights risks and contribute to tangible harms" if safeguards are not put in place. 

The board, notably, was asked to weigh in on a fairly narrow set of questions, including how it should evaluate whether to withhold the feature in certain countries. Meta "respectfully" asked the Oversight Board to avoid "general" critiques about the system, which it has said is modeled after X.

In its opinion, the Oversight Board said that Community Notes "could enhance users’ freedom of expression and improve online discourse" with enough safeguards. But it recommended Meta withhold the feature in countries with "high polarization," as well as countries in the midst of a crisis or "protracted conflict." The board also said that Meta should avoid countries with a history of organized disinformation networks, because the notes may be more easily manipulated in such places, and countries with "linguistic complexity" that Meta may be ill-equipped to understand.

Depending on how you interpret that advice, that could exclude quite a few countries, though the board stopped short of making country-specific recommendations. Still, it raises questions about how closely Meta will follow the suggested guidelines. For example, the United States could be considered a country with "high polarization." (Community Notes has been live in the US for more than a year.)

While the Oversight Board was careful to say it "neither endorses nor opposes" an expansion of Community Notes, it did discuss Meta's approach to fact checking, noting that its partnerships with outside fact-checking organizations are still largely in place outside of the US. And the opinion cautions against ending these relationships, noting that research into Community Notes on X shows that authors writing notes often rely on work done by professional fact checkers.

"Community Notes and fact checking are not mutually exclusive," Oversight Board member Paolo Carozza tells Engadget. "One doesn't have to replace or substitute for the other, they can coexist. And in some situations, there are really important reasons for them to coexist. The board really deliberately stayed away from any kind of suggestion that the introduction of Community Notes ought to result in the removal or ending of fact checking."


This article originally appeared on Engadget at https://www.engadget.com/social-media/oversight-board-tells-meta-expanding-community-notes-outside-of-us-poses-significant-risks-100000213.html?src=rss

Reddit will prompt some accounts to ‘verify humanness’ in latest bot crackdown

Reddit CEO Steve Huffman has detailed the company's latest plan to fight bots: some accounts will need to "verify humanness," though the company is stopping short of widespread identity verification. In an update, Huffman said that in "rare" cases accounts that seem "fishy" will be prompted for additional verification.

Such prompts "will not apply to most users," according to Huffman, but will apply to accounts where Reddit detects signs of automated posting or bot-like behavior. If the account doesn't pass the verification test, it may be "restricted" from the platform. For now, verification will take the form of on-device methods, including Face ID and passkeys. But the company is considering alternative methods, including World ID, the face-scanning orb company run by Sam Altman. "I think the internet needs verification solutions like this, where your account information, usage data, and identity never mix," Huffman writes.

As part of the new policy, Reddit is also adding an "[APP]" label to existing "good" bots on the platform and making it easier for users to report suspected "bad" bots. The company is also grappling with a growing number of age verification laws. Reddit is “exploring” ways to “comply with these regulations without compromising user privacy,” Huffman said.

The company is clearly trying to walk a careful line in how it approaches verification. Huffman notes that Reddit intends to "confirm humanness" rather than verify users' actual identities, which would erode the anonymity that Reddit is known for. But the rise of agentic AI has meant that Reddit is contending with the same sorts of bot-driven spam that took down the short-lived reboot of Digg.

Of course, Reddit is also filled with AI-generated material that's shared by actual humans but may be considered spammy by other users. The company has no plans to crack down on such content, at least for now, according to Huffman. "For better or worse, using AI to write is part of how people will communicate in the future (albeit annoying), so our current focus is to ensure there is a real, live human behind the accounts you’re seeing."


This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-will-prompt-some-accounts-to-verify-humanness-in-latest-bot-crackdown-161000181.html?src=rss

Anthropic releases safer Claude Code ‘auto mode’ to avoid mass file deletions and other AI snafus

Anthropic has begun previewing "auto mode" inside of Claude Code. The company describes the new feature as a middle path between the app's default behavior, in which Claude requests approval for every file write and bash command, and the "--dangerously-skip-permissions" flag some coders use to make the chatbot function more autonomously.

With auto mode enabled, a classifier system guides Claude, giving it permission to carry out actions it deems safe, while redirecting the chatbot to take a different approach when it determines Claude might do something risky. In designing the system, Anthropic's goal was to reduce the likelihood of Claude carrying out mass file deletions, extracting sensitive data or executing malicious code. 
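Anthropic hasn't published how its classifier works, but the gating pattern it describes can be sketched in a few lines. The patterns and function below are purely illustrative, not Anthropic's actual rules or API:

```python
import re

# Hypothetical patterns a safety classifier might flag as risky;
# illustrative only, not Anthropic's actual rule set.
RISKY_PATTERNS = [
    r"\brm\s+-rf\b",         # mass file deletion
    r"\bcurl\b.*\|\s*sh\b",  # piping a remote script straight into a shell
    r"\.env\b",              # touching files that often hold secrets
]

def classify_action(command: str) -> str:
    """Return 'allow' for actions deemed safe, 'redirect' otherwise."""
    for pattern in RISKY_PATTERNS:
        if re.search(pattern, command):
            # In auto mode, a redirect would send Claude down a safer path
            # rather than simply blocking it.
            return "redirect"
    return "allow"

print(classify_action("ls -la src/"))        # allow
print(classify_action("rm -rf /tmp/build"))  # redirect
```

A production classifier would be a learned model with context about the user's environment rather than a regex list, which is why Anthropic warns that ambiguous intent can still slip through.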

Of course, no system is perfect, and Anthropic warns as such. "The classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk," the company writes. 

Anthropic doesn't mention a specific incident as inspiration for auto mode, but the recent 13-hour AWS outage Amazon suffered after one of the company's AI tools reportedly deleted a hosting environment was probably front of mind. Amazon blamed that specific incident on human error, saying the staffer involved had "broader permissions than expected."

Team plan users can preview auto mode starting today, with the feature set to roll out to Enterprise and API users in the coming days.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-releases-safer-claude-code-auto-mode-to-avoid-mass-file-deletions-and-other-ai-snafus-142500615.html?src=rss

X is changing its revenue-sharing policy to deter users pretending to be Americans

X is updating its revenue-sharing incentives to give more weight to engagement from a user’s home region, Nikita Bier, the company’s Head of Product, has announced. Bier said the change in policy was to “encourage content that resonates with people in [the user’s] country, in neighboring countries and people who speak [their] language.”

Bier continued that while X appreciates everyone’s opinion on US politics, the company is hoping the new policy can “disincentivize gaming the attention of US or Japanese accounts.” The US and Japan have the largest number of users on X. Bier didn’t mention it outright, but dozens of popular accounts tweeting pro-Trump sentiments and commentaries focusing on US politics in general were revealed to be based outside the US late last year, when X rolled out a transparency feature that exposed users’ locations. Those accounts, which pretended to be from the US and garnered millions of likes, views and reposts, turned out to be based in countries like India, Kenya and Nigeria. 

“X will be a much richer community when there's relevant posts for people in all parts of the world,” Bier said. When one user responded to his post that some countries barely have any users, making it hard to earn money from the website, Bier just suggested that they should write about their day-to-day experiences. “Of course, you’re welcome to continue chiming in on America politics. We just won’t send money overseas for that content,” he said. X’s new policy will start taking effect on Thursday, March 26. 

Update, March 25 2026, 11:30AM ET: According to a tweet from Musk, X "will pause moving forward with this until further consideration."

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-is-changing-its-revenue-sharing-policy-to-deter-users-pretending-to-be-americans-090701729.html?src=rss

Baltimore sues xAI over Grok deepfakes

Grok has already taken extensive heat after the AI chatbot's image generation tool was used to create an estimated 3 million sexualized images over 11 days, including 23,000 of minors, according to the Center for Countering Digital Hate. Regulators around the world have limited access to the platform or launched investigations into its potentially illegal and nonconsensual image generation. The US government hasn't made any moves against xAI or its platform at the federal level, but today, the city of Baltimore filed a municipal lawsuit against the company.

The lawsuit takes a different tack, arguing that Elon Musk's businesses violated the city's Consumer Protection Ordinance. The complaint, as reported by The Guardian, said that xAI marketed Grok as an all-purpose AI assistant without disclosing the risks and exposure to harm of using both Grok and the X social network.

"Baltimore’s consumer protection laws exist to safeguard residents from exactly this kind of emerging harm," City Solicitor Ebony M. Thompson said. "When companies introduce powerful technologies without adequate guardrails, the City has both the authority and the obligation to act. We are stepping in now to protect our residents, hold these companies accountable, and prevent these harms from becoming further entrenched as this technology continues to evolve."

The other notable action against Grok within the US stemmed from a potential class action filed by three teenagers who alleged that photos of them were used to create child sexual abuse material.

This article originally appeared on Engadget at https://www.engadget.com/ai/baltimore-sues-xai-over-grok-deepfakes-214135922.html?src=rss

Alphabet no longer has a controlling stake in its life sciences business Verily

Alphabet's life sciences business Verily is restructuring and raising money as a new corporate entity. Verily announced that with its $300 million investment round, it will change from an LLC to a corporation and rename itself Verily Health Inc. As a result, Alphabet now has a minority stake rather than a controlling one in the business. 

Similar to every other tech business, this chapter for Verily will be focused on AI. “From research to care, our customers need solutions that bring the best of clinical and scientific rigor together with AI to deliver the next generation of healthcare - one that is as precise as it is personal," Chairman and CEO Stephen Gillett said.

Google Life Sciences was renamed Verily in 2015, around the same time Google restructured under the Alphabet umbrella. It has worked on a wide range of projects over the years, such as using eye scans to predict heart disease and an opioid addiction center. In 2025, it closed its medical device division, a move that may have signaled its shift toward AI.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/alphabet-no-longer-has-a-controlling-stake-in-its-life-sciences-business-verily-221718631.html?src=rss

Don’t be surprised that the FBI is buying your location data

The FBI has confirmed to the Senate it is once again buying data which can be used to track the locations of US citizens. That may have surprised the people who thought the precedent in Carpenter v. United States prohibited it. But while that case examined whether it was legal for law enforcement to obtain location data from mobile networks without a warrant, here the FBI and other agencies have found a way to skirt the Fourth Amendment entirely. Over the last few years, they have taken to simply buying location data from the same companies which power the enormous online advertising ecosystem.

When your phone is connected to the internet, it broadcasts information about itself, and so do the apps and platforms you use. That information includes your IP address and device type, as well as your latitude and longitude if your device has GPS. This data, known as bidstream data, alongside any third-party cookies tied to your device, enables Real-Time Bidding (RTB): the auction in which your attention is sold to the highest bidder in the milliseconds after you’ve loaded a page. To make those auctions work, these platforms need to know as much about you as they can.
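To make the mechanics concrete, here is a simplified bid request loosely modeled on the OpenRTB format the ad industry uses. All field values are made up, and real requests carry far more detail:

```python
import json

# A simplified bid request, loosely modeled on OpenRTB.
# Values are illustrative; real requests are richer and messier.
bid_request = {
    "id": "auction-8f3a",
    "device": {
        "ip": "203.0.113.7",  # documentation-range IP, not a real one
        "ua": "Mozilla/5.0 (iPhone; ...)",
        "geo": {"lat": 39.29, "lon": -76.61},  # precise GPS coordinates
    },
    "user": {"id": "cookie-or-device-id-123"},
    "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],
}

# Every bidder in the auction receives a payload like this within
# milliseconds of the page loading, whether or not it wins the ad slot.
print(json.dumps(bid_request, indent=2))
```

The key point is that losing bidders see the location data too, which is what makes the bidstream such a rich source for data brokers.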

As I explained in depth back in 2021, data such as your location and IP address is broadcast over the ad networks. This information can also be aggregated, licensed or sold to data brokers who can pair it with any “deterministic data” available. For instance, if you sign up to a platform and tell them your name, email address and annual income, that data could be licensed to a data broker. Even banks looking for new revenue streams are planning to license anonymized customer data to these companies. Data brokers can easily combine the two streams of information to build out a fairly extensive picture of you as a person, and of which advertisers will be most interested in you. Unfortunately, it’s extremely difficult to opt out of this and, even if you could, it would be harder still to destroy the data already in circulation.
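The combining step is, at its core, a join on a shared identifier. This toy example (with entirely invented records) shows how pseudonymous bidstream rows become a named profile once a broker links them to deterministic signup data:

```python
# Illustrative only: how a broker might link pseudonymous bidstream
# records to "deterministic" signup data via a shared device ID.
bidstream = [
    {"device_id": "abc-123", "lat": 39.29, "lon": -76.61,
     "ip": "203.0.113.7"},
]
deterministic = [
    {"device_id": "abc-123", "name": "Jane Doe",
     "email": "jane@example.com", "income": 85_000},
]

# Index the signup data by identifier, then merge each bidstream row
# with whatever deterministic record shares its device ID.
by_device = {row["device_id"]: row for row in deterministic}
profiles = [
    {**row, **by_device.get(row["device_id"], {})} for row in bidstream
]

print(profiles[0])  # location history now tied to a real name and email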

In 2018 French company Vectaury, which acted as an ad sales intermediary for mobile apps, was inspected by the French data protection regulator. Officials found the company had built a database containing the personal data of 67.6 million people without proper consent.

Data brokers don’t just harvest and hoard this data to make online ad sales, however; they also license and sell their databases to others. Lawmakers believe that these brokers have sold this data to rival nations looking for ways to spy on US citizens.

In January, 404Media revealed the US Immigration and Customs Enforcement Agency (ICE) bought access to tools supplied by cybersecurity company Penlink. Specifically, it purchased access to tools named Tangles and Webloc, which can be used to surveil large numbers of people at once. The latter tool reportedly has the power to identify smartphones in a given area and time, and can then follow them on their journey through the day and back to their home at night.

Given the secretive nature of its business, Penlink does not reveal much about how its tools operate. A since-removed marketing page says Webloc automatically analyzes “location based information” available in “endless digital channels from the web ecosystem.” And 404Media’s report says these tools access “commercially available smartphone location data,” supplied by third-party data brokers. Forbes reports the system can also pull together data from a variety of sources, including social media, to offer a real-time view of an event. The Texas Observer says Webloc can use this information to enable “warrantless device tracking.”

A number of other US law enforcement agencies have also purchased location data from data brokers, including the Department of Homeland Security, Customs and Border Protection, the Secret Service and the Internal Revenue Service. This isn’t just limited to government agencies, however, as anti-abortion groups did similar while targeting people visiting Planned Parenthood clinics.

The Fourth Amendment guarantees the right of the people to be protected from “unreasonable searches and seizures” made without probable cause. But, as Dori H. Rahbar wrote in the Columbia Law Review, “the Fourth Amendment does not regulate open market transactions.” Aaron X Sobel, writing in the Yale Law and Policy Review, described the practice as “end-running warrants,” and urged legislators to close this loophole. The Electronic Frontier Foundation (EFF) is also pushing for legislation in the form of the Fourth Amendment Is Not For Sale Act.

Such legislation is unlikely to pass any time soon, and a cynic would suggest it’s not possible under the current administration. But even if it does, it won’t address the bigger issue of the ad tech industry and its partners vacuuming up as much information about us as they can. When these companies — many of which aren’t even known to the public — are able to store up enough information on us that, if they were so motivated, they could follow our path through the day, it’s a sign something is very rotten indeed. If we’re concerned about governments having this sort of access, then we should be equally nervous about anyone else having it as well.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/dont-be-surprised-that-the-fbi-is-buying-your-location-data-182047627.html?src=rss

UK fines 4chan nearly $700,000 for failing its online safety act obligations

The UK’s Ofcom has fined 4chan a total of £520,000 ($690,000) over the website’s failure to comply with the rules of the Online Safety Act 2023. The biggest chunk of the amount comes from 4chan’s failure to ensure children cannot encounter pornographic content on its website through an effective age check mechanism. For that violation, the website received a penalty of £450,000 ($598,000) and an order to implement an age check system by April 2. The order carries a daily penalty of £500 ($664) until the website is compliant or until June 1, whichever comes sooner.

Ofcom also found that 4chan failed to carry out a sufficient illegal content risk assessment on its website and has fined it £50,000 ($66,400) for that violation. 4chan has until April 2 to conduct a risk assessment, or it will have to pay an additional £200 ($266) per day. Finally, the regulator determined that 4chan failed to include provisions in its terms of service that specify how it protects users from illegal content. That carries a fine of £20,000 ($26,600), with a daily penalty of £100 ($133) from the April 2 compliance deadline until June 1.
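Ofcom's figures make 4chan's maximum additional exposure easy to tally. Assuming the April 2 and June 1 dates fall in 2026 (the article doesn't state the year), the window is exactly 60 days:

```python
from datetime import date

# Daily penalty rates from Ofcom's decision, per violation.
DAILY_RATES_GBP = {
    "age checks": 500,
    "risk assessment": 200,
    "terms of service": 100,
}

# Assumed year: the article gives only April 2 and June 1 as deadlines.
days = (date(2026, 6, 1) - date(2026, 4, 2)).days  # 60 days

# Worst case if 4chan stays non-compliant for the whole window.
for violation, rate in DAILY_RATES_GBP.items():
    print(f"{violation}: up to £{rate * days:,} in daily penalties")
```

That puts the worst-case daily penalties at £30,000, £12,000 and £6,000 respectively, on top of the £520,000 in base fines.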

The regulator started investigating 4chan, famous for its anonymous and unmoderated messaging boards, in June 2025 to determine if it was failing to meet its obligations under the law. In October, Ofcom announced its decision for some of the investigations it opened. It slapped 4chan with a £20,000 ($26,700) fine for ignoring its requests for a copy of the website’s illegal harms risk assessment and to provide information about its qualifying worldwide revenue. The regulator has confirmed to Engadget that 4chan has yet to pay that previous fine, which also earned cumulative daily punishment fees for 60 days.

This article originally appeared on Engadget at https://www.engadget.com/social-media/uk-fines-4chan-nearly-700000-for-failing-its-online-safety-act-obligations-115106264.html?src=rss