OkCupid settles FTC case on alleged misuse of its users’ personal data

Match Group and its subsidiary OkCupid have finally settled a lawsuit with the Federal Trade Commission over the company's alleged sharing of user data back in 2014. In the suit, the FTC accused OkCupid of inappropriately sharing personal user data, including photos and location info, with a third-party company, Clarifai, which offers AI-powered software for uses like facial recognition and content moderation.

According to the FTC, OkCupid's privacy policy at the time said the company wouldn't share a user's personal information with others, except in some cases involving "service providers, business partners, other entities within its family of businesses." However, the lawsuit accused OkCupid of sharing three million photos of its users with Clarifai, which the FTC claims is an "unrelated third party" that didn't fall under the allowed entities. On top of that, the lawsuit alleged that OkCupid didn't inform its users of this data sharing or give them a chance to opt out.

"While we do not admit any wrongdoing, we have settled this matter with the FTC with no monetary penalty to resolve an issue from 2014 and move forward," an OkCupid spokesperson told Engadget, adding that the allegations don't reflect how OkCupid operates today. "Over the years, we have further strengthened our privacy practices and data governance to ensure we meet the expectations of our users."

Moving forward, the settlement would "permanently prohibit" Match Group, which owns OkCupid, and Humor Rainbow, which operates it, from misrepresenting what kind of personal information they collect, why they collect it and what choices consumers have to prevent that collection. Even after the 2014 incident, OkCupid was found to have security flaws that could've exposed user account info, though they were quickly patched in 2020.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/okcupid-settles-ftc-case-on-alleged-misuse-of-its-users-personal-data-175159228.html?src=rss

Kash Patel’s personal email account was accessed by hackers linked to Iran

A hacking group called Handala has gained access to FBI Director Kash Patel's email account, Reuters reports. The group published content from Patel's email on their website as proof, including photos of Patel "sniffing and smoking cigars" and "making a face while taking a picture of himself in the mirror with a large bottle of rum."

TechCrunch was able to independently confirm that at least some of the emails Handala stole were from Patel's account by checking information used by mail delivery systems that’s stored in an email's header. Several stolen emails included a cryptographic signature that linked them to Patel's account. The FBI has also separately confirmed that the Director's account was hacked. "The FBI is aware of malicious actors targeting Director Patel's personal email information, and we have taken all necessary steps to mitigate potential risks associated with this activity," the Bureau told TechCrunch. "The information in question is historical in nature and involves no government information." 
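
Signatures of this kind are typically DKIM headers, which a sending mail provider attaches so that anyone can later check, against the provider's published public key, that a message's headers and body weren't forged. As a minimal sketch of how such a check works (using the open-source dkimpy package, and not a description of TechCrunch's actual process):

```python
# Illustrative only: checking the DKIM signature on a saved raw email.
# Uses the open-source dkimpy package (pip install dkimpy); this is a
# generic example, not TechCrunch's actual verification workflow.
import dkim

# Load the complete raw message, headers included, as bytes.
with open("message.eml", "rb") as f:
    raw_email = f.read()

# dkim.verify() looks up the signing domain's public key in DNS and
# confirms the DKIM-Signature header matches the signed headers and body.
if dkim.verify(raw_email):
    print("Valid DKIM signature: the message matches the signing domain's key")
else:
    print("No valid DKIM signature found")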

The FBI is offering up to $10 million in rewards for more information about the hackers who targeted Patel's account. Handala presents as a pro-Palestinian hacking group online, but is believed to be one of several aliases used by cyberintelligence units working for the Iranian government, Reuters writes. Groups affiliated with Iran have targeted officials in the US before. In August 2024, the FBI shared that a separate group, APT42, was trying to gain access to both the Trump and Harris campaigns. Three men associated with APT42 were later charged that September.

Handala appears to have become more active during the current conflict between the US, Israel and Iran. According to Reuters, the group claimed to be behind a cyber attack on Stryker, a medical device company, earlier in March. Handala also said it accessed and published personal data from Lockheed Martin employees stationed in the Middle East.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/kash-patels-personal-email-account-was-accessed-by-hackers-linked-to-iran-212618474.html?src=rss

European Commission confirms data breach

The European Commission has announced that it suffered a cyber attack that affected "cloud infrastructure hosting the Commission's web presence on the Europa.eu platform." While the attack has been contained, Bleeping Computer reports that the threat actor claiming to be behind it was able to steal over 350GB of data before the Commission addressed the issue.

"Early findings of our ongoing investigation suggest that data have been taken from [Europa] websites," the European Commission says. "The Commission is duly notifying the Union entities who might have been affected by the incident."

The Commission's investigation is ongoing, and it has yet to disclose how its cloud infrastructure was breached. According to Bleeping Computer, the threat actor was able to access the Europa sites and employee data via one of the Commission's Amazon Web Services accounts. The Commission disclosed a breach that similarly impacted employee data in February.

Both breaches appear to be less severe than the Salt Typhoon hack that impacted US telecommunications companies in 2024. Hackers reportedly gained access to data from the smartphones of members of both the Trump and Harris campaigns, and other government officials. In January 2026, the European Commission introduced a new Cybersecurity Package designed to address similar issues, in part by outlining new ways for EU states to deal with potentially risky companies in their telecom supply chains.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/european-commission-confirms-data-breach-200000982.html?src=rss

EU says Pornhub and others failed to stop minors accessing adult content

The European Commission (EC) accused four porn platforms of not doing enough to prevent minors from accessing their content. In its preliminary findings of a 10-month investigation, the European Union's regulatory arm said Pornhub, Stripchat, XNXX and XVideos have breached the Digital Services Act (DSA).

The EC said the platforms rely on an ineffective “self-declaration” measure: users need only a single click to state they are over 18. Nor do efforts like content warnings, page blurring and "restricted to adults" labels "effectively prevent minors from accessing harmful content." As such, the EC said the platforms are failing to protect the wellbeing and rights of minors, and it demanded that they put privacy-preserving age verification systems in place.

Furthermore, the EC said the quartet did not use objective and thorough methodologies to fully assess the risks to minors accessing content on their platforms. The regulator determined that Stripchat, XVideos and XNXX either misrepresented or failed to take into account consultations with organizations that specialize in children's rights and age verification systems in their risk assessments. It also suggested that the platforms' risk assessments "disproportionately emphasized business-centric concerns, such as reputational damage, rather than focusing on the societal risks to minors."

The platforms now have the chance to review the EC's preliminary findings and respond to them. They can implement measures to remedy the alleged DSA breaches as well. However, if the Commission confirms that the platforms failed to adhere to the DSA and it decides to issue a non-compliance decision, the porn providers could be on the hook for fines of up to six percent of their global annual turnover.

“In the EU, online platforms have a responsibility. Children are accessing adult content at increasingly younger ages and these platforms must put in place robust, privacy-preserving and effective measures to keep minors off their services,” Henna Virkkunen, the European Union’s executive vice-president for tech sovereignty, security and democracy, said in a statement. “Today, we are taking another action to enforce the DSA — ensuring that children are properly protected online, as they have the right to be.”


This article originally appeared on Engadget at https://www.engadget.com/big-tech/eu-says-pornhub-and-others-failed-to-stop-minors-accessing-adult-content-155632108.html?src=rss

OpenAI drops plans to release an adult chatbot

OpenAI has "indefinitely" abandoned plans to release an erotic chatbot for adults following concerns from employees and investors, the company confirmed to The Financial Times. Plans for such a feature, first announced in October 2025 for release in December last year, had already been delayed while the company debated whether to release it at all. It's the second app OpenAI has decided to shelve this week, after announcing on Tuesday that it was shutting down its Sora video generator.

The adult-oriented chatbot, reportedly called "Citron mode," is now on hold with no planned release date. The company reportedly had difficulty training models that were previously built to avoid erotic content, as well as filtering out illegal content like bestiality or incest, two people familiar with the matter told the FT.

OpenAI said that it wanted to conduct long-term research on the effects of erotic chats and user attachment to AI, adding that there was not yet enough "empirical evidence" on the subject. The company also said it wanted to focus on its core productivity tools, like coding assistants, and drop "side quests" like Sora and the erotic chatbot.

The idea for adult features came after OpenAI announced that it would add parental controls and automatic age detection features for ChatGPT. CEO Sam Altman said back in October that the company had always been careful about such issues over concerns around unhealthy AI attachments, but felt comfortable that it could "safely relax the restrictions in most cases."

However, the adult mode had reportedly caused concern among investors, particularly amid the controversy caused by rival xAI's Grok model, which generated deepfake nudes of real people and children. Staff also worried about the feature, with one senior employee even leaving the company over the issue. "AI shouldn’t replace your friends or your family; you should have human connections," he told the FT.

Another challenge is OpenAI's age-checking tech, introduced following lawsuits from families who said that ChatGPT harmed their children. The tech reportedly has an error rate higher than ten percent, which would still give a large number of young people access to adult content. OpenAI said that figure is within the industry-standard range and that it is continuing to work on its accuracy.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-drops-plans-to-release-an-adult-chatbot-113121190.html?src=rss

Oversight Board tells Meta expanding Community Notes outside of US poses ‘significant’ risks

Meta didn't consult its Oversight Board last year when it announced sweeping policy changes to content moderation and a rollback of third-party fact checking in the United States in favor of Community Notes. But the company did ask the board for advice on how to expand the crowd-sourced fact checks to other countries.

Now the Oversight Board is publishing its advice to Meta. In a 15,000-word policy advisory opinion, the group urged Meta to be cautious with an international rollout, warning that an expansion of the program could "pose significant human rights risks and contribute to tangible harms" if safeguards are not put in place. 

The board, notably, was asked to weigh in on a fairly narrow set of questions, including how Meta should evaluate whether to withhold the feature in certain countries. Meta "respectfully" asked the Oversight Board to avoid "general" critiques about the system, which it has said is modeled after X.

In its opinion, the Oversight Board said that Community Notes "could enhance users’ freedom of expression and improve online discourse" with enough safeguards. But it recommended Meta withhold the feature in countries with "high polarization," as well as countries in the midst of a crisis or "protracted conflict." The board also said that Meta should avoid countries with a history of organized disinformation networks, because the notes may be more easily manipulated in such places, and countries with "linguistic complexity" that Meta may be ill-equipped to understand.

Depending on how you interpret that advice, that could exclude quite a few countries, though the board stopped short of making country-specific recommendations. Still, it raises questions about how closely Meta will follow the suggested guidelines. For example, the United States could be considered a country with "high polarization." (Community Notes has been live in the US for more than a year.)

While the Oversight Board was careful to say it "neither endorses nor opposes" an expansion of Community Notes, it did discuss Meta's approach to fact checking, noting that its partnerships with outside fact-checking organizations are still largely in place outside of the US. And the opinion cautions against ending these relationships, noting that research into Community Notes on X shows that authors writing notes often rely on work done by professional fact checkers.

"Community Notes and fact checking are not mutually exclusive," Oversight Board member Paolo Carozza tells Engadget. "One doesn't have to replace or substitute for the other, they can coexist. And in some situations, there are really important reasons for them to coexist. The board really deliberately stayed away from any kind of suggestion that the introduction of Community Notes ought to result in the removal or ending of fact checking."


This article originally appeared on Engadget at https://www.engadget.com/social-media/oversight-board-tells-meta-expanding-community-notes-outside-of-us-poses-significant-risks-100000213.html?src=rss

Reddit will prompt some accounts to ‘verify humanness’ in latest bot crackdown

Reddit CEO Steve Huffman has detailed the company's latest plan to fight bots, which means some accounts will need to "verify humanness," though the company is stopping short of widespread identity verification. In an update, Huffman said that in "rare" cases, accounts that seem "fishy" will be prompted for additional verification.

Such prompts "will not apply to most users," according to Huffman, but will apply to accounts where Reddit detects signs of automated posting or bot-like behavior. If the account doesn't pass the verification test, it may be "restricted" from the platform. For now, verification will take the form of on-device methods, including Face ID and passkeys. But the company is considering alternative methods, including World ID, the identity verification system from the face-scanning orb venture co-founded by Sam Altman. "I think the internet needs verification solutions like this, where your account information, usage data, and identity never mix," Huffman writes.

As part of the new policy, Reddit is also adding an "[APP]" label to existing "good" bots on the platform and making it easier for users to report suspected "bad" bots. The company is also grappling with a growing number of age verification laws. Reddit is “exploring” ways to “comply with these regulations without compromising user privacy,” Huffman said.

The company is clearly trying to walk a careful line in how it approaches verification. Huffman notes that Reddit intends to "confirm humanness" rather than verify users' actual identities, which would erode the anonymity that Reddit is known for. But the rise of agentic AI has meant that Reddit is contending with the same sorts of bot-driven spam that took down the short-lived reboot of Digg.

Of course, Reddit is also filled with AI-generated material that's shared by actual humans but may be considered spammy by other users. The company has no plans to crack down on such content, at least for now, according to Huffman. "For better or worse, using AI to write is part of how people will communicate in the future (albeit annoying), so our current focus is to ensure there is a real, live human behind the accounts you’re seeing."


This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-will-prompt-some-accounts-to-verify-humanness-in-latest-bot-crackdown-161000181.html?src=rss

Anthropic releases safer Claude Code ‘auto mode’ to avoid mass file deletions and other AI snafus

Anthropic has begun previewing "auto mode" inside of Claude Code. The company describes the new feature as a middle path between the app's default behavior, which sees Claude request approval for every file write and bash command, and the "dangerously-skip-permissions" command some coders use to make the chatbot function more autonomously.

With auto mode enabled, a classifier system guides Claude, giving it permission to carry out actions it deems safe, while redirecting the chatbot to take a different approach when it determines Claude might do something risky. In designing the system, Anthropic aimed to reduce the likelihood of Claude carrying out mass file deletions, extracting sensitive data or executing malicious code.

Of course, no system is perfect, and Anthropic warns as such. "The classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk," the company writes. 
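
Anthropic hasn't published how the classifier itself works, but the general flow it describes (screen each proposed action, allow the ones judged safe, and redirect the agent otherwise) can be sketched in a few lines. Everything below is hypothetical; the regex list is a crude stand-in for whatever model Anthropic actually uses, not its implementation:

```python
# Hypothetical sketch of permission-gating proposed agent actions.
# None of this reflects Anthropic's actual classifier; the names and
# rules below are purely illustrative.
import re
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

# Toy patterns standing in for a learned risk classifier.
RISKY_PATTERNS = [
    (r"\brm\s+-rf\b", "mass file deletion"),
    (r"\bcurl\b.*\|\s*(sh|bash)\b", "piping remote code into a shell"),
    (r"\.env\b|\bid_rsa\b", "touching likely secrets"),
]

def classify_action(command: str) -> Verdict:
    """Return whether a proposed shell command looks safe enough to auto-run."""
    for pattern, label in RISKY_PATTERNS:
        if re.search(pattern, command):
            return Verdict(False, f"blocked: looks like {label}")
    return Verdict(True, "allowed: no risky pattern detected")

def run_with_auto_mode(command: str) -> str:
    """Allow safe actions; redirect the agent when an action looks risky."""
    verdict = classify_action(command)
    if verdict.allowed:
        return f"executing: {command}"
    # In the real feature, the agent is steered toward a safer alternative
    # rather than simply being stopped with an error.
    return f"redirect agent: {verdict.reason}; propose a safer approach"

print(run_with_auto_mode("pytest -q"))
print(run_with_auto_mode("rm -rf /"))
```

A real system would presumably weigh far more context than a single command string, which is exactly the gap Anthropic's caveat about ambiguous user intent points to.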

Anthropic doesn't mention a specific incident as inspiration for auto mode, but the recent 13-hour AWS outage Amazon suffered after one of the company's AI tools reportedly deleted a hosting environment was probably front of mind for the company. Amazon blamed that specific incident on human error, saying the staffer involved had "broader permissions than expected."

Team plan users can preview auto mode starting today, with the feature set to roll out to Enterprise and API users in the coming days.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-releases-safer-claude-code-auto-mode-to-avoid-mass-file-deletions-and-other-ai-snafus-142500615.html?src=rss

X is changing its revenue-sharing policy to deter users pretending to be Americans

X is updating its revenue-sharing incentives to give more weight to engagement from a user’s home region, Nikita Bier, the company’s Head of Product, has announced. Bier said the change in policy was intended to “encourage content that resonates with people in [the user’s] country, in neighboring countries and people who speak [their] language.”

Bier continued that while X appreciates everyone’s opinion on US politics, the company is hoping the new policy can “disincentivize gaming the attention of US or Japanese accounts.” The US and Japan have the largest number of users on X. Bier didn’t mention it outright, but dozens of popular accounts tweeting pro-Trump sentiments and commentaries focusing on US politics in general were revealed to be based outside the US late last year, when X rolled out a transparency feature that exposed users’ locations. Those accounts, which pretended to be from the US and garnered millions of likes, views and reposts, turned out to be based in countries like India, Kenya and Nigeria. 

“X will be a much richer community when there's relevant posts for people in all parts of the world,” Bier said. When one user responded to his post that some countries barely have any users, making it hard to earn money from the website, Bier just suggested that they should write about their day-to-day experiences. “Of course, you’re welcome to continue chiming in on American politics. We just won’t send money overseas for that content,” he said. X’s new policy will start taking effect on Thursday, March 26.

Update, March 25, 2026, 11:30AM ET: According to a tweet from Elon Musk, X "will pause moving forward with this until further consideration."

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-is-changing-its-revenue-sharing-policy-to-deter-users-pretending-to-be-americans-090701729.html?src=rss

Baltimore sues xAI over Grok deepfakes

Grok has already taken extensive heat after the AI chatbot's image generation tool was used to create an estimated 3 million sexualized images over 11 days, including 23,000 of minors, according to the Center for Countering Digital Hate. Regulators around the world have limited access to the platform or launched investigations into its potentially illegal and nonconsensual image generation. The US government hasn't made any moves against xAI or its platform at the federal level, but today the city of Baltimore filed a municipal lawsuit against the company.

The lawsuit takes a different tack, arguing that Elon Musk's businesses violated the city's Consumer Protection Ordinance. The complaint, as reported by The Guardian, said that xAI marketed Grok as an all-purpose AI assistant without disclosing the risks and potential harms of using both Grok and the X social network.

"Baltimore’s consumer protection laws exist to safeguard residents from exactly this kind of emerging harm," City Solicitor Ebony M. Thompson said. "When companies introduce powerful technologies without adequate guardrails, the City has both the authority and the obligation to act. We are stepping in now to protect our residents, hold these companies accountable, and prevent these harms from becoming further entrenched as this technology continues to evolve."

The other notable action against Grok within the US is a potential class action lawsuit filed by three teenagers who alleged that photos of them were used to create child sexual abuse material.

This article originally appeared on Engadget at https://www.engadget.com/ai/baltimore-sues-xai-over-grok-deepfakes-214135922.html?src=rss