Hacker used Anthropic’s Claude chatbot to attack multiple government agencies in Mexico

Here's yet another troubling story about this "golden" era of AI. A hacker has exploited Anthropic's Claude chatbot to carry out attacks against Mexican government agencies, according to a report by Bloomberg. This resulted in the theft of 150GB of official government data, including taxpayer records, employee credentials and more.

The hacker used Claude to find vulnerabilities in government networks and to write scripts to exploit them. The hacker also tasked the chatbot with finding ways to automate data theft, according to cybersecurity company Gambit Security. The activity began in December and continued for around a month.

It appears the hacker was able to essentially jailbreak Claude through prompting, eventually bypassing the chatbot's guardrails: Claude initially refused the nefarious requests before ultimately relenting.

"In total, it produced thousands of detailed reports that included ready-to-execute plans, telling the human operator exactly which internal targets to attack next and what credentials to use," said Curtis Simpson, Gambit Security’s chief strategy officer.

Anthropic has investigated the claims, disrupted the activity and banned all of the accounts involved, according to a company representative. The spokesperson also said that its latest model, Claude Opus 4.6, includes tools to disrupt this kind of misuse.

It's also been reported that this hacker used ChatGPT to supplement the attacks, using OpenAI's chatbot to gather information on how to move through computer networks, determine which credentials were needed to access systems and how to avoid detection. OpenAI says it has identified attempts by the hacker to violate its usage policies and that the tools refused to comply.

The hacker remains unidentified. The attacks haven't been attributed to a specific group, but Gambit Security did suggest they could be tied to a foreign government. It's also unclear what the hacker wants to do with all of that data.

Mexico's national digital agency hasn't commented on the breach, but did note that cybersecurity is a priority. The state government of Jalisco denies that it was breached, saying only federal networks were impacted. Mexico's national electoral institute likewise denied any breaches or unauthorized access in recent months. It's worth noting that Gambit found at least 20 security vulnerabilities during its research that the country is likely not keen on highlighting.

This isn't the first time Claude has been used for a major cyberattack. Last year, hackers in China manipulated the tool into attempting to infiltrate dozens of global targets, succeeding in several cases. Anthropic just nixed its long-standing safety pledge, which committed to never train an AI system unless it could guarantee in advance that safety measures were adequate. So who knows what fresh hell the future will bring as the company's tools become more advanced.

This article originally appeared on Engadget at https://www.engadget.com/ai/hacker-used-anthropics-claude-chatbot-to-attack-multiple-government-agencies-in-mexico-171237255.html?src=rss

Amazon introduces three personality styles for Alexa+

Amazon is offering a new way for Alexa+ users to customize the AI assistant's communication style. The company has introduced three personalities for Alexa+, so the assistant can adopt an attitude that is Brief, Chill or Sweet. 

The Brief style will be exactly that: no small talk and no extra conversation. Chill is easygoing and seems to be inspired by caricatures of the surfer/stoner type, while the Sweet mode is almost aggressively perky and chipper. In the audio sample provided, when a user asks "Alexa, how's it going?" the Chill voice responds, "Life’s treating me well – all systems are Zen and the digital universe is spinning in harmony." In contrast, the Sweet one replies, "Absolutely fantastic! I’m radiating pure joy and ready to make your day incredibly amazing!"

Illustration explaining the three different personality styles users can apply to the Alexa+ assistant.
The three new personality styles will sit alongside the standard Alexa.
Amazon

Amazon explained that the three personality styles are based on five metrics: expressiveness, emotional openness, formality, directness and humor. The company may release additional options with different combinations of those sliding scale traits in the future.

For now, users can swap the assistant's vibe from the Alexa app or with the spoken command, "Alexa, change your personality style." Both approaches can also be used to swap back to the classic Alexa voice. All three personalities are available now for all Alexa+ customers.

This article originally appeared on Engadget at https://www.engadget.com/ai/amazon-introduces-three-personality-styles-for-alexa-140000602.html?src=rss

1Password plans are getting more expensive soon

1Password is increasing prices for its individual and family plans. The individual rate is rising from nearly $36 a year to $48, while the family option will cost $72 instead of $60. In emails sent to users, the company announced that the new rates will take effect at each user's next subscription renewal after March 27.

It's a sizable price hike, but 1Password hasn't been incrementally inching its fees higher every couple of years the way so many streaming subscriptions do. This is the biggest bump to its rates in several years, even though the company has kept adding cybersecurity tools, such as the new phishing protections that rolled out last month. Even at the higher cost, it's still one of the best options out there for password management.

Fortunately for those on a budget, we have seen 1Password offer pretty substantial discounts on its plans at times, often cutting the rates by as much as half. The company usually participates in the big deal sprees like Black Friday, but keep an eye out for standalone sales that might pop up year-round.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/1password-plans-are-getting-more-expensive-soon-213236400.html?src=rss

Reddit fined $19.6 million over age verification checks in the UK

A common theme in online age verification laws is the tension between user privacy and preventing children from accessing harmful or inappropriate content. Now the UK is sending a not-so-subtle message to Reddit on the subject, to the tune of £14.5 million ($19.6 million). The nation's Information Commissioner's Office (ICO) accused the company of misusing children's data and potentially exposing them to inappropriate content.

“Children under 13 had their personal information collected and used in ways they could not understand, consent to or control,” UK Information Commissioner John Edwards wrote in a statement. “That left them potentially exposed to content they should not have seen. This is unacceptable and has resulted in today’s fine.”

In July 2025, Reddit began requiring age verification to access adult content in the UK, in compliance with the Online Safety Act. However, that's only used to block under-18 users from sexually explicit, violent or other mature posts. The platform also prohibits users under 13 from accessing it altogether, but enforcement of that policy is lax: it merely requires users to declare, when signing up, that they're over 13. The ICO (accurately) described the method as "easy to bypass."

In its defense, Reddit told the BBC that it "didn't require users to share information about their identities, regardless of age, because we are deeply committed to their privacy and safety." The company said it would appeal the decision. "The ICO's insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users' online privacy and safety," the spokesperson added.

"It's concerning that a company the size of Reddit failed in its legal duty to protect the personal information of UK children," Edwards said. "Companies operating online services likely to be accessed by children have a responsibility to protect those children by ensuring they’re not exposed to risks through the way their data is used. To do this, they need to be confident they know the age of their users and have appropriate, effective age assurance measures in place.”

“Reddit failed to meet these expectations,” he added. “They must do better, and we are continuing to consider the age assurance controls now implemented by the platform.” The ICO also accused Reddit of failing to conduct a data protection impact assessment by January 2025.

The Guardian notes that the £14.5m fine is the third-largest handed down by the ICO. It trails only a £20m fine for British Airways involving a data breach disclosure and an £18.4m penalty for Marriott Hotels for exposing over 300 million customer records in a hack.

This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-fined-196-million-over-age-verification-checks-in-the-uk-173705048.html?src=rss

Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

Anthropic is issuing a call to action against AI "distillation attacks," after accusing three AI companies of misusing its Claude chatbot. On its website, Anthropic claimed that DeepSeek, Moonshot and MiniMax have been conducting "industrial-scale campaigns…to illicitly extract Claude’s capabilities to improve their own models."

Distillation in the AI world refers to when less capable models lean on the responses of more powerful ones to train themselves. While distillation isn't a bad thing across the board, Anthropic said that these types of attacks can be used in a more nefarious way. According to Anthropic, these three Chinese AI firms were responsible for more than "16 million exchanges with Claude through approximately 24,000 fraudulent accounts." From Anthropic's perspective, these competing companies were using Claude as a shortcut to develop more advanced AI models, which could also lead to circumventing certain safeguards.
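Mechanically, distillation is straightforward: the weaker "student" model is trained to match the stronger "teacher" model's output distribution rather than hard labels. Here's a minimal sketch of the standard temperature-softened distillation loss; this is an illustration of the general technique, not Anthropic's or any named lab's actual pipeline, and the function names are invented for the example:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields softer target distributions.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    A student model minimizes this loss so its output distribution for
    a given prompt drifts toward the teacher's, transferring capability
    without access to the teacher's weights or training data.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical logits give zero loss; the further apart, the larger the KL.
print(round(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]), 6))
```

Scaled up, this is why millions of prompt/response exchanges are valuable: each teacher response is, in effect, a free training label.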

Anthropic said in its post that it was able to link each of these distillation campaigns to the specific companies with "high confidence" thanks to IP address correlation, request metadata and infrastructure indicators, along with corroboration from others in the AI industry who have noticed similar behaviors.

Early last year, OpenAI made similar claims of rival firms distilling its models and banned suspected accounts in response. As for Anthropic, the company behind Claude said it would upgrade its system to make distillation attacks harder to do and easier to identify. While Anthropic is pointing fingers at these other firms, it's also facing a lawsuit from music publishers who accused the AI company of using illegal copies of songs to train its Claude chatbot.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-accuses-three-chinese-ai-labs-of-abusing-claude-to-improve-their-own-models-205210613.html?src=rss

Ring could be planning to expand Search Party feature beyond dogs

Ring CEO Jamie Siminoff has indicated that the company's controversial Search Party feature might not always be just for lost dogs, according to emails obtained by 404 Media. A creepy surveillance tool being used to surveil. Who could have predicted that?

"I believe that the foundation we created with Search Party, first for finding dogs, will end up becoming one of the most important pieces of tech and innovation to truly unlock the impact of our mission," Siminoff wrote in an email to staffers. "You can now see a future where we are able to zero out crime in neighborhoods. So many things to do to get there but for the first time ever we have the chance to fully complete what we started."

The words "zero out crime in neighborhoods" are particularly troubling. It is, however, worth noting that this is just an email and doesn't necessarily indicate a plan by the company. Siminoff wrote the email back in October when Search Party first launched, which was months before the public backlash started. He did end the thread by noting he couldn't "wait to show everyone else all the exciting things we are building over the years to come."

One of those things could be the recently launched "Familiar Faces" tool, which uses facial recognition to identify people who wander into the frame of a Ring camera. It seems to me that combining the Search Party tech, which draws on the combined might of connected Ring cameras, with Familiar Faces could make for a very powerful surveillance tool that excels at finding specific individuals.

Siminoff also suggested in an earlier email to staffers that Ring technology could have been used to catch Charlie Kirk's killer by leveraging the company's Community Requests feature. This is a tool that allows cops to ask camera owners for footage, thanks to a partnership with the police tech company Axon.

Ring had planned an expansion of this program via a partnership with a surveillance company called Flock Safety. The companies canceled this partnership after a Super Bowl ad spotlighting the Search Party tool triggered public outcry. Ring didn't cite public sentiment for this decision, rather saying the integration would require "significantly more time and resources than anticipated."

Ring has responded to 404 Media's reporting, saying in an email that Search Party "does not process human biometrics or track people" and that "sharing has always been the camera owner's choice." This response did not provide any information as to what the future will hold for the company's toolset.

The company has been friendly with law enforcement since its inception. "Our mission to reduce crime in neighborhoods has been at the core of everything we do at Ring," founding chief Jamie Siminoff said when Amazon bought the company for $839 million back in 2018.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/ring-could-be-planning-to-expand-search-party-feature-beyond-dogs-175805706.html?src=rss

YouTube was down for thousands of users in the US

YouTube is experiencing an outage across the United States, with users in other countries including Canada, India, the Philippines, Australia and Russia also having problems accessing the website. The issue seems to have started at around 8 PM Eastern and reached 338,000 reports on Downdetector before starting to taper off. More users reported having issues with the app, but I personally lost access to the web homepage first.

As of 9:22 PM, users on Reddit are still reporting being unable to access YouTube. As of 9:33 PM, users are complaining that they still can't access the service, though others say it's back up for them. Some people are reporting a partial restoration of service, with the homepage now accessible but not showing any recommended videos.

Downdetector also received thousands of reports of Google being down at around 8 PM Eastern. As of 9:53 PM, Engadget Managing Editor Cherlynn Low reports that both YouTube and Google Home Assistant are still inaccessible for her. As of 10:12 PM Eastern, Team YouTube posted on X that the issue has been completely fixed. While it didn't say why YouTube went down, the team acknowledged the problem before 9 PM and posted an update 20 minutes later that its recommendation system was having issues, even though its homepage was back.

Update, February 17, 2026, 10:27 PM ET: YouTube says the issue has been completely fixed.

Update, February 17, 2026, 10:08 PM ET: Updated with reports that certain Google services are also down for some users.

Update, February 17, 2026, 9:34 PM ET: Updated with reports from users.

Update, February 17, 2026, 9:26 PM ET: Updated to correct time of outage, added new countries where it’s out and added new reports of YouTube still being inaccessible.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/youtube-was-down-for-thousands-of-users-in-the-us-020718788.html?src=rss

Texas AG sues TP-Link over purported connection to China

Texas is suing Wi-Fi router maker TP-Link for deceptively marketing the security of its products and allowing Chinese hacking groups to access Americans' devices, Attorney General Ken Paxton has announced. Paxton originally started looking into TP-Link in October 2025. Texas Governor Greg Abbott later prohibited state employees from using TP-Link products in January of this year.

TP-Link is no longer owned by a Chinese company and its products are assembled in Vietnam, but Paxton's lawsuit claims that because the company's "ownership and supply-chain are tied to China" it's subject to the country's data laws, which require companies to comply with requests from Chinese intelligence agencies. The lawsuit also says that firmware vulnerabilities in TP-Link's hardware have already "exposed millions of consumers to severe cybersecurity risks."

TP-Link provided the following statement to Engadget in response to the lawsuit:

The claims made by the Texas Attorney General’s office are without merit and will be proven false. TP-Link Systems Inc. is an independent American company. Neither the Chinese government nor the CCP exercises any form of ownership or control over TP-Link, its products, or its user data. TP-Link’s founder and CEO, Jeffrey Chao, resides in Irvine, CA, and is not and never has been a member of the CCP. To ensure the highest level of security, our core operations and infrastructure are located entirely within the United States, and all U.S. users' networking data is stored securely on Amazon Web Services servers. We will continue to vigorously defend our reputation as a trusted provider of secure connectivity for American families.

TP-Link was reportedly being investigated at the federal level in 2024 after its devices were connected to the massive "Salt Typhoon" hack that accessed data from multiple US telecom companies. Despite all signs pointing to the federal government getting ready to ban TP-Link in 2025, Reuters reports that the Trump administration paused plans to ban the company’s routers in early February, ahead of a meeting between President Donald Trump and President Xi Jinping.

Update, February 17, 3:38PM ET: Added statement from TP-Link.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/texas-ag-sues-tp-link-over-purported-connection-to-china-193802258.html?src=rss

EU launches second investigation into Grok’s nonconsensual image generation

X is facing yet another investigation into Grok's reported creation of nonconsensual sexual images on the platform. Ireland's Data Protection Commission (DPC) has announced an inquiry into X over the harmful intimate images and the processing of EU and EEA individuals' personal data, including children's.

In an 11-day period, Grok generated about three million sexualized images on X, an estimated 23,000 of which were of children, according to a review conducted from December 29 to January 9 by British nonprofit the Center for Countering Digital Hate (CCDH), which announced its results last month.

Critically, the investigation will determine whether X has broken GDPR laws. "The DPC has been engaging with XIUC since media reports first emerged a number of weeks ago concerning the alleged ability of X users to prompt the @Grok account on X to generate sexualised images of real people, including children," DPC deputy commissioner Graham Doyle said, referring to X using the full title X Internet Unlimited Company (XIUC). 

Doyle continued: "As the Lead Supervisory Authority for XIUC across the EU/EEA, the DPC has commenced a large-scale inquiry which will examine XIUC’s compliance with some of their fundamental obligations under the GDPR in relation to the matters at hand."

The DPC's probe could have repercussions for X across the EU, while also building on similar probes in the bloc. In January, the European Commission launched an investigation into whether X has violated the Digital Services Act. It's looking into whether X properly "assessed and mitigated" Grok's risks on the platform, including the spread of illegal content such as AI-generated nonconsensual sexually explicit images. Once again this includes those of children, a disturbing point that can't be emphasized too much.

X claimed in mid-January that it was preventing Grok from editing photos of real people to give them revealing clothing. However, this seems far from the truth. Earlier this month, a male reporter found Grok would still put him in revealing clothing and even add visible genitalia.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/eu-launches-second-investigation-into-groks-nonconsensual-image-generation-113239967.html?src=rss