Snapchat is rolling out sponsored AI agents

It was only a matter of time before they found a way to use AI agents as corporate shills. On Tuesday, Snapchat rolled out AI Sponsored Snaps, a "new way for brands to show up in Chat through AI agents." Or, put another way, it's conversational advertising. (Yay?)

AI Sponsored Snaps will appear in the app's Chat tab (with a light gray "Ad" notation next to the brand name). After opening the chat, you can ask the agent questions about the brand it represents. Snap showed an example from its first partner for the initiative, Experian. The bot offers to answer your questions on saving money, improving your credit score and — there it is — exploring loans and credit cards.

Whether through credit card offers or other means, the AI agent will presumably try to guide you toward behavior that makes money for the sponsor. So, it isn't clear why this would be better for consumers than asking a general-purpose chatbot like Gemini or Claude the same questions. Maybe the answer is as simple as, "It isn't… but they know people will use it anyway."

Four screenshots, showing the process of chatting with a sponsored AI agent.
Snap

"Conversation is becoming the most valuable real estate in advertising," Snap's Chief Business Officer, Ajit Mohan, wrote in a press release. "AI is accelerating that shift, turning chat into the place where people discover products, ask questions, and make decisions in real time. The real opportunity isn't just putting ads into those environments, it's designing formats that feel native to how people already talk."

Snap says more than half a billion people have messaged its My AI feature since it launched three years ago. That was despite a shaky start, where the bot told researchers and journalists posing as young teenagers how to mask the smell of alcohol or cannabis and set the mood for sex.

This article originally appeared on Engadget at https://www.engadget.com/ai/snapchat-is-rolling-out-sponsored-ai-agents-162720124.html?src=rss

Microsoft is reportedly offering voluntary buyouts to up to 7 percent of its employees

Microsoft is planning to get rid of more US employees via its first voluntary buyout program, CNBC reports. The buyout program will reportedly be offered to US employees at "the senior director level and below whose years of employment and age add up to 70 or higher," and could cover up to 7 percent of the company's US workforce. 

With around 125,000 employees in the US as of June 2025, that could mean up to 8,750 will be offered a paid exit when Microsoft begins its program in May. That's a smaller figure than the 15,000 or so employees the company laid off in May and July of 2025, but still significant, particularly if the majority of employees do take the buyout.
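For the curious, the reported eligibility rule and the headcount estimate above can be sketched in a few lines. This is purely illustrative: the numbers come from the figures cited in the article, and the actual program terms per CNBC's report may differ.

```python
# Sketch of the reported "rule of 70" eligibility check and the
# headcount math cited above (illustrative only; actual program
# terms may differ from CNBC's report).

def eligible(age: int, years_at_company: int) -> bool:
    """Reported criterion: age plus tenure must total 70 or higher."""
    return age + years_at_company >= 70

us_headcount = 125_000    # approximate US employees as of June 2025
max_offer_share = 0.07    # up to 7 percent of the US workforce

max_offers = int(us_headcount * max_offer_share)
print(max_offers)  # 8750, matching the article's estimate

print(eligible(age=55, years_at_company=15))  # True: 55 + 15 = 70
print(eligible(age=45, years_at_company=20))  # False: 45 + 20 = 65
```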

"Our hope is that this program gives those eligible the choice to take that next step on their own terms, with generous company support," Microsoft's executive vice president and chief people officer Amy Coleman shared in a memo viewed by CNBC.

Engadget has contacted Microsoft to confirm the existence of the voluntary buyout program and other details CNBC reported. We'll update this article if we hear back.

Microsoft used its 2025 layoffs to streamline layers of management and its video game business, but these new cuts may have a lot more to do with AI. Not necessarily because the company's adoption of AI tools has made employees redundant, but rather because Microsoft continues to aggressively spend on AI infrastructure. The company said it spent $37.5 billion in capital expenditures during Q2 2026, much of which went toward data center buildout.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/microsoft-is-reportedly-offering-voluntary-buyouts-to-up-to-7-percent-of-its-employees-200050484.html?src=rss

France’s national agency for managing IDs and passports suffered a data breach last week

The French government confirmed that France Titres, also known as Agence nationale des titres sécurisés (ANTS), experienced a security breach last week. The agency said it detected the breach on April 15. The next day, a hacker claimed responsibility and said they were attempting to sell up to 19 million records. According to BleepingComputer, the data does not appear to have been widely leaked yet. 

France Titres is responsible for the country's identification and registration materials, including driver’s licenses, national ID cards, passports and immigration documents. The compromised data includes full names, email addresses, dates of birth, account identifiers, login IDs, phone numbers and mailing addresses. The agency said that while the breach did not permit access to its portals, the exposed information could be used for phishing attacks or other illicit actions. The announcement advised caution regarding any suspicious communications claiming to be from the agency.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/frances-national-agency-for-managing-ids-and-passports-suffered-a-data-breach-last-week-201432189.html?src=rss

Hey Meta workers, are you getting paid for those keystrokes?

No longer content to subsume recognizable intellectual properties, the majority of the indexed internet and basically all of the world's books, AI will apparently now begin devouring its own workforce.

A report in Reuters alleged that the keystrokes, mouse movements and clicks of Meta's workforce are to be captured for the purposes of training AI — something the company's communications department was happy to confirm as accurate! In a cheery missive, a company spokesperson told Engadget that "If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them [...] we’re launching an internal tool that will capture these kinds of inputs on certain applications to help us train our models."

All this leads one to ask the obvious question: hey, what the fuck?

The nature of at-will employment in the United States is such that your boss basically never needs to explain why your job duties change, but it's rarely so sweeping, so brazen or so unavoidably tied to the reminder that you are being surveilled at a frighteningly granular level. Gross!

Installing keyloggers on someone else's computer in a non-work setting can often constitute a criminal offense (hello, CFAA!), and it's frankly weird that we allow this sort of thing to happen in the workplace at all. But in this case, there's at least some possibility this data may eventually be used to replace the exact people currently strong-armed into making those clicks and clacking those keys — or as a thin excuse to lay a lot of them off.

It's not as though the data underpinning large language models is worthless. Ill-gotten information has been the subject of exorbitant settlements and many pending court cases with considerable sums riding on their eventual judgments. If Meta thought it could obtain this sort of data from its estimated 3.5 billion combined users instead of its comparatively paltry body of employees without it immediately reading as the single most invasive chapter in a laughably long history of move fast, break things, and never admit to the mess, wouldn't it just... do that? Technology has progressed so far, yet people continue to really hate feeling taken advantage of. And that sort of thing is still bad for business.

In a fragile economy floated by rampant self-dealing and the shifting moods of a few very rich weirdos, even the mere mention of AI's relentless forward march to annihilate its own creators can make a shoe company's stock pop, however briefly.

Maybe that's why Meta was delighted to confirm the broad details of the Reuters story, yet declined multiple requests to comment on if workers can opt out of this surveillance, or if they are being compensated in any way for their data. I, for one, would still love to know!

Do you work at Meta and want to talk confidentially? I'm @amarae.60 on Signal.

This article originally appeared on Engadget at https://www.engadget.com/general/hey-meta-workers-are-you-getting-paid-for-those-keystrokes-131934881.html?src=rss

Anthropic is investigating ‘unauthorized access’ of its Mythos cybersecurity tool

Anthropic is investigating potential "unauthorized access" to its Claude Mythos model that has been touted for its ability to find cybersecurity flaws, the company told Bloomberg. A group gained access to the model through a third-party contractor portal and by using internet sleuthing tools, according to the report. However, the group is only interested in trying the models and not using them maliciously, according to a person familiar with the matter. 

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," Anthropic said in a statement. 

The Claude Mythos Preview arrived earlier this month as part of "Project Glasswing" with significant fanfare. Anthropic limited the preview release to a small number of trusted test companies including Amazon, Microsoft, Apple and Cisco. Another was Mozilla, which said the model helped it find and patch 271 Firefox vulnerabilities. A growing number of banks and government agencies have been seeking access as well in order to safeguard their own systems. 

However, several unauthorized users (who reportedly have a private chat on Discord) supposedly gained access to Mythos through a developer portal and by making an educated guess as to where the model might be located. That same group may also have access to other unreleased Anthropic models, according to the report. 

The new Mythos model has gained notoriety of late for its supposed ability to sniff out security flaws in operating systems and internet browsers. This has prompted some skepticism among security researchers, but also fear that AI-generated cyberattacks could become a "real threat," Alex Zenla, CTO of cloud security firm Edera, recently told Wired. Anthropic was recently designated a "supply chain risk" by the US Department of Defense, but has been in talks with the Trump administration of late to have that label removed. 

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-is-investigating-unauthorized-access-of-its-mythos-cybersecurity-tool-091017168.html?src=rss

Mozilla says it patched 271 Firefox vulnerabilities thanks to Anthropic’s Claude Mythos

Anthropic's buzzy announcement about using AI to improve cybersecurity earlier this month was met with plenty of skepticism. However, Mozilla shared some details that support the use of the company's special Claude Mythos Preview model as a way to protect critical services. Using Mythos helped Mozilla's team find and patch 271 vulnerabilities in the latest release of the Firefox browser. "So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t," the foundation said.

The blog post from Mozilla feels like a positive sign for Anthropic's Project Glasswing. Obviously the AI company would want to put itself in the best possible light while presenting its own initiative, but there's something encouraging about hearing the benefits from a third party. Mozilla also noted that in its time with Claude Mythos, the AI didn't turn up any bugs that a human wouldn't have been able to find, given enough time and resources, suggesting that AI can't yet crack cybersecurity protections any better than a determined person can.

An organization successfully using AI for good is certainly a refreshing change of pace in tech news. And for those Firefox users who aren't personally interested in applying any generative AI in their browsing, Mozilla has given the option to turn it all off for the past several months.

This article originally appeared on Engadget at https://www.engadget.com/ai/mozilla-says-it-patched-271-firefox-vulnerabilities-thanks-to-anthropics-claude-mythos-224330023.html?src=rss

AI company deletes the 3 million OkCupid photos it used for facial recognition training

When online platforms violate their own privacy policies to sell your photos, have no fear: They just might have to pay an undisclosed settlement fee 12 years later. (Who says justice is dead?) According to Reuters, AI company Clarifai says it has deleted 3 million profile photos taken from dating site OkCupid in 2014. It follows a settlement reached last month between the FTC and Match Group, OkCupid's owner.

The Delaware-based Clarifai reportedly certified the data deletion to the FTC on April 7. The company also confirmed to US Representative Lori Trahan (D-MA) that it deleted any models that trained on the data. Clarifai told the representative's office that it hadn't shared the data with third parties.

The FTC opened the investigation in 2019, after The New York Times reported that Clarifai had built a training database using OkCupid dating profile photos. The behavior was a direct violation of OkCupid’s privacy policy. Court documents reviewed by Reuters reveal that Clarifai asked OkCupid executives for the data in 2014. Apparently, they obliged.

Five people sitting on stairs. Creepy boxes surround their faces, estimating age, race and gender.
Clarifai uses this creepy facial profiling example to sell its services.
Clarifai

"We're collecting data now and just realized that OkCupid must have a HUGE amount of awesome data for this," Clarifai founder Matthew Zeiler wrote in an email to OkCupid co-founder Maxwell Krohn. The AI startup used the dating site's images to build a facial recognition service that can identify a person's age, gender and race. (Another brilliant and totally ethical idea from Clarifai, tapping into unsecured city surveillance cameras without authorization, was reportedly shuttered.)

Zeiler suggested to The New York Times in 2019 that people needed to, well, get over it. "There has to be some level of trust with tech companies like Clarifai to put powerful technology to good use, and get comfortable with that," the AI founder declared. Some of OkCupid's founders were reportedly investors in Clarifai.

As part of the settlement, the FTC "permanently prohibited" OkCupid from misrepresenting its data collection and privacy controls. TechCrunch notes how strange it is to use that as a penalty, given that FTC rules already bar that behavior.

This article originally appeared on Engadget at https://www.engadget.com/ai/ai-company-deletes-the-3-million-okcupid-photos-it-used-for-facial-recognition-training-195223996.html?src=rss

Amazon will invest up to $25 billion in Anthropic in a broad deal

Amazon and Anthropic are strengthening their ties once again, with steep financial commitments made on both sides. Today, Amazon announced that it will invest $5 billion in the AI company, along with as much as $20 billion in additional payments if certain milestones are met. This news follows the initial $4 billion investment Amazon made in Anthropic in 2023 and a second $4 billion round from 2024.

For its part, Anthropic has committed to continued use of Amazon's custom Trainium silicon for its AI models. The latest agreement will see Anthropic promising to spend more than $100 billion on AWS technologies over the coming decade. It will secure up to 5 gigawatts of current and future chip capacity for training and powering its models. Their partnership is also bringing Anthropic's Claude platform to Amazon Web Services customers within the AWS portal, removing the need for additional credentials.

This article originally appeared on Engadget at https://www.engadget.com/ai/amazon-will-invest-up-to-25-billion-in-anthropic-in-a-broad-deal-225239302.html?src=rss

The European Commission wants Google to share search engine data with competitors

The European Commission has proposed new measures for Google aimed at bringing the tech giant's search business into compliance with the Digital Markets Act. In order to allow third-party online search engines to be competitive with Google, the EC has recommended that Google permit those services to access its treasure trove of search engine data. As it stands, the proposal would require Google to let rivals see data points "such as ranking, query, click and view data, on fair, reasonable and non-discriminatory terms."

"Data is a key input for online search and for developing new services, including AI," said Teresa Ribera, the Commission's executive vice-president for Clean, Just and Competitive Transition. "Access to this data should not be restricted in ways that could harm competition. In fast-moving markets, small changes can quickly have a big impact. We will not allow practices that risk closing markets or limiting choice."

European regulators have been using the Digital Markets Act to hammer at Google's dominant market position for several years. Beginning in March 2024, Google was required to comply with the DMA, and it planned some changes accordingly. A year later, though, the Commission levied preliminary charges against Google, arguing that Google Search and the Play Store had not met their obligations for market competition. Google offered some possible adjustments to how search results are displayed in response, but it seems the regulator will keep pushing for more robust changes to Google's search business.

If you think all that sounds like something Google is unwilling and unlikely to do, you'd be correct. For starters, the actual requirements for Google could change in the coming months. The EC is accepting comments on the proposed measures through May 1, and Google's legal team is certain to have a lot of opinions to share. We've reached out to the company for a comment on these preliminary measures. A final, binding decision on Google's next steps is due by July 27, so we're expecting a lot of back-and-forth between the parties until that date.

Update, April 17 2026, 11:36AM ET: Reached for comment, Google's Senior Competition Counsel Clare Kelly told Engadget, "hundreds of millions of Europeans trust Google with their most sensitive searches — including private questions about their health, family, and finances — and the Commission’s proposal would force us to hand this data over to third parties, with dangerously ineffective privacy protections. We will continue to vigorously defend against this overreach, which far exceeds the DMA’s original mandate and jeopardizes people’s privacy and security."

This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-european-commission-wants-google-to-share-search-engine-data-with-competitors-192709530.html?src=rss

Meta isn’t setting its Oversight Board free just yet

The Oversight Board — the policy body Meta created to weigh its most impactful moderation rulings — has seen its role within Mark Zuckerberg's empire come into question due to shifting content policy priorities and dwindling investment. The Oversight Board has taken steps to formalize its long-contemplated desire to work with other companies, but Engadget has learned Meta has thus far declined to move forward with that process. 

Over the last year, board members have become increasingly interested in artificial intelligence policy and how their experience shaping Meta's content rules could translate into advising companies in the generative AI space. That interest has intensified as some AI companies have privately signaled they would be open to working with the board, according to a source familiar with the organization who was not permitted to speak publicly. The board began talks with Meta last fall about the possibility, which would require the company to sign off on changes to the legal documents that govern the board's operations. But Meta officials have not indicated whether the company is willing to make those changes, which would likely require approval from top executives. 

Platformer, which first reported on Meta's budget negotiations with the Oversight Board, noted that the company "has long encouraged the board to seek additional funding sources." So far, no other company has publicly shown interest in working with the group, though the board has had conversations with other firms behind the scenes. 

Oversight Board co-chair Paolo Carozza told Engadget in December that there had been "really preliminary" discussions between the board and AI companies, though he declined to name which ones in particular. "It feels like quite a different moment now, largely because of generative AI, LLMs, chatbots [and] the way that a variety of retail-level users of these technologies are facing a whole new set of challenges and harms that's attracting a lot of scrutiny," he said at the time. 

Meta has readily agreed to amend the board's governing documents in the past — like when the trust that controls the Oversight Board's budget funded a new organization to mediate content moderation disputes in Europe. While Meta executives once promoted the idea of its ostensibly independent Oversight Board working with other social media platforms, the prospect of the group working with a competitor as it pursues AI superintelligence is apparently more complicated. 

Over the last five years, board members have received briefings from officials at Meta about the inner workings of its moderation systems and other non-public details as part of their work with the company. That raises practical questions about how the board would safeguard Meta's proprietary information, as well as larger strategic questions about whether Meta would want its Oversight Board to work with some of the companies it's now fiercely competing with, the source said. It's not clear how invested Meta's current leadership is in ensuring a future for the board. Former president of global affairs Nick Clegg, who was one of the most vocal champions of the board's work, left the company last year.

Meanwhile, other board members have publicly made the case that the group, which consists of free speech and human rights experts from around the world, is well-positioned to guide AI companies grappling with an increasing number of real-world harms. When Anthropic published a "Claude Constitution" earlier this year, the board published a lengthy analysis from member Suzanne Nossel arguing that Claude also needed the kind of "oversight" the board has provided for Meta. She made a similar argument for the wider AI industry in an op-ed in The Guardian last month.

While Nossel denied that she was directly pitching the Oversight Board to Anthropic, she said that AI companies face many of the "same dilemmas" as social media platforms. "When the board was first created, there was the notion that we might work across the industry," she told Engadget. "Now, as the world shifts toward an AI-centric paradigm, we're very interested in what our experience can bring to that conversation." 

Oversight Board members, who naturally have a vested interest in expanding their purview, aren't the only members of the industry who have warned that generative AI platforms are essentially speed-running social media companies' playbook. A former OpenAI researcher recently wrote that "OpenAI Is Making the Mistakes Facebook Made," citing the AI company's moves toward optimizing for engagement and its plans for in-app advertising. The researcher cited Meta's Oversight Board as an example of the kind of independent governance that's needed in the AI industry.

The question of working with other companies has taken on new urgency as the Oversight Board faces the possibility that it will lose its backing from Meta. In a statement, a Meta spokesperson pointed to previous reports that Meta has committed to funding the board through 2028 and said that "nothing has changed." But a source familiar with the board tells Engadget that Meta has so far only handed over half of the smaller tranche of 2028 funds to the board amid ongoing discussions about its future, including whether it will expand its purview beyond Meta. 

There are also very real questions about how the Oversight Board fits into Meta's current strategy around content moderation. Zuckerberg announced last year that Meta was shifting away from most proactive moderation, ending fact-checking in the United States and rolling back hate speech rules. Zuckerberg himself reportedly led the push for these changes following a meeting with then President-elect Donald Trump. The Oversight Board, which Meta has sometimes asked to advise on major policy changes, was not consulted. The company recently said it plans to reduce the number of human moderators in favor of AI-based systems.

"The Oversight Board is currently engaged in meaningful discussions with Meta regarding its future and the evolution of its model to ensure the organization can address the most urgent emerging challenges in AI governance, standards, and accountability," an Oversight Board spokesperson said in a statement. "At this time, no decisions have been made about the Board’s future, and the organization’s day-to-day work and mandate remain unchanged.”

Critics have long said that the board, which has received more than $280 million from Meta, moves far too slowly. In a little more than five years of operation, the board has published more than 200 decisions about specific moderation issues, which Meta is required to uphold. Those decisions — a tiny fraction of the millions of requests it receives — can take months, though the board can opt to move more quickly. The board has also made hundreds of policy recommendations, which Meta has to respond to but isn't required to implement. The company has agreed to at least some changes in response to 75 percent of recommendations, according to the board. 

For the Oversight Board, working with a company besides Meta would begin to address some of the challenges it now faces. It would boost the group's credibility at a time when Meta seems to be re-evaluating its relationship with the board, and it would open up the possibility of new sources of funding. But the situation underscores another long-simmering tension when it comes to the role of the "independent" oversight organization. Meta has always been in control of how much influence the group can actually have. And it's not clear that the company is ready to let the board, which has spent the last five years learning the minutiae of Meta's content moderation and policy processes, advise the companies it's now competing with.

During its work with Meta, the Oversight Board has weighed in on its rules for AI several times. The board has criticized the company's "manipulated media" policy that governs deepfakes and other content, which led to Meta adopting new rules around AI labeling. In its most recent decision dealing with AI, the board urged Meta to invest in better AI detection tools and to collaborate more closely with other platforms. The company has not yet formally responded to those recommendations. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-isnt-setting-its-oversight-board-free-just-yet-153000172.html?src=rss