Meta is reportedly planning to cut up to 20 percent of its staff in upcoming layoffs

Meta could be preparing for one of the largest layoffs in its history. According to a Reuters report, the tech giant is planning to cut about 20 percent of its workforce, though the outlet's sources say neither a date nor the exact number of layoffs has been finalized.

However, Reuters reported that Meta's top executives have told "other senior leaders" to start "planning how to pare back." According to the company's latest financial report, its headcount stood at 78,865 as of December 31, 2025, while revenue reached nearly $60 billion for the fourth quarter and more than $200 billion for the year. A Meta spokesperson told Reuters that this was "speculative reporting about theoretical approaches."

Meta is no stranger to major layoffs. Earlier this year, Meta cut about 1,000 employees within the Reality Labs division, which is responsible for the company's virtual reality and metaverse efforts. Early last year, Meta laid off about five percent of its workforce, following a smaller round of firings earlier that same month. Meanwhile, the company has been spending heavily to acquire AI startups, like Moltbook, a social network designed for AI agents, and Manus, a startup focused on AI agents for task automation.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/meta-is-reportedly-planning-to-cut-up-to-20-percent-of-its-staff-in-upcoming-layoffs-160812304.html?src=rss

Meta is killing end-to-end encryption in Instagram DMs

Meta is killing end-to-end encryption in Instagram DMs. The feature will "no longer be supported after May 8, 2026," the company wrote in an update on its support page. Unlike WhatsApp, Meta never made encryption available to all Instagram users, and it was never a default setting. Instead, users in "some areas" had the ability to opt in to encryption on a per-chat basis.

In a statement, a Meta spokesperson said the feature was being retired due to low adoption. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," the spokesperson said. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp.”

Interestingly, Meta's statement doesn't mention the status of encryption on Messenger. The company began turning on end-to-end encryption as a default setting in 2023 after years of work on the feature. A support page for Messenger currently states that the company "is in the process of securing personal messages with end-to-end encryption by default."

Meta's approach to encrypted messaging has changed several times over the years. It started encrypting WhatsApp chats in 2016. In 2019, Mark Zuckerberg outlined a "privacy-focused" revamp of the company's apps, saying at the time that "implementing end-to-end encryption for all private communications is the right thing to do." In 2021, the company's head of safety said that Meta was delaying its encryption work until 2023 in order to create stronger safety features.  

Meta’s use of encryption has been repeatedly criticized by law enforcement and some child safety organizations that say the feature makes it harder to catch predators who target children on social media. Recently, the topic has been raised numerous times during a trial in New Mexico over child safety. Internal documents that have surfaced as part of the trial show Meta executives and researchers debating the trade-offs between safety and privacy as it relates to encryption. 

In testimony that was broadcast during the trial, Zuckerberg said that safety issues were "a large part of the reason why it took so long" to bring encryption to Messenger. "There's been debate about this, but I think the majority of folks, from people who use our products to people who are involved in security overall, believe that strong encryption is positive," he said.


This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-killing-end-to-end-encryption-in-instagram-dms-195207421.html?src=rss

X could be breaching US sanctions on Iran, watchdog warns

The newly verified X account for Iran's supreme leader could be putting the company on the wrong side of US sanctions, according to a watchdog group. The Tech Transparency Project, which last month published a report on X granting premium perks to sanctioned officials in Iran, now says that the verified account for the country's new leader raises fresh questions about the issue. 

The TTP notes that the X account for Iran's new supreme leader, Mojtaba Khamenei, appears to be paying for an X premium subscription, even though Khamenei has been on the US government's list of sanctioned individuals since 2019. As the group points out, the Iran-based account was created this month and currently bears a blue checkmark, which typically indicates the account holder is paying for a subscription.

The account belonging to Mojtaba Khamenei has been boosted by other state-linked accounts in Iran, including the one that previously belonged to Khamenei's father. That account carries a gray checkmark, which indicates it belongs to a verified government official. Verified accounts on X are rewarded with extra visibility on the platform, along with other perks. The younger Khamenei's verified account has already gained more than 20,000 new followers in the hours since TTP first posted about it.

"The new Supreme Leader's account is just the latest account for a sanctioned entity apparently paying X for premium services," TTP director Katie Paul said in a statement to Engadget. "TTP has identified dozens of accounts, many linked to designated terrorists, that subscribed to X premium over the past three years. What's more concerning than the blatant disregard for U.S. sanctions law is the fact that Musk's companies have a contract with the Pentagon while X is actively profiting from U.S. adversaries."

As Paul notes, this isn't the first time TTP has raised questions about whether X is running afoul of US sanctions via its premium service. In 2024, the group published a report noting that X was accepting paid verification from more than two dozen sanctioned individuals and groups. The company said at the time that it had "a robust and secure approach in place for our monetization features."

X didn't respond to a request for comment. But in the hours after Engadget reached out about Khamenei’s account, the blue checkmark was removed. The company also removed blue checks from a handful of Iran-based accounts flagged by TTP last month following reporting from Wired.

Update, March 13, 2026, 9:08AM PT: This story was updated to reflect changes made to Khamenei’s account following publication.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-could-be-breaching-us-sanctions-on-iran-watchdog-warns-213550284.html?src=rss

Ukraine allows allies to train AI models on its battlefield data

Ukraine's four-year war with Russia has made it the world leader in battlefield drone technology. One byproduct of that is that the data it collects has become one of the country's most valuable assets. On Thursday, Ukraine played that card, saying it will begin sharing its battlefield data with allies to train drone AI software.

"In modern warfare, we must defeat Russia in every technological cycle," Ukraine Defense Minister Mykhailo Fedorov wrote on Telegram (translated from Ukrainian). "Artificial intelligence is one of the key areas of this competition."

Fedorov previewed the move when he took his post in January. At the time, the tech-savvy cabinet member pledged to "more actively" bring allies into projects. Foreign allies and companies have sought access to the country's data as, for better or worse, AI increasingly becomes an integral element of warfare.

Fedorov says Ukraine has a platform that will safely train partners' AI models without providing sensitive data. The system is said to provide continually updating datasets, including large volumes of photos and videos.

"For us, this is the next step in the development of win-win cooperation," Fedorov wrote. "Partners get the opportunity to train their AI models on real data from modern warfare. And [for] Ukraine: faster development of autonomous systems and new technological solutions for the front."

Last year, Ukrainian President Volodymyr Zelenskyy warned global leaders of a dangerous escalation tied to drone tech and AI. “We are now living through the most destructive arms race in human history,” he said at a meeting of the UN General Assembly in September. However, given the ugly realities in his country, Zelenskyy reiterated his need for armaments. “The only guarantee of security is friends and weapons,” he said.

This article originally appeared on Engadget at https://www.engadget.com/ai/ukraine-allows-allies-to-train-ai-models-on-its-battlefield-data-165104853.html?src=rss

Google’s GFiber internet business is merging with Astound Broadband

Google has announced that GFiber is merging with Astound Broadband, in an agreement that sees Astound’s parent company Stonepeak become the majority owner, with Alphabet retaining a minority stake.

No financial specifics were detailed in a press release, but the new combined business will be an independent provider led by GFiber's executive team, which Google says will use its "expertise in high-speed fiber innovation to manage the combined network footprint." Astound already serves over one million customers across the US, and by joining forces, Google says, the two providers will be able to bring better internet access to more communities.

GFiber, formerly known as Google Fiber, has been around for nearly 15 years, and currently offers speeds of up to 8Gbps on its $150/month Edge 8 Gig plan. A 20 Gig service was expected to leave early access later in 2026.

The fiber broadband service is part of Alphabet’s "Other Bets" portfolio, which also includes Waymo, Verily, and Wing, a combined segment that recorded an operating loss of $16.8 billion in 2025, CNBC reports. The company’s deal with Stonepeak is subject to regulatory approval and is expected to close in Q4 of this year.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/googles-gfiber-internet-business-is-merging-with-astound-broadband-132832086.html?src=rss

Amazon wins a temporary injunction against Perplexity’s Comet browser

Amazon has secured a temporary win in its fight with Perplexity over the use of AI shopping bots. Bloomberg reported that a San Francisco federal court has determined that Perplexity must stop using its Comet web browser's AI agent to make purchases for users on Amazon's marketplace. The AI company has a week to appeal the decision; otherwise, it has been ordered to stop accessing any password-protected areas of Amazon's systems and to destroy its copies of Amazon's data while the two companies continue to argue their cases.

"Amazon has provided strong evidence that Perplexity, through its Comet browser, accesses with the Amazon user's permission but without authorization by Amazon, the user's password-protected account," District Judge Maxine Chesney wrote in placing the temporary block.

"The preliminary injunction will prevent Perplexity’s unauthorized access to the Amazon store and is an important step in maintaining a trusted shopping experience for Amazon customers," an Amazon spokesperson told Bloomberg.

Amazon sent a cease-and-desist letter to Perplexity over the AI company's shopping bots in November. According to Amazon, use of the Comet agent to make purchases is a violation of its terms of service. "Perplexity will continue to fight for the right of internet users to choose whatever AI they want," a representative from Perplexity said of this week's decision.

This article originally appeared on Engadget at https://www.engadget.com/ai/amazon-wins-a-temporary-injunction-against-perplexitys-comet-browser-184000462.html?src=rss

The Oversight Board says Meta needs new rules for AI-generated content

The Oversight Board is once again urging Meta to overhaul its rules around AI-generated content. This time, the board says Meta should create a separate rule for AI content that's independent of its misinformation policy, invest in more reliable detection tools and make better use of digital watermarks, among other changes.

The group's recommendations stem from an AI-generated video shared last year that claimed to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict in 2025. The clip, which racked up more than 700,000 views, was posted by an account that claimed to be a news outlet but was actually run by someone in the Philippines.

After the video was reported to Meta, the company declined to remove it or add a "high risk" AI label that would have clearly indicated the content had been created or manipulated with AI. The board overturned Meta's decision not to add the "high risk" label and says the case shines a light on several areas where the company's current AI rules are falling short.

"Meta must do more to address the proliferation of deceptive AI-generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what is real and fake," the board wrote in its decision. Meta eventually disabled three accounts linked to the page after the board flagged "obvious signals of deception."

One of the board's top recommendations is that Meta create a dedicated rule for AI-generated content that's separate from its misinformation policy. The rule, according to the board, should include specifics about how and when users are required to label AI content as well as information about how Meta penalizes those who break the rule. 

The board was also highly critical of how Meta uses its current "AI Info" labels, noting that the way they are applied is "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content,” especially in times of conflict or crisis. “A system overly dependent on self-disclosure of AI usage and escalated review (which occurs infrequently) to properly label this output cannot meet the challenges posed in the current environment.”

Meta, the board said, also needs to invest in more sophisticated detection technology that can reliably label AI media, including audio and video. The group added that it was "concerned" about reports that the company is "inconsistently implementing" digital watermarks on AI content created by its own AI tools. 

In a statement, Meta said it “welcomed” the decision and that it would also take action “on content that is identical and in the same context” when “it is technically and operationally possible to do so.” The company has 60 days to formally respond to its recommendations. 

The decision isn't the first time the board has been critical of Meta's handling of AI content. The group has described the company's manipulated media rules as "incoherent" on two other occasions, and has criticized it for relying on third parties, including fact-checking organizations, to flag problematic content. Meta's reliance on fact checkers and other "trusted partners" was again raised in this case, with the board saying that it had heard from these groups that Meta "is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta's internal teams." Meta, the board writes, "should be capable of conducting such assessments of harm itself, rather than rely solely on partners reaching out to them during an armed conflict."

While the Oversight Board's decision relates to a post from last year, the issue of AI-generated content during armed conflicts has taken on a new urgency during the latest conflict in the Middle East. Since the start of the US and Israel's strikes on Iran earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media. The board, which has previously hinted that it would like to work with generative AI companies, included a suggestion that would seem to apply to not just Meta. 

"The industry needs coherence in helping users distinguish deceptive AI-generated content and platforms should address abusive accounts and pages sharing such output," it wrote.

Update, March 10, 10:53AM ET: This story was updated to reflect Meta’s response to the Oversight Board.

This article originally appeared on Engadget at https://www.engadget.com/social-media/the-oversight-board-says-meta-needs-new-rules-for-ai-generated-content-100000268.html?src=rss

You can (sort of) block Grok from editing your uploaded photos

People can now block xAI's Grok chatbot from creating modifications of their uploaded images on the social network X. Neither X nor xAI, both Elon Musk-owned businesses, has made a public announcement about the feature, which users began noticing in the iOS app's image/video upload menu over the past few days.

This option is likely a response to Grok's latest scandal, which began at the start of 2026 when the addition of image generation tools to the chatbot saw about 3 million sexualized or nudified images created. An estimated 23,000 of the images made in that 11-day period contained sexualized images of children, according to the Center for Countering Digital Hate. Grok is now facing two separate investigations by regulators in the EU over the issue.

The positive side of the recent addition is that X and xAI have taken a step toward limiting inappropriate uses of Grok. The block is a simple toggle, and it hasn't been buried in the UI. So that's nice.

The negative side, however, is that this is a token gesture that doesn't amount to any serious improvement in how Grok works or can be used. It's great that the chatbot won't alter the file uploaded by one person, but as reported by The Verge, the block only prevents tagging Grok in a reply to create an image edit. There are plenty of workarounds for those dedicated individuals who insist on being able to use generative AI to undress people without their consent or knowledge.

Hopefully xAI has more powerful protective tools in the works. The limitations on Grok putting real people in scanty clothing that X announced in January seem to have had only partial success at best. If this additional and narrow use case is all the company offers, then its claims of being a zero-tolerance space for nonconsensual nudity are going to ring hollow. Especially since, as we noted at the time, xAI could stop allowing image generation entirely until the issue is properly and thoroughly fixed.

This article originally appeared on Engadget at https://www.engadget.com/ai/you-can-sort-of-block-grok-from-editing-your-uploaded-photos-215356117.html?src=rss

Bluesky’s CEO is stepping down after nearly 5 years

Bluesky CEO Jay Graber, who has led the upstart social platform since 2021, is stepping down from her role as its top executive. Toni Schneider, who has been an advisor to and investor in Bluesky, will take over the job temporarily while Graber stays on as Chief Innovation Officer.

"As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things," Graber wrote in a blog post. Schneider, who was previously CEO at WordPress parent Automattic, will be that "experienced operator and leader" while Bluesky's board searches for a permanent CEO, she said.

Graber's history with Bluesky dates back to its early days as a side project at Jack Dorsey's Twitter. She was officially brought on as CEO in 2021 as Bluesky spun off into an independent company (it officially ended its association with Twitter in 2022 and Dorsey cut ties with Bluesky in 2024). She led the company through its launch and early viral success as it grew from an invitation-only platform to the 43 million-user service it is today. During that time, she's become known as an advocate for decentralized social media and for trolling Mark Zuckerberg's t-shirt choices. 

Nearly three years since it launched publicly, Bluesky has carved out a small but influential niche in the post-Twitter social landscape. The platform is less than a third of the size of Meta's competitor, Threads, which has also copied some of Bluesky's signature features. Bluesky also has yet to roll out any meaningful monetization features, though it has teased a premium subscription service in the past.

As Chief Innovation Officer, Graber will presumably still be an influential voice at the company going forward. And, as Wired points out, she still has a seat on Bluesky's board so she will get some say in who steps into the role permanently. Until then, Schneider, who is also a partner at VC firm Tre Ventures, will lead the company. "I deeply believe in what this team has built and the open social web they're fighting for," he wrote in a post on Bluesky. 


This article originally appeared on Engadget at https://www.engadget.com/social-media/blueskys-ceo-is-stepping-down-after-nearly-5-years-201900960.html?src=rss

OpenAI is reportedly pushing back the launch of its ‘adult mode’ even further

Here comes another disappointment for ChatGPT users. As first reported by Sources' Alex Heath, OpenAI is yet again delaying its "adult mode" for ChatGPT. A company spokesperson told Heath that "we're pushing out the launch of adult mode so we can focus on work that is a higher priority for more users right now."

More specifically, OpenAI's spokesperson said that things like "gains in intelligence, personality improvements, personalization, and making the experience more proactive" were being prioritized instead. The company still wants to release an adult mode, but it will "take more time," according to the spokesperson.

The reveal of ChatGPT's adult mode dates back to October, when OpenAI's CEO, Sam Altman, posted on X that the company would roll out more age-gating as part of its "treat adults like adults" principle, adding that this would include "erotica for verified adults." Altman originally said this adult mode would be available in December, but an OpenAI exec later said during a December briefing that it would instead debut in the first quarter of 2026. 

With Q1 drawing to a close, we no longer have a timeframe for when ChatGPT's adult mode will be released. However, OpenAI began rolling out its age prediction tool in January, which may go hand in hand with the upcoming adult mode.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-is-reportedly-pushing-back-the-launch-of-its-adult-mode-even-further-213013801.html?src=rss