Meta is bringing more international news to its AI

Meta AI should soon be better at surfacing international news content thanks to a set of new deals with publishers. The company announced new agreements with international outlets and offered additional details on its recent deal with News Corp. 

The latest deals bring French newspaper Le Figaro, Spanish media company Prisa and German newspaper Süddeutsche Zeitung into the fold. Together, along with News Corp, which runs a number of outlets in the UK, these sources should give Meta AI better access to timely info about world events. Meta didn't disclose terms of the deals — The Wall Street Journal previously reported the News Corp arrangement was worth up to $50 million a year — but it said that it intends to link out to the relevant news sources.

"These integrations will also facilitate easier access to information by linking out to articles, allowing you to visit these partners’ websites for more details while providing value to partners, enabling them to reach new audiences," Meta wrote in an update. The company has a long and sometimes fraught history with publishers as its priorities have shifted over the years. In the past, Meta has struck deals to pay publishers to produce live video and "instant articles" only to change course as news content has become less of a priority for Facebook.

Now, with Meta struggling to compete with its AI rivals, it seems the social media company is once again interested in news content. As the company notes in its blog post, Meta AI isn't always great at surfacing accurate and timely info. I noted this in 2024 when the company's assistant was repeatedly unable to accurately answer seemingly simple questions like "who is the Speaker of the House of Representatives?"

By striking a bunch of deals with publishers, the company should be better equipped to handle these kinds of queries (and hopefully more complex ones). How much benefit publishers will see from these arrangements, however, is an open question. While Meta says it will link out to the relevant news sources, there are lots of outside data points that raise serious questions about the effect AI search tools are having on web traffic.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-bringing-more-international-news-to-its-ai-213323713.html?src=rss

Meta is killing end-to-end encryption in Instagram DMs

Meta is killing end-to-end encryption in Instagram DMs. The feature will "no longer be supported after May 8, 2026," the company wrote in an update on its support page. Unlike WhatsApp, Meta never made encryption available to all Instagram users and it was never a default setting. Instead, users in "some areas" had the ability to opt in to encryption on a per-chat basis.

In a statement, a Meta spokesperson said the feature was being retired due to low adoption. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," the spokesperson said. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp.”

Interestingly, Meta's statement doesn't mention the status of encryption on Messenger. The company began turning on end-to-end encryption as a default setting in 2023 after years of work on the feature. A support page for Messenger currently states that the company "is in the process of securing personal messages with end-to-end encryption by default."

Meta's approach to encrypted messaging has changed several times over the years. It started encrypting WhatsApp chats in 2016. In 2019, Mark Zuckerberg outlined a "privacy-focused" revamp of the company's apps, saying at the time that "implementing end-to-end encryption for all private communications is the right thing to do." In 2021, the company's head of safety said that Meta was delaying its encryption work until 2023 in order to create stronger safety features.  

Meta’s use of encryption has been repeatedly criticized by law enforcement and some child safety organizations that say the feature makes it harder to catch predators who target children on social media. Recently, the topic has been raised numerous times during a trial in New Mexico over child safety. Internal documents that have surfaced as part of the trial show Meta executives and researchers debating the trade-offs between safety and privacy as it relates to encryption. 

In testimony that was broadcast during the trial, Zuckerberg said that safety issues were "a large part of the reason why it took so long" to bring encryption to Messenger. "There's been debate about this, but I think the majority of folks, from people who use our products to people who are involved in security overall, believe that strong encryption is positive," he said.


This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-killing-end-to-end-encryption-in-instagram-dms-195207421.html?src=rss

X could be breaching US sanctions on Iran, watchdog warns

The newly verified X account for Iran's supreme leader could be putting the company on the wrong side of US sanctions, according to a watchdog group. The Tech Transparency Project, which last month published a report on X granting premium perks to sanctioned officials in Iran, now says that the verified account for the country's new leader raises fresh questions about the issue. 

The TTP notes that the X account for Iran's new supreme leader, Mojtaba Khamenei, appears to be paying for an X premium subscription despite being on the US government's list of sanctioned individuals since 2019. As the group points out, the Iran-based account was created this month and currently bears a blue checkmark, which typically indicates the account holder is paying for a subscription. 

The account belonging to Mojtaba Khamenei has been boosted by other state-linked accounts in Iran, including the one that previously belonged to Khamenei's father. That account has a gray checkmark, which indicates it belongs to a verified government official. Verified accounts on X are rewarded with extra visibility on the platform, along with other perks. The younger Khamenei's verified account has already gained more than 20,000 new followers in the hours since TTP first posted about it.

"The new Supreme Leader's account is just the latest account for a sanctioned entity apparently paying X for premium services," TTP director Katie Paul said in a statement to Engadget. "TTP has identified dozens of accounts, many linked to designated terrorists, that subscribed to X premium over the past three years. What's more concerning than the blatant disregard for U.S. sanctions law is the fact that Musk's companies have a contract with the Pentagon while X is actively profiting from U.S. adversaries."

As Paul notes, this isn't the first time TTP has raised questions about whether X is running afoul of US sanctions via its premium service. In 2024, the group published a report noting that X was accepting paid verification from more than two dozen sanctioned individuals and groups. The company said at the time that it had "a robust and secure approach in place for our monetization features."

X didn't respond to a request for comment. But in the hours after Engadget reached out about Khamenei’s account, the blue checkmark was removed. The company also removed blue checks from a handful of Iran-based accounts flagged by TTP last month following reporting from Wired.

Update, March 13, 2026, 9:08AM PT: This story was updated to reflect changes made to Khamenei’s account following publication.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-could-be-breaching-us-sanctions-on-iran-watchdog-warns-213550284.html?src=rss

Meta is testing clickable links in Instagram captions for verified subscribers

Instagram has long limited users' ability to share links, restricting link-sharing to Stories, Reels and user profiles. But that might now be changing. The company has started to test clickable links inside of post captions for subscribers to Meta Verified. 

The new feature, which has been a long-requested update from creators, was spotted by blogger Andrea Valeria, who posted screenshots of a clickable Substack link she was able to add to an Instagram post. According to Valeria, an in-app message indicated she could share up to 10 links a month.

Meta confirmed to Engadget that it's testing links in captions for subscribers to Meta Verified, but didn't provide details on how many people have access to the feature or if it will be widely available. It does seem to be somewhat limited, however, as the link on Valeria's post appears on Instagram's mobile app, but not when viewing the same post on Instagram's website.

Instagram's restrictions on link-sharing have been a notable part of the platform since its early days. The limitation helped kickstart an entire industry of "link in bio" platforms like Linktree, which help creators direct followers to off-platform websites based on what they share on Instagram. If Meta begins implementing the feature widely, it could drastically change how creators are able to interact with their followers (although a 10-link-per-month limit would likely still require "link in bio" solutions).

The test is also the latest way that Meta has experimented with making link-sharing a paid feature. The company has also recently tested restricting creators' ability to share links on Facebook by requiring a Meta Verified subscription. Meta Verified for creators starts at $14.99 a month, with the most expensive plans costing $499.99 a month. 


This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-testing-clickable-links-in-instagram-captions-for-verified-subscribers-184555406.html?src=rss

The Oversight Board says Meta needs new rules for AI-generated content

The Oversight Board is once again urging Meta to overhaul its rules around AI-generated content. This time, the board says Meta should create a separate rule for AI content that's independent of its misinformation policy, invest in more reliable detection tools and make better use of digital watermarks, among other changes.

The group's recommendations stem from an AI-generated video shared last year that claimed to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict in 2025. The clip, which racked up more than 700,000 views, was posted by an account that claimed to be a news outlet but was actually run by someone in the Philippines.

After the video was reported to Meta, the company declined to remove it or add a "high risk" AI label that would have clearly indicated the content had been created or manipulated with AI. The board overturned Meta's decision not to add the "high risk" label and says the case shines a light on several areas where the company's current AI rules are falling short.

"Meta must do more to address the proliferation of deceptive AI-generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what is real and fake," the board wrote in its decision. Meta eventually disabled three accounts linked to the page after the board flagged "obvious signals of deception."

One of the board's top recommendations is that Meta create a dedicated rule for AI-generated content that's separate from its misinformation policy. The rule, according to the board, should include specifics about how and when users are required to label AI content as well as information about how Meta penalizes those who break the rule. 

The board was also highly critical of how Meta uses its current "AI Info" labels, noting that the way they are applied is "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content,” especially in times of conflict or crisis. “A system overly dependent on self-disclosure of AI usage and escalated review (which occurs infrequently) to properly label this output cannot meet the challenges posed in the current environment.”

Meta, the board said, also needs to invest in more sophisticated detection technology that can reliably label AI media, including audio and video. The group added that it was "concerned" about reports that the company is "inconsistently implementing" digital watermarks on AI content created by its own AI tools. 

In a statement, Meta said it “welcomed” the decision and that it would also take action “on content that is identical and in the same context” when “it is technically and operationally possible to do so.” The company has 60 days to formally respond to its recommendations. 

The decision isn't the first time the board has been critical of Meta's handling of AI content. The group has described the company's manipulated media rules as "incoherent" on two other occasions, and has criticized it for relying on third parties, including fact-checking organizations, to flag problematic content. Meta's reliance on fact checkers and other "trusted partners" was again raised in this case, with the board saying that it had heard from these groups that Meta "is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta’s internal teams." Meta, the board writes, "should be capable of conducting such assessments of harm itself, rather than rely solely on partners reaching out to them during an armed conflict."

While the Oversight Board's decision relates to a post from last year, the issue of AI-generated content during armed conflicts has taken on a new urgency during the latest conflict in the Middle East. Since the start of the US and Israel's strikes on Iran earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media. The board, which has previously hinted that it would like to work with generative AI companies, included a suggestion that would seem to apply to not just Meta. 

"The industry needs coherence in helping users distinguish deceptive AI-generated content and platforms should address abusive accounts and pages sharing such output," it wrote.

Update, March 10, 10:53AM ET: This story was updated to reflect Meta’s response to the Oversight Board.

This article originally appeared on Engadget at https://www.engadget.com/social-media/the-oversight-board-says-meta-needs-new-rules-for-ai-generated-content-100000268.html?src=rss

Bluesky’s CEO is stepping down after nearly 5 years

Bluesky CEO Jay Graber, who has led the upstart social platform since 2021, is stepping down from her role as its top executive. Toni Schneider, who has been an advisor and investor in Bluesky, will take over the job temporarily while Graber stays on as Chief Innovation Officer. 

"As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things," Graber wrote in a blog post. Schneider, who was previously CEO at WordPress parent Automattic, will be that "experienced operator and leader" while Bluesky's board searches for a permanent CEO, she said.

Graber's history with Bluesky dates back to its early days as a side project at Jack Dorsey's Twitter. She was officially brought on as CEO in 2021 as Bluesky spun off into an independent company (it officially ended its association with Twitter in 2022 and Dorsey cut ties with Bluesky in 2024). She led the company through its launch and early viral success as it grew from an invitation-only platform to the 43 million-user service it is today. During that time, she's become known as an advocate for decentralized social media and for trolling Mark Zuckerberg's t-shirt choices. 

Nearly three years since it launched publicly, Bluesky has carved out a small but influential niche in the post-Twitter social landscape. The platform is less than a third of the size of Meta's competitor, Threads, which has also copied some of Bluesky's signature features. Bluesky also has yet to roll out any meaningful monetization features, though it has teased a premium subscription service in the past.

As Chief Innovation Officer, Graber will presumably still be an influential voice at the company going forward. And, as Wired points out, she still has a seat on Bluesky's board, so she will get some say in who steps into the role permanently. Until then, Schneider, who is also a partner at VC firm True Ventures, will lead the company. "I deeply believe in what this team has built and the open social web they're fighting for," he wrote in a post on Bluesky.


This article originally appeared on Engadget at https://www.engadget.com/social-media/blueskys-ceo-is-stepping-down-after-nearly-5-years-201900960.html?src=rss

Meta hit with a class action lawsuit over smart glasses’ privacy claims

Meta is facing a class action lawsuit for false advertising related to its AI glasses following reports about the company's use of human contractors to review footage captured from users' glasses. The lawsuit, filed Wednesday in federal court in San Francisco, alleges that Meta's claims about the devices' privacy features have misled users. 

The lawsuit comes after a Swedish newspaper reported that subcontractors in Kenya have raised concerns about viewing footage recorded via Ray-Ban Meta glasses. According to Svenska Dagbladet, workers have reported witnessing "intimate" material, including bathroom visits, sexual encounters and other private details as part of their job labeling objects in videos captured on users' smart glasses.

"This nationwide class action seeks to hold Meta responsible for its affirmatively false advertising and failure to disclose the true nature of surveillance and its connection to the company’s AI data collection pipeline," the lawsuit, filed by Clarkson Law Firm, states. The filing names two individuals who live in California and New Jersey who purchased Meta's smart glasses. It says that both "relied" on Meta's marketing claims about the glasses' privacy protecting features and that they would not have purchased them if they knew about the company's use of contractors. The lawsuit seeks monetary damages and injunctive relief.

A spokesperson for Meta confirmed to Engadget that data from its smart glasses can be shared with human contractors in some cases. The company declined to comment on the claims in the lawsuit.

"Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you," the spokesperson said. "Unless users choose to share media they've captured with Meta or others, that media stays on the user's device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed."

What the company doesn't explicitly say there is that there is no way to use the smart glasses' "multimodal" features without sharing the captures of your surroundings with the company. As I noted in my review of the second-generation Ray-Ban Meta smart glasses last year: "images of your surroundings processed for the glasses' multimodal features like Live AI can be used for training purposes (these images aren't saved to your device's camera roll)." 

So while Meta claims that users' own recordings are kept private, footage that is captured but not stored locally for users — like video when Live AI is in use — can be sent to contractors who help train the company's AI models. Meta's privacy policy doesn't specifically mention the use of human contractors, though it states that such data can be used for training purposes. 

"The undisclosed human review pipeline renders the Meta AI Glasses’ privacy features materially misleading, transforms the product from a personal device into a surveillance conduit, and exposes consumers to unreasonable risks of dignitary harm, emotional distress, stalking, extortion, identity theft, and reputational injury," the lawsuit says. "Indeed, Meta employees and contractors have described viewing credit card numbers, nudity, sexual activity, and identifiable faces in the footage they reviewed, and reported that Meta’s purported anonymization safeguards do not reliably function."

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-hit-with-a-class-action-lawsuit-over-smart-glasses-privacy-claims-182846817.html?src=rss

Mark Zuckerberg downplays Meta’s own research in New Mexico child safety trial

Jurors in a New Mexico child safety trial heard testimony from Meta CEO Mark Zuckerberg today. During pre-recorded testimony, Zuckerberg was repeatedly asked about the company's understanding of social media addiction and other issues that had been studied by its researchers. 

During the deposition, which was recorded last March, Zuckerberg was asked about numerous findings from researchers at Meta who studied how the company's apps affect users and teens. The CEO downplayed the significance of many of these documents.

Early in the testimony, which was viewed by Engadget on Courtroom View Network, Zuckerberg was questioned about a document on the effect of feedback on Facebook users. The document stated that "contributors on Facebook are likely to learn to associate the act of posting with feedback" which will "lead contributors to seek rewards by visiting the site more often.” Zuckerberg said he wasn’t “sure if that's actually how it works in practice, but I agree that you're summarizing what they appear to be saying.”

Later, the CEO was questioned about a document that graphed the proportion of 11 and 12-year-olds who were monthly active users on Instagram. The chart indicated that at the time, around 20 percent of 11-year-olds were monthly users of the service. "I agree that the graph says that, I am not familiar with what methodology we were using to estimate this," Zuckerberg said. "I assume that if we had direct knowledge that any given person was under the age of 13, that we would have them removed from our services."

New Mexico's attorney general sued the company in 2023 for alleged lapses in child safety, including facilitating predators' access to minors and building features it knew were addictive. In court, Meta's lawyers and executives have disputed the idea that social media should be considered an "addiction." In public statements, the company has said that lawsuits have relied on "cherry-picked quotes and snippets of conversations taken out of context" and that it "has consistently put teen safety ahead of growth for over a decade."

As with his recent testimony in a separate trial over social media addiction in Los Angeles, Zuckerberg repeatedly rejected the "characterization" of questions that were posed to him. And he said that Meta's goal was to make its apps "useful" rather than to increase the amount of time people spend with them. 

Zuckerberg was also questioned about a document written by a company researcher that stated "there is increasing scientific evidence, particularly in the US, … that the average net effect of Facebook on people's well being is slightly negative." The CEO said that "my understanding is that the general consensus view is not that."

It's not the first time a Meta executive has tried to downplay the significance of internal research. The company used a similar strategy in 2021 after former employee turned whistleblower Frances Haugen disclosed documents showing that Facebook's researchers had found that Instagram made some teen girls feel worse about themselves.

Zuckerberg's testimony was played one day after jurors heard recorded testimony from Instagram chief Adam Mosseri. The exec was also asked about Haugen's disclosures and Meta's response to them. Some of those disclosures were based on "problematic research," he said. "Most research is surveys. We run hundreds of surveys every month."

This article originally appeared on Engadget at https://www.engadget.com/social-media/mark-zuckerberg-downplays-metas-own-research-in-new-mexico-child-safety-trial-222924340.html?src=rss

Meta signs a multimillion dollar AI licensing deal with News Corp

Meta has signed an AI licensing deal with News Corp that will allow the Meta AI maker to use content from The Wall Street Journal and other brands in its chatbot responses and for training of its AI models. News Corp confirmed to Engadget that it had struck a deal with Meta, but didn't provide specifics on the terms of the arrangement. According to The Wall Street Journal, Meta will pay News Corp "up to $50 million a year" for a three-year deal that covers content from The Journal, as well as the media giant's other brands in the US and UK.

News Corp previously struck a five-year deal with OpenAI that was valued at around $250 million. During a recent appearance at Morgan Stanley's annual Technology, Media & Telecom (TMT) conference, News Corp CEO Robert Thomson hinted that the media company was in the "advanced stage with other negotiations."

He described the company's overall approach to such arrangements as "a woo and a sue" strategy, depending on whether companies want to pay for content or scrape it without permission. "We have what you might call a woo and a sue strategy," he said. "We'll woo you. We'd like you to be our partner. But if you're stealing our stuff, we are going to sue you. So there'll be a discount for those who hand themselves in, and there'll be a penalty for those that resist."

A spokesperson for Meta confirmed that the two companies had reached an agreement. The company, which has been reorganizing its AI teams as it looks to create its next model, has struck a number of licensing deals in recent months. It previously signed multi-year agreements with USA Today, People, CNN, Fox News and other outlets. The company said at the time that “by integrating more and different types of news sources, our aim is to improve Meta AI’s ability to deliver timely and relevant content and information with a wide variety of viewpoints and content types.”

Update, March 3, 2026, 4:18PM PT: This story was updated with additional information from a Meta spokesperson.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-signs-a-multimillion-dollar-ai-licensing-deal-with-news-corp-234157902.html?src=rss

Meta sues advertisers in Brazil and China over ‘celeb bait’ scams

Meta has sued the people and groups behind three scam operations that used images and deepfakes of celebrities to lure users to scam websites. According to the company, the three entities were based in China and Brazil and targeted people in the US, Japan and other countries. The ads promoted fraudulent investment schemes and fake health products.

Meta said that it had filed lawsuits against several people in Brazil who promoted fake or unapproved healthcare products, as well as online courses promoting those products. The company also sued a China-based entity it says used ads featuring celebrities "as part of a larger fraud scheme that lured people into joining so-called investment groups." The company didn't provide details on how many ads these groups had run on Facebook, how many social media users had seen or interacted with the ads or how long the scammers had been operating on the platform.

So-called "celeb bait" ads have been a long-running issue for the company. Engadget has previously documented celeb bait scams on Facebook, including ones that frequently use Elon Musk and Fox News personalities to hawk fake cures for diabetes. The Oversight Board has also criticized the company for not doing enough to combat such scams. In its update, Meta says that "because scam ads are designed to look real, they’re not always easy to detect." The company also noted that it has now enrolled "more than 500,000" celebrities and public figures into its facial recognition system that's meant to automatically detect scam ads using the faces of famous people. 

Meta's handling of scammy advertisers has come under increased scrutiny in recent months after Reuters reported that researchers at the company at one point estimated that as much as 10 percent of its ad revenue could be coming from scams and banned products. The fact that Meta has made billions of dollars from problematic advertisers has also caused the company to be slow to take action against repeat offenders.

In addition to the groups behind the celeb bait ads, Meta says that it's upgraded its ability to detect scam ads that use cloaking, which has at times hindered its internal review systems. The company also sued a Vietnam-based advertiser it says used scam ads to hawk "deeply discounted items from well-known brands," including Longchamp.

Meta also took legal action against eight former "Meta Business Partners" who promoted "un-ban" or other "account restoration services." The company says it will "consider taking additional legal action, including litigation, if they don’t comply" with cease and desist orders.

Update, February 26, 2026, 1:16PM PT: This story was updated to specify that Meta’s internal estimates around ad revenue included scams and banned products.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-sues-advertisers-in-brazil-and-china-over-celeb-bait-scams-190000268.html?src=rss