Meta is testing clickable links in Instagram captions for verified subscribers

Instagram has long limited users' ability to share links, restricting link-sharing to Stories, Reels and user profiles. But that might now be changing. The company has started to test clickable links inside post captions for subscribers to Meta Verified. 

The new feature, which has been a long-requested update from creators, was spotted by blogger Andrea Valeria, who posted screenshots of a clickable Substack link she was able to add to an Instagram post. According to Valeria, an in-app message indicated she could share up to 10 links a month.

Meta confirmed to Engadget that it's testing links in captions for subscribers to Meta Verified, but didn't provide details on how many people have access to the feature or if it will be widely available. It does seem to be somewhat limited, however: the link on Valeria's post appears in Instagram's mobile app, but not when viewing the same post on Instagram's website. 

Instagram's restrictions on link-sharing have been a notable part of the platform since its early days. The limitation helped kickstart an entire industry of "link in bio" platforms like Linktree, which help creators direct followers to off-platform websites based on what they share on Instagram. If Meta begins implementing the feature widely, it could drastically change how creators are able to interact with their followers (although a 10-link-per-month limit would likely still require "link in bio" solutions). 

The test is also the latest way that Meta has experimented with making link-sharing a paid feature. The company has also recently tested restricting creators' ability to share links on Facebook by requiring a Meta Verified subscription. Meta Verified for creators starts at $14.99 a month, with the most expensive plans costing $499.99 a month. 


This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-testing-clickable-links-in-instagram-captions-for-verified-subscribers-184555406.html?src=rss

The Oversight Board says Meta needs new rules for AI-generated content

The Oversight Board is once again urging Meta to overhaul its rules around AI-generated content. This time, the board says Meta should create a separate rule for AI content that's independent of its misinformation policy, invest in more reliable detection tools and make better use of digital watermarks, among other changes. 

The group's recommendations stem from an AI-generated video shared last year that claimed to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict in 2025. The clip, which racked up more than 700,000 views, was posted by an account that claimed to be a news outlet but was actually run by someone in the Philippines.

After the video was reported to Meta, the company declined to remove it or add a "high risk" AI label that would have clearly indicated the content had been created or manipulated with AI. The board overturned Meta's decision not to add the "high risk" label and says the case shines a light on several areas where the company's current AI rules are falling short.

"Meta must do more to address the proliferation of deceptive AI-generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what is real and fake," the board wrote in its decision. Meta eventually disabled three accounts linked to the page after the board flagged "obvious signals of deception."

One of the board's top recommendations is that Meta create a dedicated rule for AI-generated content that's separate from its misinformation policy. The rule, according to the board, should include specifics about how and when users are required to label AI content as well as information about how Meta penalizes those who break the rule. 

The board was also highly critical of how Meta uses its current "AI Info" labels, noting that the way they are applied is "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content,” especially in times of conflict or crisis. “A system overly dependent on self-disclosure of AI usage and escalated review (which occurs infrequently) to properly label this output cannot meet the challenges posed in the current environment.”

Meta, the board said, also needs to invest in more sophisticated detection technology that can reliably label AI media, including audio and video. The group added that it was "concerned" about reports that the company is "inconsistently implementing" digital watermarks on AI content created by its own AI tools. 

In a statement, Meta said it “welcomed” the decision and that it would also take action “on content that is identical and in the same context” when “it is technically and operationally possible to do so.” The company has 60 days to formally respond to the board's recommendations. 

The decision isn't the first time the board has been critical of Meta's handling of AI content. The group has described the company's manipulated media rules as "incoherent" on two other occasions, and has criticized it for relying on third parties, including fact-checking organizations, to flag problematic content. Meta's reliance on fact checkers and other "trusted partners" was again raised in this case, with the board saying that it had heard from these groups that Meta "is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta’s internal teams." Meta, the board writes, "should be capable of conducting such assessments of harm itself, rather than rely solely on partners reaching out to them during an armed conflict."

While the Oversight Board's decision relates to a post from last year, the issue of AI-generated content during armed conflicts has taken on a new urgency during the latest conflict in the Middle East. Since the start of the US and Israel's strikes on Iran earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media. The board, which has previously hinted that it would like to work with generative AI companies, included a suggestion that would seem to apply to not just Meta. 

"The industry needs coherence in helping users distinguish deceptive AI-generated content and platforms should address abusive accounts and pages sharing such output," it wrote.

Update, March 10, 10:53AM ET: This story was updated to reflect Meta’s response to the Oversight Board.

This article originally appeared on Engadget at https://www.engadget.com/social-media/the-oversight-board-says-meta-needs-new-rules-for-ai-generated-content-100000268.html?src=rss

Bluesky’s CEO is stepping down after nearly 5 years

Bluesky CEO Jay Graber, who has led the upstart social platform since 2021, is stepping down from her role as its top executive. Toni Schneider, who has been an advisor and investor in Bluesky, will take over the job temporarily while Graber stays on as Chief Innovation Officer. 

"As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things," Graber wrote in a blog post. Schneider, who was previously CEO at WordPress parent Automattic, will be that "experienced operator and leader" while Bluesky's board searches for a permanent CEO, she said.

Graber's history with Bluesky dates back to its early days as a side project at Jack Dorsey's Twitter. She was officially brought on as CEO in 2021 as Bluesky spun off into an independent company (it officially ended its association with Twitter in 2022 and Dorsey cut ties with Bluesky in 2024). She led the company through its launch and early viral success as it grew from an invitation-only platform to the 43 million-user service it is today. During that time, she's become known as an advocate for decentralized social media and for trolling Mark Zuckerberg's t-shirt choices. 

Nearly three years since it launched publicly, Bluesky has carved out a small but influential niche in the post-Twitter social landscape. The platform is less than a third of the size of Meta's competitor, Threads, which has also copied some of Bluesky's signature features. Bluesky also has yet to roll out any meaningful monetization features, though it has teased a premium subscription service in the past.

As Chief Innovation Officer, Graber will presumably still be an influential voice at the company going forward. And, as Wired points out, she still has a seat on Bluesky's board, so she will get some say in who steps into the role permanently. Until then, Schneider, who is also a partner at VC firm True Ventures, will lead the company. "I deeply believe in what this team has built and the open social web they're fighting for," he wrote in a post on Bluesky. 


This article originally appeared on Engadget at https://www.engadget.com/social-media/blueskys-ceo-is-stepping-down-after-nearly-5-years-201900960.html?src=rss

Meta hit with a class action lawsuit over smart glasses’ privacy claims

Meta is facing a class action lawsuit for false advertising related to its AI glasses following reports about the company's use of human contractors to review footage captured from users' glasses. The lawsuit, filed Wednesday in federal court in San Francisco, alleges that Meta's claims about the devices' privacy features have misled users. 

The lawsuit comes after a Swedish newspaper reported that subcontractors in Kenya have raised concerns about viewing footage recorded via Ray-Ban Meta glasses. According to Svenska Dagbladet, workers have reported witnessing "intimate" material, including bathroom visits, sexual encounters and other private details as part of their job labeling objects in videos captured on users' smart glasses.

"This nationwide class action seeks to hold Meta responsible for its affirmatively false advertising and failure to disclose the true nature of surveillance and its connection to the company’s AI data collection pipeline," the lawsuit, filed by Clarkson Law Firm, states. The filing names two individuals who live in California and New Jersey who purchased Meta's smart glasses. It says that both "relied" on Meta's marketing claims about the glasses' privacy protecting features and that they would not have purchased them if they knew about the company's use of contractors. The lawsuit seeks monetary damages and injunctive relief.

A spokesperson for Meta confirmed to Engadget that data from its smart glasses can be shared with human contractors in some cases. The company declined to comment on the claims in the lawsuit.

"Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you," the spokesperson said. "Unless users choose to share media they've captured with Meta or others, that media stays on the user's device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people's experience, as many other companies do. We take steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed."

What the company doesn't explicitly say there is that there is no way to use the smart glasses' "multimodal" features without sharing the captures of your surroundings with the company. As I noted in my review of the second-generation Ray-Ban Meta smart glasses last year: "images of your surroundings processed for the glasses' multimodal features like Live AI can be used for training purposes (these images aren't saved to your device's camera roll)." 

So while Meta claims that users' own recordings are kept private, footage that is captured but not stored locally for users — like video when Live AI is in use — can be sent to contractors who help train the company's AI models. Meta's privacy policy doesn't specifically mention the use of human contractors, though it states that such data can be used for training purposes. 

"The undisclosed human review pipeline renders the Meta AI Glasses’ privacy features materially misleading, transforms the product from a personal device into a surveillance conduit, and exposes consumers to unreasonable risks of dignitary harm, emotional distress, stalking, extortion, identity theft, and reputational injury," the lawsuit says. "Indeed, Meta employees and contractors have described viewing credit card numbers, nudity, sexual activity, and identifiable faces in the footage they reviewed, and reported that Meta’s purported anonymization safeguards do not reliably function."

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-hit-with-a-class-action-lawsuit-over-smart-glasses-privacy-claims-182846817.html?src=rss

Mark Zuckerberg downplays Meta’s own research in New Mexico child safety trial

Jurors in a New Mexico child safety trial heard testimony from Meta CEO Mark Zuckerberg today. During pre-recorded testimony, Zuckerberg was repeatedly asked about the company's understanding of social media addiction and other issues that had been studied by its researchers. 

During the deposition, which was recorded last March, Zuckerberg was asked about numerous findings from researchers at Meta who studied how the company's apps affect users and teens. The CEO downplayed the significance of many of these documents.

Early in the testimony, which was viewed by Engadget on Courtroom View Network, Zuckerberg was questioned about a document on the effect of feedback on Facebook users. The document stated that "contributors on Facebook are likely to learn to associate the act of posting with feedback" which will "lead contributors to seek rewards by visiting the site more often.” Zuckerberg said he wasn’t “sure if that's actually how it works in practice, but I agree that you're summarizing what they appear to be saying.”

Later, the CEO was questioned about a document that graphed the proportion of 11 and 12-year-olds who were monthly active users on Instagram. The chart indicated that at the time, around 20 percent of 11-year-olds were monthly users of the service. "I agree that the graph says that, I am not familiar with what methodology we were using to estimate this," Zuckerberg said. "I assume that if we had direct knowledge that any given person was under the age of 13, that we would have them removed from our services."

New Mexico's attorney general sued the company in 2023 for alleged lapses in child safety, including facilitating predators' access to minors and building features it knew were addictive. In court, Meta's lawyers and executives have disputed the idea that social media should be considered an "addiction." In public statements, the company has said that lawsuits have relied on "cherry-picked quotes and snippets of conversations taken out of context" and that it "has consistently put teen safety ahead of growth for over a decade."

As with his recent testimony in a separate trial over social media addiction in Los Angeles, Zuckerberg repeatedly rejected the "characterization" of questions that were posed to him. And he said that Meta's goal was to make its apps "useful" rather than to increase the amount of time people spend with them. 

Zuckerberg was also questioned about a document written by a company researcher that stated "there is increasing scientific evidence, particularly in the US, … that the average net effect of Facebook on people's well being is slightly negative." The CEO said that "my understanding is that the general consensus view is not that."

It's not the first time a Meta executive has tried to downplay the significance of internal research. The company used a similar strategy in 2021 after former employee turned whistleblower Frances Haugen disclosed documents showing that Facebook's researchers had found that Instagram made some teen girls feel worse about themselves.

Zuckerberg's testimony was played one day after jurors heard recorded testimony from Instagram chief Adam Mosseri. The exec was also asked about Haugen's disclosures and Meta's response to them. Some of those disclosures were based on "problematic research," he said. "Most research is surveys. We run hundreds of surveys every month."

This article originally appeared on Engadget at https://www.engadget.com/social-media/mark-zuckerberg-downplays-metas-own-research-in-new-mexico-child-safety-trial-222924340.html?src=rss

Meta signs a multimillion dollar AI licensing deal with News Corp

Meta has signed an AI licensing deal with News Corp that will allow the Meta AI maker to use content from The Wall Street Journal and other brands in its chatbot responses and for training of its AI models. News Corp confirmed to Engadget that it had struck a deal with Meta, but didn't provide specifics on the terms of the arrangement. According to The Wall Street Journal, Meta will pay News Corp. "up to $50 million a year" for a three-year deal that covers content from The Journal, as well as the media giant's other brands in the US and UK. 

News Corp previously struck a five-year deal with OpenAI that was valued at around $250 million. During a recent appearance at Morgan Stanley's annual Technology, Media & Telecom (TMT) conference, News Corp CEO Robert Thomson hinted that the media company was in the "advanced stage with other negotiations."

He described the company's overall approach to such arrangements as "a woo and a sue" strategy, depending on whether companies want to pay for content or scrape it without permission. "We have what you might call a woo and a sue strategy," he said. "We'll woo you. We'd like you to be our partner. But if you're stealing our stuff, we are going to sue you. So there'll be a discount for those who hand themselves in, and there'll be a penalty for those that resist."

A spokesperson for Meta confirmed that the two companies had reached an agreement. The company, which has been reorganizing its AI teams as it looks to create its next model, has struck a number of licensing deals in recent months. It previously signed multi-year agreements with USA Today, People, CNN, Fox News and other outlets. The company said at the time that “by integrating more and different types of news sources, our aim is to improve Meta AI’s ability to deliver timely and relevant content and information with a wide variety of viewpoints and content types.”

Update, March 3, 2026, 4:18PM PT: This story was updated with additional information from a Meta spokesperson.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-signs-a-multimillion-dollar-ai-licensing-deal-with-news-corp-234157902.html?src=rss

Meta sues advertisers in Brazil and China over ‘celeb bait’ scams

Meta has sued the people and groups behind three scam operations that used images and deepfakes of celebrities to lure users to scam websites. According to the company, the three entities were based in China and Brazil and targeted people in the US, Japan and other countries. The ads promoted fraudulent investment schemes and fake health products.

Meta said that it had filed lawsuits against several people in Brazil who promoted fake or unapproved healthcare products, as well as online courses promoting those products. The company also sued a China-based entity it says used ads featuring celebrities "as part of a larger fraud scheme that lured people into joining so-called investment groups." The company didn't provide details on how many ads these groups had run on Facebook, how many social media users had seen or interacted with the ads or how long the scammers had been operating on the platform.

So-called "celeb bait" ads have been a long-running issue for the company. Engadget has previously documented celeb bait scams on Facebook, including ones that frequently use Elon Musk and Fox News personalities to hawk fake cures for diabetes. The Oversight Board has also criticized the company for not doing enough to combat such scams. In its update, Meta says that "because scam ads are designed to look real, they’re not always easy to detect." The company also noted that it has now enrolled "more than 500,000" celebrities and public figures into its facial recognition system that's meant to automatically detect scam ads using the faces of famous people. 

Meta's handling of scammy advertisers has come under increased scrutiny in recent months after Reuters reported that researchers at the company at one point estimated that as much as 10 percent of its ad revenue could be coming from scams and banned products. According to that reporting, the billions of dollars Meta has made from problematic advertisers have also made the company slow to take action against repeat offenders.

In addition to suing the groups behind the celeb bait ads, Meta says it has upgraded its ability to detect scam ads that use cloaking, a tactic that has at times hindered its internal review systems. The company also sued a Vietnam-based advertiser it says used scam ads to hawk "deeply discounted items from well-known brands," including Longchamp.

Meta also took legal action against eight former "Meta Business Partners" who promoted services that would "un-ban" accounts or offered other "account restoration services." The company says it will "consider taking additional legal action, including litigation, if they don’t comply" with cease and desist orders.

Update, February 26, 2026, 1:16PM PT: This story was updated to specify that Meta’s internal estimates around ad revenue included scams and banned products.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-sues-advertisers-in-brazil-and-china-over-celeb-bait-scams-190000268.html?src=rss

Mark Zuckerberg testifies in social media addiction trial that Meta just wants Instagram to be ‘useful’

Mark Zuckerberg took the stand Wednesday in a high-profile jury trial over social media addiction. In an appearance that was described by NBC News as "combative," the Facebook founder reportedly said that Meta's goal was to make Instagram "useful," not to increase the time users are spending in the app. 

On the stand, Zuckerberg was questioned about a company document that said improving engagement was among "company goals," according to CNBC. But Zuckerberg claimed that the company had "made the conscious decision to move away from those goals, focusing instead on utility," according to The Associated Press. "If something is valuable, people will use it more because it’s useful to them,” he said. 

The trial stems from a lawsuit brought by a California woman identified as "KGM" in court documents. The now 20-year-old alleges that she was harmed as a child by addictive features in Instagram, YouTube, Snapchat and TikTok. TikTok and Snap opted to settle before the case went to trial. 

Zuckerberg was also asked about previous public statements, including his remarks on Joe Rogan's podcast last year that he can't be fired by Meta's board because he controls a majority of the voting power. According to The New York Times, Zuckerberg accused the plaintiffs' lawyer of "mischaracterizing" his past comments more than a dozen times.  

Zuckerberg's appearance in court also apparently prompted the judge to warn people in the courtroom not to record the proceedings using AI glasses. As CNBC notes, members of Zuckerberg's entourage were spotted wearing Meta's smart glasses as the CEO was escorted into the courthouse. It's unclear if anyone was actually using the glasses in court, but legal affairs journalist Meghann Cuniff reported that the judge was particularly concerned about the possibility of jurors being recorded or subjected to facial recognition. (Meta's smart glasses do not currently have native facial recognition abilities, but recent reports suggest the company is considering adding such features.)

The Los Angeles trial has been closely watched not just because it marked a rare in-court appearance for Zuckerberg. It's among the first of several cases where Meta will face allegations that its platforms have harmed children. In this case and in a separate proceeding in New Mexico, Meta's lawyers have cast doubt on the idea that social media should be considered a real addiction. Instagram chief Adam Mosseri previously testified in the same Los Angeles trial that Instagram isn't "clinically addictive."

This article originally appeared on Engadget at https://www.engadget.com/social-media/mark-zuckerberg-testifies-in-social-media-addiction-trial-that-meta-just-wants-instagram-to-be-useful-234332316.html?src=rss

Meta really wants you to believe social media addiction is ‘not a real thing’

Meta went to court this week in two major trials over alleged harms facilitated by its platform. In New Mexico, the state's attorney general has accused the company of facilitating child exploitation and harming children through addictive features. In a separate case in Los Angeles, a California woman sued the company over mental health harms she says she suffered as the result of addictive design choices from Meta and others.

In both cases, Meta has disputed the idea that social media should be considered an "addiction." On the stand this week, Instagram chief Adam Mosseri said that social media isn't "clinically addictive," comparing it to being "addicted" to a Netflix show.

In opening statements in the New Mexico trial, Meta's lawyer Kevin Huff went further. He told the jury that "social media addiction is not a thing" because it's not in the Diagnostic and Statistical Manual of Mental Disorders (DSM), the handbook used by mental health professionals in the US.

"According to the American Psychiatric Association, they don't recognize the concept of social media addiction in the same way as addiction to drugs and alcohol," Huff said during opening arguments that were broadcast by Courtroom View Network. "What you see on the screen is what's called the DSM, which is basically the official manual for recognized mental disorders. The American Psychiatric Association studied this and decided that social media addiction is not a thing."

But the American Psychiatric Association (APA) has never said that social media addiction doesn't exist. The organization provides information and resources about social media addiction on its website. "Social media addiction is not currently listed as a diagnosis in the DSM-5-TR—but that does not mean it doesn’t exist," the APA said in a statement to Engadget.

Dr. Tania Moretta, a clinical psychophysiology researcher who has studied social media addiction, agrees. "The absence of a DSM classification does not mean that a behavior cannot be addictive, maladaptive or clinically significant," she told Engadget. That argument, she said, "reflects a misunderstanding" of how psychiatry professionals define and classify conditions. "Diagnostic manuals formalize scientific consensus; they do not define the boundaries of legitimate scientific inquiry. Many maladaptive behaviors and clinically significant symptom patterns are studied and treated well before receiving official classification."

Meta's critics have long claimed that the company has profited from addictive features that hook children and teens. The trials in Los Angeles and New Mexico are just the start of several court battles over the issue. The social media company is also facing a high-profile trial with school districts in June, and lawsuits from 41 state attorneys general.

Moretta said that social media addiction is a field that requires more study, but that there is already evidence that it can have harmful effects on some people. "At present, from a scientific perspective, there is documented evidence that social media use disorder is associated with both psychophysiological alterations, including changes in reward/motivational and inhibitory/regulatory systems, and clinically significant negative impacts on functioning (e.g., sleep disturbances, psychological distress, impairment in social, academic, or occupational domains)," she said. "The key question is not whether all social media use is addictive, but whether a subset of users exhibits patterns consistent with behavioral addiction models and whether specific platform design features may exacerbate vulnerability in predisposed individuals."

Both trials are ongoing and expected to last the next several weeks. In New Mexico, jurors have already heard from former employee turned whistleblower Arturo Bejar and former exec Brian Boland, both of whom have publicly criticized the company for not prioritizing safety. In Los Angeles, Mosseri's testimony has wrapped up, but Meta CEO Mark Zuckerberg is expected to testify next week. The trials will also feature extensive internal documents from Meta, including details about the company's own research into the mental health impacts of its platform on young people.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-really-wants-you-to-believe-social-media-addiction-is-not-a-real-thing-130000257.html?src=rss

Meta turned Threads algorithm complaints into an official feature

Threads users have been complaining about its recommendation algorithm pretty much since the beginning of the platform. At some point, this turned into a meme, with users writing posts jokingly addressed to the algorithm in which they requested to see more posts about the topics they're actually interested in.

Now, Meta is turning those "Dear algorithm" posts into an official feature that it says will allow Threads users to tune their recommendations in real time. With the change, users can write a post that begins with "dear algo" to adjust their preferences. For example, you could write "dear algo, show me more posts about cute cats." You can also ask to see fewer posts about topics you don't want to see, like "dear algo, stop showing me posts about sick pets."

You can track your requests to the algorithm in the app's settings in order to revisit them or remove them. You can also repost other users' "dear algo" posts to have those topics reflected in your feed. Importantly, "dear algo" requests are temporary and only last for three days at a time, which Meta says is meant to keep the algorithm feeling fresher and more flexible. 

The rollout of the feature follows a limited test late last year. Now, "dear algo" posts will work for Threads users in the US, UK, Australia and New Zealand with more countries coming "soon."

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-turned-threads-algorithm-complaints-into-an-official-feature-180000236.html?src=rss