A Facebook test makes link-sharing a paid feature for creators

Creators and publishers have long worried about Meta's ability to throttle links to outside content. Now, the company is testing out a new scheme that effectively puts link-sharing behind a paywall for creators on Facebook.

Under the test, a Meta Verified subscription will determine how many links a creator can share on their profile per month. According to a screenshot shared by social media consultant Matt Navarra, creators in the test recently received a notification from Meta informing them that "certain Facebook profiles without Meta Verified, including yours, will be limited to sharing links in 2 organic posts per month."

Meta is making link sharing pay to play with a new test.

A spokesperson for Meta confirmed the test to Engadget. The test is currently affecting an unspecified number of creators and pages using "professional mode" on Facebook. Publishers aren't affected for now. "This is a limited test to understand whether the ability to publish an increased volume of posts with links adds additional value for Meta Verified subscribers," the spokesperson said.

While Meta seems to be trying to downplay the significance of the test, it's a notable shift for the company. Many creators and businesses rely on Facebook, and reducing their ability to send traffic to outside websites could be a significant hit. Many creators are already frustrated that the company puts its better customer service features behind the Meta Verified subscription, which starts at $14.99/month. Making link-sharing a premium feature as well would be even more unpopular.


This article originally appeared on Engadget at https://www.engadget.com/social-media/a-facebook-test-makes-link-sharing-a-paid-feature-for-creators-224632957.html?src=rss

Meta is ‘pausing’ third-party VR headsets from ASUS and Lenovo

Last year, Meta announced that it was opening up its VR operating system to other headset makers, starting with ASUS and Lenovo. Now, it seems that Meta is pumping the brakes on the effort and those third-party Horizon OS headsets might never actually launch.

The company has "paused" the program, Road to VR reported. Meta confirmed the move in a statement to Engadget, saying that it's instead focusing on "building the world-class first-party hardware and software needed to advance the VR market." ASUS and Lenovo didn't immediately respond to a request for comment. Both companies have said little about the headsets since they were first announced in 2024. ASUS' was going to be a "performance gaming" headset under its Republic of Gamers (ROG) brand, while Lenovo's was intended to be a mixed reality device focused on "productivity, learning and entertainment" experiences.

The shift isn't entirely surprising. Meta Connect was very light on virtual reality news this year as smart glasses have become a central focus for the company. Earlier this month, Bloomberg reported that Meta was planning significant cuts to the teams working on virtual reality and Horizon Worlds. The company said at the time it was "shifting some of our investment from Metaverse toward AI glasses and wearables given the momentum there."

Still, Meta is seemingly leaving the door open for third-party VR headsets in the future. "We’re committed to this for the long term and will revisit opportunities for 3rd-party device partnerships as the category evolves," the company said.


This article originally appeared on Engadget at https://www.engadget.com/ar-vr/meta-is-pausing-third-party-vr-headsets-from-asus-and-lenovo-193622900.html?src=rss

Meta is rolling out Conversation Focus and AI-powered Spotify features to its smart glasses

Back in September during Meta Connect, the company previewed a new ability for its smart glasses lineup called Conversation Focus. The feature, which is able to amplify the voices of people around you, is now starting to roll out in the company's latest batch of software updates.

When enabled, the feature is meant to make it easier to hear the people you're speaking with in a crowded or otherwise noisy environment. "You'll hear the amplified voice sound slightly brighter, which will help you distinguish the conversation from ambient background noise," Meta explains. It can be enabled either via voice commands ("hey Meta, start Conversation Focus") or by adding it as a dedicated "tap-and-hold" shortcut.

Meta is also adding a new multimodal AI feature for Spotify. With the update, users can ask their glasses to play music on Spotify that corresponds with what they're looking at by saying “hey Meta, play a song to match this view.” Spotify will then start a playlist "based on your unique taste, customized for that specific moment." For example, looking at holiday decor might trigger a similarly-themed playlist, though it's not clear how Meta and Spotify may translate more abstract concepts into themed playlists. 

Both updates are starting to roll out now to Meta Ray-Ban glasses (both Gen 1 and Gen 2 models), as well as the Oakley Meta HSTN frames. The update will arrive first to those enrolled in Meta's early access program, and will be available "gradually" to everyone else.

Meta's newest model of smart glasses, the Oakley Meta Vanguard shades, is also getting some new features in the latest software update. Meta is adding the option to trigger specific commands with a single word, rather than having to say "hey Meta." For example, saying "photo" will be enough to snap a picture and saying "video" will start a new recording. The company says the optional feature is meant to help athletes "save some breath" while on a run or bike ride.


This article originally appeared on Engadget at https://www.engadget.com/wearables/meta-is-rolling-out-conversation-focus-and-ai-powered-spotify-features-to-its-smart-glasses-192133928.html?src=rss

Judge blocks Louisiana’s social media age verification law

A Louisiana law that would have required social media platforms to verify the ages of their users has been blocked by a judge. The law, known as the Secure Online Child Interaction and Age Limitation, was passed in 2023 and required Meta, Reddit, Snap, YouTube, Discord and others to implement age verification and parental control features.

The ruling came just days before the law, which technically took effect over the summer, would have started to be enforced. In his ruling, Judge John W. deGravelles wrote that the law's "age-verification and parental-consent requirements are both over- and under-inclusive," and that its definition of "social media platform" was "nebulous."

The ruling was a victory for NetChoice, a lobbying group that represents the tech industry and has challenged the growing number of age verification laws around the world. The group had argued that the law was unconstitutional and posed a safety and security risk.

In a statement following the ruling, the group pointed to the "massive privacy risk" posed by the Louisiana law and others like it. "Louisiana’s law would have done more than chill speech," Paul Taske, the co-director of NetChoice’s Litigation Center said. "It would have created a massive privacy risk for Louisianans like those playing out in real time in countries without a First Amendment, like the UK."

In a statement, Louisiana Attorney General Liz Murrill said she would appeal the ruling. “The assault on children by online predators is an all-hands-on-deck problem,” Murrill said. “It’s unfortunate that the court chose to protect huge corporations that facilitate child exploitation over the legislative policy to require simple age verification mechanisms.”

Update, December 16, 11:50AM PT: This story has been updated to add a statement from the Louisiana Attorney General’s office.

This article originally appeared on Engadget at https://www.engadget.com/social-media/judge-blocks-louisianas-social-media-age-verification-law-001212758.html?src=rss

Disney+ is now available to stream on Meta’s Quest headsets

Meta revealed that Disney+ was coming to its Quest headsets earlier this year during its Connect event. Now, the streaming app and its vast catalog are finally available to Meta's VR users in the United States.

Meta recently overhauled the Quest's entertainment experience with a new Horizon TV hub that brings its streaming features into one place. Horizon TV also added support for Dolby Vision and Dolby Atmos sound, both of which Disney+ subscribers can now take advantage of. According to Meta, there are a "select" number of titles available to stream in Dolby Vision 4K HDR, and Disney+ Premium subscribers can stream with Dolby Atmos sound. The company also says there are more than 100 titles in Disney's catalog that support 4K UHD and HDR and some Marvel and Pixar titles that support IMAX's expanded aspect ratio.

The app is available now on the latest version of Horizon OS. Though Disney+ is for now limited to US-based Quest users, Meta says that international availability is "coming soon." 


This article originally appeared on Engadget at https://www.engadget.com/ar-vr/disney-is-now-available-to-stream-on-metas-quest-headsets-203622392.html?src=rss

Meta is reportedly working on a new AI model called ‘Avocado’ and it might not be open source

Mark Zuckerberg has for months publicly hinted that he is backing away from open-source AI models. Now, Meta's latest AI pivot is starting to come into focus. The company is reportedly working on a new model, known inside of Meta as "Avocado," which could mark a major shift away from its previous open-source approach to AI development. 

Both CNBC and Bloomberg have reported on Meta's plans surrounding "Avocado," with both outlets saying the model "could" be proprietary rather than open-source. Avocado, which is due out sometime in 2026, is being worked on inside of "TBD," a smaller group within Meta's AI Superintelligence Labs that's headed up by Chief AI Officer Alexandr Wang, who apparently favors closed models.

It's not clear what Avocado could mean for Llama. Earlier this year, Zuckerberg said he expected Meta would "continue to be a leader" in open source but that it wouldn't "open source everything that we do." He's also cited safety concerns as they relate to superintelligence. As both CNBC and Bloomberg note, Meta's shift has also been driven by issues surrounding the release of Llama 4. The Llama 4 "Behemoth" model has been delayed for months; The New York Times reported earlier this year that Wang and other execs had "discussed abandoning" it altogether. And developers have reportedly been unimpressed with the Llama 4 models that are available. 

There have been other shakeups within the ranks of Meta's AI groups as Zuckerberg has spent billions of dollars building a team dedicated to superintelligence. The company laid off several hundred workers from its Fundamental Artificial Intelligence Research (FAIR) unit. And Meta veteran and Chief AI Scientist Yann LeCun, who has been a proponent of open source and skeptical of LLMs, recently announced he was leaving the company.

That Meta may now be pursuing a closed AI model is a significant shift for Zuckerberg, who just last year said "fuck that" about closed platforms and penned a lengthy memo titled "Open Source AI is the Path Forward." But the notoriously competitive CEO is also apparently intensely worried about falling behind OpenAI, Google and other rivals. Meta has said it expects to spend $600 billion over the next few years to fund its AI ambitions.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-is-reportedly-working-on-a-new-ai-model-called-avocado-and-it-might-not-be-open-source-215426778.html?src=rss

Reddit is starting to verify public figures

Like it or not, the checkmark has become an almost universal symbol on most social platforms, even though its exact meaning can vary significantly between services. Now, Reddit, which historically hasn't cared that much about its users' identity, is joining the club and starting to test verification for public figures on its platform.

The company is beginning "a limited alpha test" of the feature with a small "curated" group of accounts that includes journalists from major media outlets like NBC News and the Boston Globe. Businesses that are already using an "official" badge, which Reddit started testing in 2023, will also now have a grey "verified" checkmark instead of the "official" label. 

Verification has long been a thorny issue for many platforms. For users, it's at times been a source of confusion, especially on sites where verified badges only require a paid subscription. Reddit's approach, at least for now, is closer to how Twitter handled verification prior to Elon Musk's takeover of the company.

The company has handpicked the initial group of accounts that will get checkmarks indicating they have verified their identity, and the effort seems to be geared toward high-visibility accounts. "This feature is designed to help redditors understand who they're engaging with in moments when verification matters, whether it's an expert or celebrity hosting an AMA, a journalist reporting news, or a brand sharing information," Reddit explains in a blog post. "Our approach to verification is voluntary, opt-in, and explicitly not about status. It's designed to add clarity for redditors and ease the burden on moderators who often verify users manually."

For now, Reddit users — even notable ones — won't be able to apply for verification. But the company notes that its intention isn't to limit checkmarks to famous people only. A Reddit spokesperson tells Engadget that "our goal is that anyone who wishes to self-identify will be able to do so in the future." 

The company also notes that verification doesn't come with any exclusive perks, like increased visibility or immunity from the rules of individual subreddits. Reddit requires accounts to be in good standing and already active on the platform in order to be eligible for verification. Accounts that are marked NSFW or that "primarily engage in NSFW-tagged communities" won't be eligible. 


This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-is-starting-to-verify-public-figures-170000833.html?src=rss

Nearly one-third of teens use AI chatbots daily

AI chatbots haven't come close to replacing social media in teens' lives, but they are playing a significant role in their online habits. Nearly one-third of US teens report using AI chatbots daily or more, according to a new report from Pew Research.

The report is the first from Pew to specifically examine how often teens are using AI overall, and was published alongside its latest research on teens' social media use. It's based on an online survey of 1,458 US teens who were polled between September 25 and October 9, 2025. According to Pew, the survey was "weighted to be representative of U.S. teens ages 13 to 17 who live with their parents by age, gender, race and ethnicity, household income, and other categories."

According to Pew, 48 percent of teens use AI chatbots "several times a week" or more often, with 12 percent reporting their use at "several times a day" and 4 percent saying they use the tools "almost constantly." That's far fewer than the 21 percent of teens who report almost constant use of TikTok and the 17 percent who say the same about YouTube. But those numbers are still significant considering how much newer these services are compared with mainstream social media apps. 

The report also offers some insight into which AI companies' chatbots are most used among teens. OpenAI's ChatGPT came out ahead by far, with 59 percent of teens saying they had used the service, followed by Google's Gemini at 23 percent and Meta AI at 20 percent. Just 14 percent of teens said they had ever used Microsoft Copilot, and 9 percent and 3 percent reported using Character AI and Anthropic's Claude, respectively.

The survey is Pew's first to study AI chatbot use among teens broadly. (Pew Research)

Pew's research comes as there's been growing scrutiny over AI companies' handling of younger users. Both OpenAI and Character AI are currently facing wrongful death lawsuits from the parents of teens who died by suicide. In both cases, the parents allege that their child's interactions with a chatbot played a role in their death. (Character AI briefly banned teens from its service before introducing a more limited format for younger users.) Other companies, including Alphabet and Meta, are being probed by the FTC over their safety policies for younger users.

Interestingly, the report also indicates there has been little change in US teens' social media use. Pew, which has regularly polled teens about how they use social media, notes that teens' daily use of these platforms "remains relatively stable" compared with recent years. YouTube is still the most widely used platform, reaching 92 percent of teens, followed by TikTok at 69 percent, Instagram at 63 percent and Snapchat at 55 percent. Of the major apps the report surveyed, WhatsApp is the only service to see significant change in recent years, with 24 percent of teens now reporting they use the messaging app, compared with 17 percent in 2022.


This article originally appeared on Engadget at https://www.engadget.com/ai/nearly-one-third-of-teens-use-ai-chatbots-daily-200000888.html?src=rss

The year age verification laws came for the open internet

When the nonprofit Freedom House recently published its annual report, it noted that 2025 marked the 15th straight year of decline for global internet freedom. The biggest decline, after Georgia and Germany, came in the United States.

Among the culprits cited in the report: age verification laws, dozens of which have come into effect over the last year. "Online anonymity, an essential enabler for freedom of expression, is entering a period of crisis as policymakers in free and autocratic countries alike mandate the use of identity verification technology for certain websites or platforms, motivated in some cases by the legitimate aim of protecting children," the report warns.

Age verification laws are, in some ways, part of a years-long reckoning over child safety online, as tech companies have shown themselves unable to prevent serious harms to their most vulnerable users. Lawmakers, who have failed to pass data privacy regulations, Section 230 reform or any other meaningful legislation that would thoughtfully reimagine what responsibilities tech companies owe their users, have instead turned to the blunt tool of age-based restrictions — and with much greater success.  

Over the last two years, 25 states have passed laws requiring some kind of age verification to access adult content online. This year, the Supreme Court delivered a major victory to backers of age verification standards when it upheld a Texas law requiring sites hosting adult content to check the ages of their users.

Age checks have also expanded to social media and online platforms more broadly. Sixteen states now have laws requiring parental controls or other age-based restrictions for social media services. (Six of these measures are currently in limbo due to court challenges.) A federal bill to ban kids younger than 13 from social media has gained bipartisan support in Congress. Utah, Texas and Louisiana passed laws requiring app stores to check the ages of their users, all of which are set to go into effect next year. California plans to enact age-based rules for app stores in 2027.

These laws have started to fragment the internet. Smaller platforms and websites that don't have the resources to pay for third-party verification services may have no choice but to exit markets where age checks are required. Blogging service Dreamwidth pulled out of Mississippi after its age verification laws went into effect, saying that the $10,000 per user fines it could face were an "existential threat" to the company. Bluesky also opted to go dark in Mississippi rather than comply. (The service has complied with age verification laws in South Dakota and Wyoming, as well as the UK.) Pornhub, which has called existing age verification laws "haphazard and dangerous," has blocked access in 23 states.

Pornhub is not an outlier in its assessment. Privacy advocates have long warned that age verification laws put everyone's privacy at risk. Practically, there's no way to limit age verification standards only to minors. Confirming the ages of everyone under 18 means you have to confirm the ages of everyone. In practice, this often means submitting a government-issued ID or allowing an app to scan your face. Both are problematic and we don't need to look far to see how these methods can go wrong. 

Discord recently revealed that around 70,000 users "may" have had their government IDs leaked due to an "incident" involving a third-party vendor the company contracts with to provide customer service related to age verification. Last year, another third-party identity provider that had worked with TikTok, Uber and other services exposed drivers' licenses. As a growing number of platforms require us to hand over an ID, these kinds of incidents will likely become even more common. 

Similar risks exist for face scans. Because most minors don't have official IDs, platforms often rely on AI-based tools that can guess users' ages. A face scan may seem more private than handing over a social security number, but we could be turning over far more information than we realize, according to experts at the Electronic Frontier Foundation (EFF).

"When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics," the organization notes. "A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us."

These issues aren't limited to the United States. Australia, Denmark and Malaysia have taken steps to ban younger teens from social media entirely. Officials in France are pushing for a similar ban, as well as a "curfew" for older teens. These measures would also necessitate some form of age verification in order to block the intended users. In the UK, where the Online Safety Act went into effect earlier this year, we've already seen how well-intentioned efforts to protect teens from supposedly harmful content can end up making large swaths of the internet more difficult to access. 

The law is ostensibly meant to "prevent young people from encountering harmful content relating to suicide, self-harm, eating disorders and pornography," according to the BBC. But the law has also resulted in age checks that reach far beyond porn sites. Age verification is required, in some cases, to access music videos and other content on Spotify. It will soon be required for Xbox accounts. On X, videos of protests have been blocked. Redditors have reported being blocked from a lengthy number of subreddits that are marked NSFW but don't actually host porn, including those related to menstruation, news and addiction recovery. Wikipedia, which recently lost a challenge to be excluded from the law's strictest requirements, is facing the prospect of being forced to verify the ages of its UK contributors, which the organization has said could have disastrous consequences. 

The UK law has also shown how ineffective existing age verification methods are. Users have been able to circumvent the checks by using selfies of video game characters, AI-generated images of ID documents and, of course, Virtual Private Networks (VPNs). 

As the EFF notes, VPNs are incredibly widely used. The software allows people to browse the internet while masking their actual location. They're used by activists, students and people who want to get around geoblocks built into streaming services. Many universities and businesses (including Engadget parent company Yahoo) require their students and workers to use VPNs in order to access certain information. Blocking VPNs would have serious repercussions for all of these groups.

The makers of several popular VPN services reported major spikes in UK sign-ups after the Online Safety Act went into effect this summer, with ProtonVPN reporting a 1,400 percent surge. That's also led to fears of a renewed crackdown on VPNs. Ofcom, the regulator tasked with enforcing the law, told TechRadar it was "monitoring" VPN usage, which has further fueled speculation it could try to ban or restrict their use. And here in the States, lawmakers in Wisconsin have proposed an age verification law that would require sites that host "harmful" content to also block VPNs.

While restrictions on VPNs are, for now, mostly theoretical, the fact that such measures are even being considered is alarming. Up to now, VPN bans have been more closely associated with authoritarian countries without an open internet, like Russia and China. If we continue down a path of trying to put age gates up around every piece of potentially objectionable content, the internet could get a lot worse for everyone.

Correction, December 9, 2025, 11:23AM PT: A previous version of this story stated that Spotify requires age checks to access music in the UK. The service requires some users to complete age verification in order to access music videos tagged 18+ and messaging. We apologize for the error.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-year-age-verification-laws-came-for-the-open-internet-130000979.html?src=rss

Meta will let Facebook and Instagram users in the EU share less data

Meta will soon allow Facebook and Instagram users in the European Union to choose to share less data and see less personalized ads on the platform, the European Commission announced. The change will begin to roll out in January, according to the regulator. 

"This is the first time that such a choice is offered on Meta's social networks," the commission said in a statement. "Meta will give users the effective choice between: consenting to share all their data and seeing fully personalised advertising, and opting to share less personal data for an experience with more limited personalised advertising."

The move from Meta comes after the European Commission fined the company €200 million over its ad-free subscription plans in the EU, which the regulator deemed "consent or pay." Meta began offering ad-free subscriptions to EU users in 2023, and later lowered the price of the plans in response to criticism from the commission. Those plans haven't been very popular, however, with one Meta executive admitting earlier this year that there's been "very little interest" from users.

In a statement, a Meta spokesperson said that "we acknowledge" the European Commission's statement. "Personalized ads are vital for Europe’s economy — last year, Meta’s ads were linked to €213 billion in economic activity and supported 1.44 million jobs across the EU."


This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-will-let-facebook-and-instagram-users-in-the-eu-share-less-data-183535897.html?src=rss