Meta is bringing usernames to Facebook Groups

Meta has long required Facebook users to post under their real names (with some exceptions), but at least for Facebook Groups, the company is now offering new options. Members of Facebook Groups will now be able to participate under a custom nickname and avatar, rather than being forced to use their real name or post anonymously.

You can set a custom nickname via the same toggle that lets you create an anonymous post, Meta says. Nicknames have to be enabled by a group's administrators, and in some cases individually approved, but once they are, you can switch freely between posting under your real name or a nickname. The only other limitation is that the nickname needs to comply with Meta's existing Community Standards and Terms of Service. When setting your nickname, you can also pick from a selection of custom avatars, which seem to mostly be pictures of cute animals wearing sunglasses.

Groups are one of several areas of Facebook that Meta has continually tweaked in the last few years to bring back users. In 2024, the company introduced a tab that highlighted local events shared in Facebook groups. More recently, it added tools for admins to convert private groups into public ones to try to draw in new members. No single change can make Facebook the center of young people's lives the way it was in its heyday, but letting people use what amounts to a username might encourage Facebook users to explore new groups and post more freely.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-bringing-usernames-to-facebook-groups-231405698.html?src=rss

Australia is adding Twitch to its social media ban for children

The breadth and reach of Australia's pioneering social media ban continue to grow: livestreaming platform Twitch has now been added to the list of platforms banned for users under 16 years of age. The nationwide ban, the first of its kind, encompasses Facebook, X, TikTok, Snapchat, YouTube and, most recently, Reddit.

According to the BBC, Australia's eSafety Commissioner Julie Inman Grant said Twitch had been included because it was "a platform most commonly used for livestreaming or posting content that enables users, including Australian children, to interact with others in relation to the content posted."

No other platforms are expected to be added before the law goes into effect next month. Inman Grant also said on Friday that Pinterest would not be included in the ban because the core purpose of the platform was not online social interaction.

Under the ban, platforms are expected to take "reasonable steps" to prevent underage users from accessing their services, and they face steep fines for failing to comply. While VPNs may provide a workaround in some instances, the law still creates an enormous barrier to entry for users under 16.

Earlier this month, Denmark announced its lawmakers had reached a bipartisan agreement to enact a similar ban for users under 15, though details were scarce. In the US, several states, including Texas and Florida, have attempted to enact such a ban, though these measures have either failed to pass or are held up in court. Even laws that don't go as far, such as Utah's requirement that parents grant permission for teens to open social media accounts, are facing stiff opposition on First Amendment grounds.

Concern around minors' use of social media continues to grow as evidence mounts about the potential ill effects these platforms have on their youngest users.

This article originally appeared on Engadget at https://www.engadget.com/social-media/australia-is-adding-twitch-to-its-social-media-ban-for-children-202033276.html?src=rss

DeleteMe is 30 percent off for Black Friday — and it’s the most effective anti-spam tool I’ve ever used

We like our hardware here at Engadget, from high-end gaming headsets to powerful heaters to the fabric shaver you never knew you needed. For Black Friday, we've found great deals on all of it. However, since I ascended to the software plane years ago and now swim in the digital aetherium, my favorite product of the year is an app — and not even one you use yourself.

DeleteMe will boost your quality of life, no matter where you are or what you're doing, by sharply reducing the amount of spam you receive on every channel. From now until December 5, it's offering 30 percent off all subscriptions with the coupon code BFCM30OFF25.

Chances are you've seen at least one public-facing "people search" site. You know the ones: they usually have names like 411.info or Find.people, and you can type in a person's name and find all the info the site has been able to scrape on them. If you search your own name, it's hard to avoid immediately running to the kitchen to make yourself a tinfoil hat. The most annoying thing is that these "data broker" sites are perfectly legal to run and use.

However, that's also their Achilles' heel. If they want to operate in the open, brokers legally have to include a way for you to remove yourself from their database. Most of them make it as aggravating and time-consuming as possible, but the option is there.

That's where DeleteMe comes in. All you have to do is sign up and enter all the data you want removed from data broker sites. DeleteMe handles the rest: it searches for your information on people-search sites, automatically sends opt-out requests, bugs the broker if they don't comply quickly enough and gives you a weekly report on how it's doing. You do have to be OK with DeleteMe itself having your data, but I trust them way more than the randos over at violate.privacy.

It's so much faster than handling all the opt-out requests yourself, which, if you've ever tried it, rapidly becomes a full-time job. Since I've been using DeleteMe, I almost never get spam calls or texts anymore, except in short bursts before its crawlers catch my name on another site. Granted, it doesn't work on shady data brokers that don't follow the rules, but it still delivers a massive reduction in your online footprint.

The only problem is that it's pretty expensive, so I strongly recommend jumping on this Black Friday deal. A few months on DeleteMe should be long enough for you to see if it reduces spam for you — and I'm betting it will.

This article originally appeared on Engadget at https://www.engadget.com/deals/deleteme-is-30-percent-off-for-black-friday--and-its-the-most-effective-anti-spam-tool-ive-ever-used-190526056.html?src=rss

Meta’s Chief AI Scientist is leaving the company after 12 years

One of Meta's top AI researchers, Yann LeCun, is leaving after 12 years with the company to found his own AI startup, he announced. LeCun, who is also a professor at New York University, joined the company in 2013 to lead Meta's Fundamental AI Research (FAIR) lab and later took on the role of Chief AI Scientist. 

LeCun said his new startup would "continue the Advanced Machine Intelligence research program (AMI) I have been pursuing over the last several years with colleagues at FAIR, at NYU, and beyond" and that it would partner with Meta. "The goal of the startup is to bring about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences," he wrote in an update on Threads. "AMI will have far-ranging applications in many sectors of the economy, some of which overlap with Meta’s commercial interests, but many of which do not. Pursuing the goal of AMI in an independent entity is a way to maximize its broad impact."

Speculation about LeCun's future at Meta has been mounting in recent months. Earlier this year, the company invested nearly $15 billion in Scale AI and made its 28-year-old CEO, Alexandr Wang, Meta's Chief AI Officer. Meta also recruited Shengjia Zhao, who helped create GPT-4, making him Chief AI Scientist of its newly created Meta Superintelligence Labs unit.

LeCun, on the other hand, has been openly skeptical of LLMs. "We are not going to get to human-level AI by just scaling LLMs," he said during an appearance on the Big Technology podcast earlier this year. And in a recent talk at a conference, he advised aspiring researchers to "absolutely not work on LLMs," according to remarks reported by The Wall Street Journal.

At the same time, Meta has been reshuffling its AI teams. The company cut "several hundred" jobs from its Superintelligence group, including from FAIR, last month. And LeCun has "had difficulty getting resources for his projects at Meta as the company focused more intently on building models to compete with immediate threats from rivals including OpenAI, Alphabet Inc.’s Google and Anthropic," Bloomberg reported.

LeCun said he will stay on at Meta until the end of the year. "I am extremely grateful to Mark Zuckerberg, Andrew Bosworth (Boz), Chris Cox, and Mike Schroepfer for their support of FAIR, and for their support of the AMI program over the last few years," he wrote.

This article originally appeared on Engadget at https://www.engadget.com/ai/metas-chief-ai-scientist-is-leaving-the-company-after-12-years-224325268.html?src=rss

YouTube is once again trying to make DMs happen

YouTube has started a renewed effort to integrate direct messaging into its platform. According to a support page, the service has started testing DMs as a way for users to share and discuss videos. The test is for users aged 18 and up in Ireland and Poland. But while a DM usually comes with some expectation of privacy, Google noted that "messages may be reviewed to ensure they follow our Community Guidelines."

This isn't the video platform's first attempt at messaging. YouTube added DMs to its app in 2017, then removed the feature in 2019 to emphasize public conversations in comments sections. The new test for sharing within YouTube's ecosystem won't change the other ways you might send people videos. Reintroducing the same system six years after cutting it seems like an odd choice, but Google claims this is "a top feature request," so maybe it'll see broader adoption this time around.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-is-once-again-trying-to-make-dms-happen-205724221.html?src=rss

Meta asks the Oversight Board to weigh in (a little) on Community Notes ahead of expansion

When Meta announced last year that it was ditching third-party fact checkers in favor of an X-style Community Notes system, the company was careful to note that it would only implement the changes within the United States to start. Now, nearly a year later, the social media company is getting ready to expand the crowd-sourced fact checks to more countries, and is asking the Oversight Board for advice on a potential rollout. 

The company has requested that the board weigh in on "factors we should consider when deciding which countries, if any, to omit from the international roll out" of Community Notes. Notably, Meta isn't asking the Oversight Board to advise on the merits of replacing traditional fact-checking organizations. Instead, the company wants guidance on how to approach country-specific challenges and whether there should be any carveouts. 

"We respectfully ask the Board to focus its examination on the country-level factors relevant to omitting countries from the international roll-out, and not on topics such as general product design or the operation of the Community Notes algorithm," the company wrote in its request shared by the board. 

Up to now, Meta has been experimenting with Community Notes on Facebook, Instagram and Threads in the United States only. People who want to be able to author notes still need to be approved, but the company allows anyone to rate notes. However, it seems that the feature so far hasn't gained as much traction for Meta as it has on X. In September, the company said that just 6 percent of the more than 15,000 notes that had been contributed had actually been published. By the end of the month, the number of published notes had risen to over 2,000, according to Meta. More than 90,000 people have signed on as contributors.

In a statement from the Oversight Board, the group said it would consider issues like whether a crowd-sourced fact checking system would make sense in countries with "low levels of freedom of expression" or without a free press, as well as places with "low levels of digital literacy." It also said it was hoping to hear public comments from researchers who have studied different approaches to countering misinformation.

Unlike a typical case from the Oversight Board, which deals with specific content moderation decisions, this request carries no obligation for Meta to implement any of the group's recommendations. But the company has followed the board's suggestions in past policy advisory opinions, including its decision to roll back COVID-19 misinformation rules following a recommendation from the board.

Update, November 19, 2025, 9:15AM PT: Added additional stats about Community Notes adoption.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-asks-the-oversight-board-to-weigh-in-a-little-on-community-notes-ahead-of-expansion-110000208.html?src=rss

Snapchat’s new ‘Topic Chats’ feature makes it easier to comment publicly on things you’re interested in

Snapchat has introduced a new feature called “Topic Chats,” which allows users of the social network to participate in public conversations about popular trends. By its own admission, Snapchat has previously focused on private conversations, but says the growth of its TikTok-like Spotlight feature made it clear that people want to comment publicly about topics they’re interested in.

Topic Chats, which are coming to Canada, New Zealand and the US first, will appear in different areas of the Snapchat app as a big yellow button that says "Join the Chat." Tap it and you’ll join that conversation, where you can also browse related Spotlight videos. The company used F1 and the reality show Below Deck as examples of topics that could be featured.

Snapchat will show you when your friends are in a particular chat, and any that you join will then appear at the top of your personalized Topic Chat page. Snap says it will moderate the new platform to ensure it remains safe, and told TechCrunch that it will use LLMs, among other measures, to ensure that topics being engaged with are of an appropriate nature. All profiles will remain private unless you’re already friends, which Snap says will prevent unwanted friend requests or direct messages.

Topic Chats are set to go live in the coming weeks, and will appear in Chat shortcuts and the Stories page, as well as when searching or viewing Spotlight videos.

This article originally appeared on Engadget at https://www.engadget.com/social-media/snapchats-new-topic-chats-feature-makes-it-easier-to-comment-publicly-on-things-youre-interested-in-171111409.html?src=rss

Roblox begins asking tens of millions of children to verify their age with a selfie

Roblox is starting to roll out the mandatory age checks that will require all of its users to submit an ID or scan their face in order to access the platform's chat features. The updated policy, which the company announced earlier this year, will be enforced first in Australia, New Zealand and the Netherlands and will expand to all other markets by early next year.

The company also detailed a new "age-based chat" system, which will limit users' ability to interact with people outside of their age group. After verifying or estimating a user's age, Roblox will assign them to one of six age groups, ranging from 9 years and younger to 21 years and older. Teens and children will then be restricted from connecting with people who aren't in or close to their estimated age group in in-game chats.

Unlike most social media apps, which have a minimum age of 13, Roblox permits much younger children to use its platform. Since most children and many teens don't have IDs, the company uses "age estimation" tech provided by identity company Persona. The checks, which use video selfies, are conducted within Roblox's app, and the company says that images of users' faces are deleted immediately after the process is completed.

Roblox didn't provide details on how accurate its age estimation features are, but the company's Chief Safety Officer, Matt Kaufman, said that it was "pretty accurate" at guessing the approximate age of most of its users. "What we find is that the algorithms between that 5 and 25 years old [range] are typically pretty accurate within one or two years of their age," he said during a briefing with reporters.

A spokesperson for Roblox later added that Persona’s models “achieved a Mean Absolute Error (MAE) of 1.4 years for minors under 18 based on testing by the Age Check Certification Scheme (ACCS) in the UK.” Parents are also able to adjust their child’s birthday in the app via its parental control settings.

All Roblox users can now voluntarily submit to a face scan or provide an ID to the company to ensure their access to its chat features isn't interrupted. The company says it will be enforcing age checks for all users by January and that people in the Netherlands, Australia and New Zealand will need to comply beginning in early December. Next year, the company also plans to put age restrictions around users' ability to access links to outside social media sites and to participate in Roblox Studio. 

Roblox has repeatedly come under fire for alleged safety lapses even as it's released a flurry of child safety updates in recent years. The company is facing lawsuits from Texas, Louisiana and Kentucky amid accusations that it hasn't done enough to prevent adults from targeting teens and children on its service.

Update, November 18, 2025, 9:45AM PT: This story was updated to add additional information from a Roblox spokesperson.

This article originally appeared on Engadget at https://www.engadget.com/gaming/roblox-begins-asking-tens-of-millions-of-children-to-verify-their-age-with-a-selfie-120000311.html?src=rss

Facebook rolls out new tools for creators to track accounts stealing their content

Creators on Facebook and Instagram have long griped about accounts that lift their videos without permission. Now, Meta is rolling out a new tool that allows creators to more easily track when their videos have been reposted by others.

The company introduced a new tool for creators called "content protection," which can automatically detect when a creator's original reel is reposted, either fully or partially, on Facebook or Instagram. Creators who are enrolled will be able to see which accounts have shared their work and will be able to take a range of actions on the clip. 

Available actions include "track," which allows the creator to add a label indicating the clip originally came from their account. In addition to the link back, creators will also be able to keep tabs on the number of views it's getting. Creators can also opt to block a clip entirely, which will prevent anyone else from being able to view the reel. (Meta notes choosing this option won't impose additional penalties on the account that lifted the original content.) Finally, creators can choose to "release" the video, which removes it from their dashboard so they will no longer have any visibility into how it's performing.

The dashboard tracks instances of reused content. (Image: Meta)

The dashboard also provides some other details that could help creators decide how to respond. For example, they can see whether the video using their content is being monetized, which may influence their decision to track with attribution or block entirely. On the other hand, if a reel was lifted from an account with few followers, they may opt to simply keep an eye on it. 

Meta has offered Facebook creators some of these capabilities in the past through its rights manager platform, but the company says making the features available directly in the Facebook app will make them accessible to more people. Notably, the company is only offering content protection to creators who share reels on Facebook. So even though the feature will detect copycats on Instagram, it will only do so if the original video has been posted to Facebook. 

Meta says content protection is rolling out now to creators in its monetization program "who meet enhanced integrity and originality standards" as well as those already using rights manager. Creators can also apply for access directly.

This article originally appeared on Engadget at https://www.engadget.com/social-media/facebook-rolls-out-new-tools-for-creators-to-track-accounts-stealing-their-content-201020255.html?src=rss

‘Divine’ is a Jack Dorsey-backed Vine reboot for 2025

Nearly a decade after going offline, Vine is (sort of) back and, in a truly bizarre twist, Jack Dorsey is at least partially responsible. An early Twitter employee has released a beta version of a rebooted Vine — now called "Divine" — that revives the app's six-second videos and includes a portion of the original app's archive. 

The project comes from Evan Henshaw-Plath, a former Twitter employee who goes by "Rabble," and has backing from Dorsey's nonprofit "and Other Stuff," which funds experimental social media apps built on the open source nostr protocol. Rabble has so far managed to resurrect about 170,000 videos from the original Vine thanks to an old archive created before Twitter shut down the app in 2017. In an FAQ on Divine's website, he says that he also hopes to restore "millions" of user comments and profile photos associated with those original posts as well. 

But Divine is more than just a home for decade-old clips. New users can create six-second looping videos of their own for the platform. The app also has many elements that will be familiar to people who have used Bluesky or other decentralized platforms, including customizable controls for content moderation and multiple feed algorithms to choose from. The site's FAQ says Divine plans to support custom, user-created algorithms too.

Divine is also taking a pretty strong stance against AI-generated content. The app will have built-in AI detection tools that will add badges to content that's been verified as not created or edited with AI tools. And, according to TechCrunch, the app will block uploads of suspected AI content.

"We're in the middle of an AI takeover of social media," Divine explains on its website. New apps like Sora are entirely AI-generated. TikTok, YouTube, and Instagram are increasingly flooded with AI slop—videos that look real but were never captured by a camera, people who don't exist, scenarios that never happened. Divine is fighting back. We're creating a space where human creativity is celebrated and protected, where you can trust that what you're watching was made by a real person with a real camera, not generated by an algorithm."

While all that may sound intriguing, Divine has a long way to go before it delivers on those ambitions. The app hasn't made it onto either app store yet, though its founder says 10,000 people have already joined an iOS beta. In the meantime, you can browse some of the app's videos, including some old Vine posts, on its website, though not all of them are working properly at the moment.

Still, any kind of reboot is good news for fans of the original, who have long hoped the app might make a comeback. Elon Musk has suggested more than once that he would revive Vine in some way, but has yet to follow through.

This article originally appeared on Engadget at https://www.engadget.com/social-media/divine-is-a-jack-dorsey-backed-vine-reboot-for-2025-192307190.html?src=rss