Facebook is pushing ‘local’ content and events to try to win back young adults

Meta has spent the last few years saying that “young adults” are crucial to the future of Facebook. Now, the company is introducing a number of changes to its 20-year-old social network in an effort to get younger users to spend more time in the app.

The updates include a new “local” section in the Facebook app that aims to surface information relevant to your local community, a renewed focus on events planned on the service and a new “Communities” feature for Messenger. The changes, Meta claims, will help young adults “explore their interests and connect with the world beyond their close friends.”

Emphasizing events isn’t an entirely new strategy for the company. It launched a standalone events app in 2016 and then rebranded it a year later to focus on “local” businesses and happenings. It quietly killed the app in 2021.

Meta is taking a slightly different approach this time. The new “local” section will surface Marketplace listings, Reels and posts from Facebook groups alongside event listings from your community. Local news, which Meta has also previously boosted, is notably absent from Meta’s announcement.

In addition to the local tab, the company is also trying to make events more prominent in Facebook. Facebook will now provide personalized event recommendations in the form of a weekly and weekend digest that will be pushed to users via in-app notifications. The company is also changing how invitations to Facebook events work so users can send invites to their connections on Instagram and via SMS and email.

Groups on Facebook, which Meta has said are among the features young adults use most, are also getting attention in this update. Meta is experimenting with a “customizable Group AI” that allows admins to create a bot that can chat with members and answer questions based on posts that have been shared in the group. Elsewhere in the app, Meta is starting to test an Instagram-like Explore section and a dedicated space for Reels inside of Facebook.
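Meta hasn’t said how the Group AI works under the hood, but a bot that answers questions “based on posts that have been shared in the group” maps onto a familiar retrieval pattern: score the group’s posts against the question, then hand the best matches to a language model as context. A minimal sketch of just that retrieval step, with hypothetical types and no real Meta API:

```typescript
// Hypothetical sketch of the retrieval step a "Group AI" bot might use.
// Not Meta's implementation; no Group AI API is public.

interface GroupPost {
  author: string;
  text: string;
}

// Crude relevance score: count question words that appear in the post.
function score(question: string, post: GroupPost): number {
  const words = question.toLowerCase().split(/\W+/).filter((w) => w.length > 3);
  const text = post.text.toLowerCase();
  return words.filter((w) => text.includes(w)).length;
}

// Pick the top-k posts to hand to a language model as context.
function retrieveContext(question: string, posts: GroupPost[], k = 3): GroupPost[] {
  return [...posts]
    .map((p) => ({ p, s: score(question, p) }))
    .filter(({ s }) => s > 0)
    .sort((a, b) => b.s - a.s)
    .slice(0, k)
    .map(({ p }) => p);
}

// The bot's reply would then come from a prompt along the lines of
// "Answer using only these group posts:" plus the retrieved posts.
```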

On Messenger, Meta is adding a new “Communities” feature, a concept it previously introduced on WhatsApp. Communities allows “small to medium-sized” groups to organize their conversations and interact in a way that’s more like a Facebook group. Members can create topic-based chats, and there are built-in moderation and admin tools for controlling who can join.

The changes are part of a broader effort by Meta to bring younger people back to its app with features tailored around how they use social media. “Facebook is still for everyone, but in order to build for the next generation of social media consumers, we’ve made significant changes with young adults in mind,” the Facebook app’s head, Tom Alison, wrote in May.

Whether Meta’s latest efforts will be successful, though, is unclear. The company says there are more than 40 million young adults on Facebook in the US and Canada, a number that’s “the highest it’s been in more than 3 years.” But that’s still a relatively small percentage of its total users in the region and an even tinier slice of its users overall.

This article originally appeared on Engadget at https://www.engadget.com/social-media/facebook-is-pushing-local-content-and-events-to-try-to-win-back-young-adults-161742961.html?src=rss

Texas is suing TikTok for allegedly violating its new child privacy law

Texas Attorney General Ken Paxton has filed a lawsuit against TikTok claiming the company violated a new child privacy law in the state. It's set to be the first test of Texas’ Securing Children Online Through Parental Empowerment (SCOPE) Act since it went into effect just over a month ago.

Under the law, parts of which were struck down by a federal judge, social media platforms are required to verify the ages of younger users and offer parental control features, including the ability for parents to opt their children out of data collection.

Paxton alleges that TikTok’s existing parental control features are insufficient. "However, Defendants do not provide the parents or guardians of users known to be 13 to 17 years old with parental tools that allow them to control or limit most of a known minor’s privacy and account settings,” the lawsuit states. “For example, parents or guardians do not have the ability to control Defendants’ sharing, disclosing, and selling of a known minor’s personal identifying information, nor control Defendants’ ability to display targeted advertising to a known minor."

The lawsuit also argues that the app’s “Family Pairing” tool isn’t “commercially reasonable” because it requires parents to make their own TikTok account and because teens are free to deny their parents’ requests to set up the monitoring tool. TikTok didn’t immediately respond to a request for comment. The app already prohibits most targeted advertising to anyone younger than 18.

"We strongly disagree with these allegations and, in fact, we offer robust safeguards for teens and parents, including family pairing, all of which are publicly available," the company said in a statement shared on X. "We stand by the protections we provide families."

The lawsuit adds to TikTok’s growing legal challenges in the United States. The company is currently fighting a law that could result in a total ban of the app in the country. It’s also facing a separate Justice Department lawsuit related to child privacy.

Update, October 3, 2024, 8:05 PM ET: This story has been updated to add a statement from TikTok. 

This article originally appeared on Engadget at https://www.engadget.com/big-tech/texas-is-suing-tiktok-for-allegedly-violating-its-new-child-privacy-law-235432146.html?src=rss

Women of color running for Congress are attacked disproportionately on X, report finds

Women of color running for Congress in 2024 have faced a disproportionate number of attacks on X compared with other candidates, according to a new report from the nonprofit Center for Democracy and Technology (CDT) and the University of Pittsburgh.

The report sought to “compare the levels of offensive speech and hate speech that different groups of Congressional candidates are targeted with based on race and gender, with a particular emphasis on women of color.” To do this, the report’s authors analyzed 800,000 tweets from a three-month period between May 20 and August 23 of this year. The dataset comprised every post that mentioned a candidate running for Congress who has an account on X.
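The CDT hasn’t published its analysis code, but the headline numbers are simple shares: of the posts mentioning candidates in a given group, what fraction were labeled offensive or hate speech. A minimal sketch of that aggregation, assuming posts have already been labeled (the fields here are hypothetical, not the report’s actual schema):

```typescript
// Sketch of the per-group rate aggregation described in the report.
// Labels and group fields are hypothetical; the CDT's pipeline is not public.

interface LabeledPost {
  candidateGroup: string; // e.g. "Black women candidates"
  offensive: boolean;
  hateSpeech: boolean;
}

function ratesByGroup(posts: LabeledPost[]) {
  const totals = new Map<string, { n: number; off: number; hate: number }>();
  for (const post of posts) {
    const t = totals.get(post.candidateGroup) ?? { n: 0, off: 0, hate: 0 };
    t.n += 1;
    if (post.offensive) t.off += 1;
    if (post.hateSpeech) t.hate += 1;
    totals.set(post.candidateGroup, t);
  }
  // Percent of posts mentioning each group that were offensive / hate speech.
  return [...totals].map(([group, t]) => ({
    group,
    offensivePct: (100 * t.off) / t.n,
    hateSpeechPct: (100 * t.hate) / t.n,
  }));
}
```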

The report’s authors found that more than 20 percent of posts directed at Black and Asian women candidates “contained offensive language about the candidate.” It also found that Black women in particular were targeted with hate speech more often compared with other candidates.

“On average, less than 1% of all tweets that mentioned a candidate contained hate speech,” the report says. “However, we found that African-American women candidates were more likely than any other candidate to be subject to this type of post (4%).” That roughly lines up with X’s recent transparency report — the company’s first since Elon Musk took over the company — which said that rule-breaking content accounts for less than 1 percent of all posts on its platform.

In a statement, an X spokesperson said the company had suspended more than 1 million accounts and removed more than 2 million posts in the first half of 2024 for breaking the company's rules. "While we encourage people to express themselves freely on X, abuse, harassment, and hateful conduct have no place on our platform and violate the X Rules," the spokesperson said. 

Notably, the CDT’s report analyzed both hate speech — which ostensibly violates X’s policies — and “offensive speech,” which the report defined as “words or phrases that demean, threaten, insult, or ridicule a candidate.” While the latter category may not be against X’s rules, the report notes that the volume of such attacks could still deter women of color from running for office. It recommends that X and other platforms take “specific measures” to counteract such effects.

“This should include clear policies that prohibit attacks against someone based on race or gender, greater transparency into how their systems address these types of attacks, better reporting tools and means for accountability, regular risk assessments with an emphasis on race and gender, and privacy preserving mechanisms for independent researchers to conduct studies using their data. The consequences of the status-quo where women of color candidates are targeted with significant attacks online at much higher rates than other candidates creates an immense barrier to creating a truly inclusive democracy.”

Update: October 2, 2024, 12:13 PM ET: This post was updated to include a statement from an X spokesperson. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/women-of-color-running-for-congress-are-attacked-disproportionately-on-x-report-finds-043206066.html?src=rss

Google allegedly got the Juno YouTube app removed from the Vision Pro App Store

Juno, a widely praised (unofficial) YouTube app for Vision Pro, has been removed from Apple’s App Store after complaints from Google, according to an update from Juno’s developer, Christian Selig. Google, Selig says, suggested that his app violates its trademarks.

It’s the latest setback for Selig, who shut down his popular Reddit client Apollo last year after the company changed its developer policies to charge for use of its API. The shutdown of Apollo and other apps like it ignited a sitewide protest from Reddit users and moderators.

This time, Selig says he doesn’t want drama, noting the $5 app was a “hobby project” for him to tinker with developing for visionOS. “I really enjoyed building Juno, but it was always something I saw as fundamentally a little app I built for fun,” Selig wrote on his website. “Because of that, I have zero desire to spin this into a massive fight akin to what happened with Reddit years ago.”

It’s unclear what aspect of Juno may have been the issue. Selig says that Google referenced its “trademarks and iconography” in a message to Apple, “stating that Juno does not adhere to YouTube guidelines and modifies the website” in a way that’s not permitted. “I don’t personally agree with this, as Juno is just a web view, and acts as little more than a browser extension that modifies CSS to make the website and video player look more ‘visionOS’ like,” Selig explains. “No logos are placed other than those already on the website, and the ‘for YouTube’ suffix is permitted in their branding guidelines.”
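Selig’s description — a web view plus CSS overrides — is a common pattern, and it helps explain why the dispute is about guidelines rather than code. A toy, content-script-style sketch of the approach (the selectors and styles below are invented for illustration; this is not Juno’s actual code and may not match YouTube’s real markup):

```typescript
// Toy illustration of "restyle the website via injected CSS," the approach
// Selig describes. Selectors and styles are made up for the example.

const OVERRIDES = `
  /* Frosted, rounded, "visionOS-like" chrome around the player. */
  #player-container {
    border-radius: 24px;
    overflow: hidden;
    backdrop-filter: blur(20px);
  }
  /* Hide site chrome that doesn't make sense in a headset. */
  #masthead, #comments { display: none; }
`;

function injectOverrides(doc: Document): void {
  const style = doc.createElement("style");
  style.textContent = OVERRIDES;
  doc.head.appendChild(style);
}

// In a WKWebView-based app this would run as a user script injected at
// document load; in a browser it could be an extension's content script.
injectOverrides(document);
```

Note that nothing in this pattern adds logos or rehosts content; the fight is over whether restyling YouTube’s pages at all is permitted by its guidelines.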

Google hasn’t made its own YouTube app for Vision Pro, though the company said in February such an app was “on our roadmap.” The company didn’t immediately respond to a request for comment.

Selig says that people who have already paid for the app should be able to keep using it for the time being, though there’s a chance a future YouTube update could end up bricking it.

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/google-allegedly-got-the-juno-youtube-app-removed-from-the-vision-pro-app-store-232155656.html?src=rss

Threads will show how many followers you have in the fediverse

Meta has been steadily improving Threads’ compatibility with the fediverse over the last year. Now, the company is taking another significant step with an update that allows users to see more details about their followers and interactions with people from other servers across the fediverse.

Up to now, Threads has surfaced replies from Mastodon and other servers, and has alerted users to likes on their posts from other fediverse apps. But there was no way for a Threads user to see details about their followers from those services. That’s now changing, Adam Mosseri explained in a post.

With the update, anyone who has opted in to fediverse sharing on Threads will be able to see a detailed list of their followers from other servers and view their profiles. This will give people on Threads a better sense of their reach and audience on Mastodon and other apps.
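Under the hood, fediverse follower lists are ordinary ActivityPub collections that any compatible server exposes. A rough sketch of how a client could look up an account and read its follower count over the open protocol (the handle is a placeholder, and this is not how Threads implements its new list internally; some servers also decline to publish totals):

```typescript
// Sketch: resolve a fediverse handle via WebFinger, then read the
// follower count from its ActivityPub followers collection.

const AP = "application/activity+json";

async function followerCount(handle: string): Promise<number> {
  const [, user, domain] = handle.match(/^@?([^@]+)@(.+)$/)!;

  // 1. WebFinger: resolve the handle to an actor URL.
  const wf = await fetch(
    `https://${domain}/.well-known/webfinger?resource=acct:${user}@${domain}`
  ).then((r) => r.json());
  const actorUrl = wf.links.find(
    (l: any) => l.rel === "self" && l.type === AP
  ).href;

  // 2. Fetch the actor document, which links to its followers collection.
  const actor = await fetch(actorUrl, { headers: { Accept: AP } }).then((r) =>
    r.json()
  );

  // 3. The collection's totalItems is the follower count.
  const followers = await fetch(actor.followers, {
    headers: { Accept: AP },
  }).then((r) => r.json());
  return followers.totalItems;
}

// Placeholder handle for illustration.
followerCount("someone@mastodon.social").then(console.log);
```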

Threads’ fediverse support is still somewhat limited overall. Users still can’t reply to replies that originate on apps outside of Threads, and there’s no way to search for people on other servers from Threads. There’s also still a delay in cross-posting: a post from Threads now takes 15 minutes to appear on other servers, since Meta has also expanded the edit window for posts.

Elsewhere, third-party developers are also making it easier for users who want to post on multiple decentralized services. A new app called Croissant enables cross-posting to Threads, Mastodon and Bluesky all at once. The paid app, first spotted by TechCrunch, aims to replicate the functionality of enterprise social media management apps like Buffer.
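Croissant’s code isn’t public, but cross-posting itself is straightforward against the networks’ public APIs: Mastodon accepts a bearer-token POST to /api/v1/statuses, and Bluesky accepts a createRecord call after creating a session. A minimal sketch under those assumptions (tokens, handles and the Mastodon server are placeholders, and Threads’ separate publishing API is omitted):

```typescript
// Minimal cross-posting sketch against Mastodon's and Bluesky's public
// APIs. Credentials and servers are placeholders; not Croissant's code.

async function postToMastodon(server: string, token: string, text: string) {
  await fetch(`https://${server}/api/v1/statuses`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ status: text }),
  });
}

async function postToBluesky(identifier: string, password: string, text: string) {
  // Create a session, then write an app.bsky.feed.post record.
  const session = await fetch(
    "https://bsky.social/xrpc/com.atproto.server.createSession",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ identifier, password }),
    }
  ).then((r) => r.json());

  await fetch("https://bsky.social/xrpc/com.atproto.repo.createRecord", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${session.accessJwt}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      repo: session.did,
      collection: "app.bsky.feed.post",
      record: {
        $type: "app.bsky.feed.post",
        text,
        createdAt: new Date().toISOString(),
      },
    }),
  });
}

async function main() {
  const text = "Hello from everywhere at once.";
  await Promise.all([
    postToMastodon("mastodon.social", "MASTODON_TOKEN", text),
    postToBluesky("you.bsky.social", "APP_PASSWORD", text),
  ]);
}

main();
```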

This article originally appeared on Engadget at https://www.engadget.com/social-media/threads-will-show-how-many-followers-you-have-in-the-fediverse-215441432.html?src=rss

Threads is adding location sharing to posts

Threads seems to be rolling out a new location tagging feature that allows users to add a location to their posts. Some users have reported seeing the change in the Threads app, though it doesn’t seem to be available to everyone just yet.

The feature is similar to location tagging on Instagram. When you give Threads access to your location, you’ll see a list of nearby places to tag, though you can also manually search for a place. For example, I saw that a few users already jokingly tagged their posts as “hell.”
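The mechanics of a “nearby places” picker are simple at their core: take the device’s coordinates and sort candidate places by great-circle distance. A sketch of that step (the place data would come from a places database, and Threads’ real ranking is not public; it almost certainly mixes in popularity and other signals beyond raw distance):

```typescript
// Sketch of a "nearby places to tag" list: sort candidates by haversine
// distance from the device. Not Threads' actual ranking logic.

interface Place {
  name: string;
  lat: number;
  lon: number;
}

// Great-circle distance in kilometers between two lat/lon points.
function haversineKm(aLat: number, aLon: number, bLat: number, bLon: number) {
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(bLat - aLat);
  const dLon = rad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(aLat)) * Math.cos(rad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

function nearbyPlaces(lat: number, lon: number, places: Place[], k = 5) {
  return [...places]
    .sort(
      (a, b) =>
        haversineKm(lat, lon, a.lat, a.lon) - haversineKm(lat, lon, b.lat, b.lon)
    )
    .slice(0, k);
}
```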

According to an in-app disclaimer from Meta, the company plans to use location sharing to better customize Threads by showing “personalized content” about places nearby. The change could help improve Threads’ search functionality, which still often falls short, and make the app slightly more useful for following breaking news and other timely events.

[Image: Meta’s in-app disclaimer for location sharing in Threads. Credit: Threads]

The change could also come in handy in the future when Meta finally flips the switch on advertising in Threads. Mark Zuckerberg has said the company plans to continue growing the service before bringing ads to the platform, but getting users’ consent to share their locations would provide a crucial bit of information for the company’s ad machine.

Meta didn’t immediately respond to questions about the feature, but the company appears to still be rolling it out. Location sharing appeared for me in the Threads app, but then disappeared about an hour later. It doesn’t seem to be visible at all yet on the web version of Threads.

This article originally appeared on Engadget at https://www.engadget.com/social-media/threads-is-adding-location-location-sharing-to-posts-224114320.html?src=rss

Meta’s Orion holographic avatars will (eventually) be in VR too

The biggest reveal at Meta’s Connect event was its long-promised AR glasses, Orion. As expected, the prototype glasses, which reportedly cost around $10,000 each, won’t be ready for the public any time soon.

In the meantime, Meta offered a glimpse of its new holographic avatars, which will allow people to talk with lifelike holograms in augmented reality. The holograms are Meta’s Codec Avatars, a technology it’s been working on for several years. Mark Zuckerberg teased a version of this last year when he participated in a podcast interview “in the metaverse.”

That technology may now be closer than we think. Following the keynote at Connect, I sat down with Mark Rabkin, a VP at Meta leading Horizon OS and Quest, who shared more about Meta’s codec avatars and how they will one day come to the company’s VR headsets as well.

“Generally, pretty much everything you can do on Orion you can do on Quest,” Rabkin said. The Codec Avatars in particular have also gotten much easier to create. While they once required advanced camera scans, most of the internal avatars are now created with phone scans, Rabkin explains.

“It’s an almost identical process in many ways in generating the stylized avatars [for VR], but with a different training set and a different amount of computation required,” Rabkin explained. “For the stylized avatars, the model has to be trained on a lot of stylized avatars and how they look and how they move. [It has to] get a lot of training data on what people perceive to look like their picture, and what they perceive to move nicely.”

“For the Codec avatars ... it's the same process. You gather a tremendous amount of data. You gather data from very high-quality, fancy camera scans. You gather data from phone scans, because that's how people will be really creating, and you just build a model until it improves. And one of the challenges with both problems is to make it fast enough and computationally cheap enough so that millions and millions can use it.”

Rabkin said that he eventually expects these avatars to work in virtual reality on the company’s headsets. Right now, the Quest 3 and 3S don’t have the sensors, including eye tracking, that the photorealistic avatars require. But that could change with the next-generation VR headset, he said: “I think probably, if we do really well, it should be possible in the next generation [of headset].”

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/metas-orion-holographic-avatars-will-eventually-be-in-vr-too-235206805.html?src=rss

Meta’s Ray-Ban branded smart glasses are getting AI-powered reminders and translation features

Meta’s AI assistant has always been the most intriguing feature of its second-generation Ray-Ban smart glasses. While the generative AI assistant had fairly limited capabilities when the glasses launched last fall, the addition of real-time information and multimodal capabilities offered a range of new possibilities for the accessory.

Now, Meta is significantly upgrading the Ray-Ban Meta smart glasses’ AI powers. The company showed off a number of new abilities for the year-old frames onstage at its Connect event, including reminders and live translations.

With reminders, you’ll be able to look at items in your surroundings and ask Meta AI to remind you about them. For example: “Hey Meta, remind me to buy that book next Monday.” The glasses will also be able to scan QR codes and call a phone number written in front of you.
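Resolving a phrase like “next Monday” into a concrete trigger time is the unglamorous half of that feature. Meta hasn’t described its parser, but a sketch of just the date step might look like this (it handles only this one phrase shape):

```typescript
// Sketch: resolve "next <weekday>" into a concrete Date for a reminder.
// Not Meta's parser; handles only this one phrase shape.

const DAYS = ["sunday", "monday", "tuesday", "wednesday", "thursday", "friday", "saturday"];

function nextWeekday(dayName: string, from = new Date()): Date {
  const target = DAYS.indexOf(dayName.toLowerCase());
  if (target < 0) throw new Error(`unknown day: ${dayName}`);
  // "Next <day>" = the first matching day strictly after today.
  const delta = ((target - from.getDay() + 6) % 7) + 1;
  const result = new Date(from);
  result.setDate(from.getDate() + delta);
  return result;
}

// "remind me to buy that book next Monday"
console.log(nextWeekday("Monday").toDateString());
```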

In addition, Meta is adding video support to Meta AI so that the glasses will be better able to scan your surroundings and respond to queries about what’s around you. There are other, more subtle improvements. Previously, you had to start a command with “Hey Meta, look and tell me” in order to get the glasses to respond to a command based on what you were looking at. With the update, though, Meta AI will be able to respond to queries about what’s in front of you with more natural requests. In a demo with Meta, I was able to ask several questions and follow-ups using prompts like “Hey Meta, what am I looking at” or “Hey Meta, tell me about what I’m looking at.”

When I tried out Meta AI’s multimodal capabilities on the glasses last year, I found that Meta AI was able to translate some snippets of text but struggled with anything more than a few words. Now, Meta AI should be able to translate longer chunks of text. And later this year the company is adding live translation abilities for English, French, Italian and Spanish, which could make the glasses even more useful as a travel accessory.

And while I still haven’t fully tested Meta AI’s new capabilities on its smart glasses just yet, it already seems to have a better grasp of real-time information than what I found last year. During a demo with Meta, I asked Meta AI to tell me who is the Speaker of the House of Representatives — a question it repeatedly got wrong last year — and it answered correctly the first time.

This article originally appeared on Engadget at https://www.engadget.com/wearables/metas-ray-ban-branded-smart-glasses-are-getting-ai-powered-reminders-and-translation-features-173921120.html?src=rss
