Threads is adding location sharing to posts

Threads appears to be rolling out a new location tagging feature that lets users add a location to their posts. Some users have reported seeing the change in the Threads app, though it doesn’t seem to be available to everyone just yet.

The feature is similar to location tagging on Instagram. When you give Threads access to your location, you’ll see a list of nearby places to tag, though you can also manually search for a place. For example, I saw that a few users already jokingly tagged their posts as “hell.”

According to an in-app disclaimer from Meta, the company plans to use location sharing to better customize Threads by showing “personalized content” about places nearby. The change could help improve Threads’ search functionality, which still often falls short, and make the app slightly more useful for following breaking news and other timely events.

Image: Meta’s in-app disclaimer for location sharing in Threads. (Threads)

The change could also come in handy in the future when Meta finally flips the switch on advertising in Threads. Mark Zuckerberg has said the company plans to continue growing the service before bringing ads to the platform, but getting users’ consent to share their locations would provide a crucial bit of information for the company’s ad machine.

Meta didn’t immediately respond to questions about the feature, but the company appears to still be rolling it out. Location sharing appeared for me in the Threads app, but then disappeared about an hour later. It doesn’t seem to be visible at all yet on the web version of Threads.

This article originally appeared on Engadget at https://www.engadget.com/social-media/threads-is-adding-location-location-sharing-to-posts-224114320.html?src=rss

Meta’s Orion holographic avatars will (eventually) be in VR too

The biggest reveal at Meta’s Connect event was its long-promised AR glasses, Orion. As expected, the prototype, each unit of which reportedly costs around $10,000, won’t be ready for the public any time soon.

In the meantime, Meta offered a glimpse of its new holographic avatars, which will allow people to talk with lifelike holograms in augmented reality. The holograms are Meta’s Codec Avatars, a technology it’s been working on for several years. Mark Zuckerberg teased a version of this last year when he participated in a podcast interview “in the metaverse.”

That technology may now be closer than we think. Following the keynote at Connect, I sat down with Mark Rabkin, a VP at Meta leading Horizon OS and Quest, who shared more about Meta’s Codec Avatars and how they will one day come to the company’s VR headsets as well.

“Generally, pretty much everything you can do on Orion you can do on Quest,” Rabkin said. The Codec Avatars in particular have also gotten much easier to create. While they once required advanced camera scans, most of the internal avatars are now created with phone scans, Rabkin explained.

“It’s an almost identical process in many ways in generating the stylized avatars [for VR], but with a different training set and a different amount of computation required,” Rabkin explained. “For the stylized avatars, the model has to be trained on a lot of stylized avatars and how they look and how they move. [It has to] get a lot of training data on what people perceive to look like their picture, and what they perceive to move nicely.”

“For the Codec avatars ... it's the same process. You gather a tremendous amount of data. You gather data from very high-quality, fancy camera scans. You gather data from phone scans, because that's how people will be really creating, and you just build a model until it improves. And one of the challenges with both problems is to make it fast enough and computationally cheap enough so that millions and millions can use it.”

Rabkin said he eventually expects these avatars to work in virtual reality on the company’s headsets. Right now, the Quest 3 and 3S lack the sensors, including eye tracking, necessary for the photorealistic avatars. But that could change for the next-generation VR headset, he said: “I think probably, if we do really well, it should be possible in the next generation [of headset].”

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/metas-orion-holographic-avatars-will-eventually-be-in-vr-too-235206805.html?src=rss

Meta’s Ray-Ban branded smart glasses are getting AI-powered reminders and translation features

Meta’s AI assistant has always been the most intriguing feature of its second-generation Ray-Ban smart glasses. While the generative AI assistant had fairly limited capabilities when the glasses launched last fall, the addition of real-time information and multimodal capabilities offered a range of new possibilities for the accessory.

Now, Meta is significantly upgrading the Ray-Ban Meta smart glasses’ AI powers. The company showed off a number of new abilities for the year-old frames onstage at its Connect event, including reminders and live translations.

With reminders, you’ll be able to look at an item in your surroundings and ask Meta AI to send a reminder about it. For example, “hey Meta, remind me to buy that book next Monday.” The glasses will also be able to scan QR codes and call a phone number written in front of you.

In addition, Meta is adding video support to Meta AI so that the glasses will be better able to scan your surroundings and respond to queries about what’s around you. There are other, more subtle improvements. Previously, you had to start a command with “Hey Meta, look and tell me” in order to get the glasses to respond based on what you were looking at. With the update, though, Meta AI will respond to more natural requests about what’s in front of you. In a demo with Meta, I was able to ask questions and follow-ups like “hey Meta, what am I looking at” or “hey Meta, tell me about what I’m looking at.”

When I tried out Meta AI’s multimodal capabilities on the glasses last year, I found that Meta AI was able to translate some snippets of text but struggled with anything more than a few words. Now, Meta AI should be able to translate longer chunks of text. And later this year the company is adding live translation abilities for English, French, Italian and Spanish, which could make the glasses even more useful as a travel accessory.

And while I haven’t fully tested Meta AI’s new capabilities on the smart glasses just yet, it already seems to have a better grasp of real-time information than what I found last year. During a demo with Meta, I asked Meta AI to tell me who the Speaker of the House of Representatives is — a question it repeatedly got wrong last year — and it answered correctly the first time.

Catch up on all the news from Meta Connect 2024!

This article originally appeared on Engadget at https://www.engadget.com/wearables/metas-ray-ban-branded-smart-glasses-are-getting-ai-powered-reminders-and-translation-features-173921120.html?src=rss

Meta AI can now talk to you and edit your photos

Over the last year, Meta has made its AI assistant so ubiquitous in its apps it’s almost hard to believe that Meta AI is only a year old. But, one year after its launch at the last Connect, the company is infusing Meta AI with a load of new features in the hopes that more people will find its assistant useful.

One of the biggest changes is that users will be able to have voice chats with Meta AI. Up till now, the only way to speak with Meta AI was via the Ray-Ban Meta smart glasses. And like last year’s Meta AI launch, the company tapped a group of celebrities for the change.

Meta AI will be able to take on the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell, in addition to a handful of more generic voices. While the company is hoping the celebrities will sell users on Meta AI’s new abilities, it’s worth noting that Meta quietly phased out the celebrity chatbot personas that launched at last year’s Connect.

In addition to voice chat support, Meta AI is also getting new image capabilities. Meta AI will be able to respond to requests to change and edit photos from text chats within Instagram, Messenger and WhatsApp. The company says that users can ask the AI to add or remove objects or to change elements of an image, like swapping a background or clothing item.

Image: Meta is testing AI-generated content recommendations in the main feed of Facebook and Instagram. (Meta)

The new abilities arrive alongside the company’s latest Llama 3.2 model. The new iteration, which comes barely two months after the Llama 3.1 release, is the first to have vision capabilities and can “bridge the gap between vision and language by extracting details from an image, understanding the scene, and then crafting a sentence or two that could be used as an image caption to help tell the story.” Llama 3.2 is “competitive” on “image recognition and a range of visual understanding tasks” compared with similar offerings from ChatGPT and Claude, Meta says.

The social network is testing other, potentially controversial, ways to bring AI into the core features of its main apps. The company will test AI-generated translation features for Reels with “automatic dubbing and lip syncing.” According to Meta, that “will simulate the speaker’s voice in another language and sync their lips to match.” It will arrive first to “some creators’ videos” in English and Spanish in the US and Latin America, though the company hasn't shared details on rollout timing.

Meta also plans to experiment with AI-generated content directly in the main feeds on Facebook and Instagram. With the test, Meta AI will surface AI-generated images that are meant to be personalized to each user’s interests and past activity. For example, Meta AI could surface an image “imagined for you” that features your face.

Catch up on all the news from Meta Connect 2024!

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-ai-can-now-talk-to-you-and-edit-your-photos-172853219.html?src=rss

X just released its first full transparency report since Elon Musk took over

X has published its most detailed accounting of its content moderation practices since Elon Musk’s takeover of the company. The report, X’s first in more than a year, provides new insight into how X is enforcing its rules as it struggles to hang on to advertisers who have raised concerns about toxicity on the platform.

The report, which details content takedowns and account suspensions from the first half of 2024, shows that suspensions have more than tripled since the last time the company shared data. X suspended just under 5.3 million accounts during the period, compared with 1.6 million suspensions during the first six months of 2022.

In addition to the suspensions, X says it “removed or labeled” more than 10.6 million posts for violating its rules. Violations of the company’s hateful conduct policy accounted for nearly half of that number, with X taking action on 4.9 million such posts. Posts containing abuse and harassment (2.6 million) and violent content (2.2 million) also accounted for a significant percentage of the takedowns and labels.

While these numbers don’t tell a complete story about the state of content on X — the company doesn’t distinguish between posts it removes and those it labels, for example — they show that hateful, abusive and violent content is among the biggest issues facing the platform. Those are also the same issues numerous advertisers and civil rights groups have raised concerns about since Musk’s takeover of the company. In the report, X claims that rule-breaking content accounted for less than 1 percent of all posts shared on the platform.

Image: Numbers shared by X. (X)

The numbers also suggest there have been significant increases in this type of content since Twitter last shared numbers prior to Musk’s takeover. For example, in the last half of 2021, the last time Twitter shared such data, the company reported it suspended about 1.3 million accounts for terms of service violations and “actioned” about 4.3 million.

X previously published an abbreviated report in a 383-word blog post last April, which shared some stats on content takedowns, but offered almost no details on government requests for information or post removals. The new report is a significant improvement on that front. It says that X received 18,737 government requests for information, with the majority of the requests coming from within the EU and a reported disclosure rate of 53 percent. X also received 72,703 requests from governments to remove content from its platform. The company says it took action in just over 70 percent of cases. Japan accounted for the vast majority of those requests (46,648), followed by Turkey (9,364).

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-just-released-its-first-full-transparency-report-since-elon-musk-took-over-110038194.html?src=rss
