Meta just announced an intriguing tool that uses AI to automatically dub Reels into other languages, complete with lip-sync. CEO Mark Zuckerberg introduced the feature during the company's annual Meta Connect livestream event.
Zuckerberg showed the feature off during the keynote, and everything seemed to work flawlessly. According to Meta, the technology not only translates the content but will also “simulate the speaker’s voice in another language and sync their lips to match.” It’s worth noting that this didn’t appear to be a live demo, but it was still pretty impressive.
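Meta hasn't shared any implementation details, but automated dubbing systems of this kind generally chain a few well-understood stages: transcription, translation, voice-cloned speech synthesis and visual lip sync. Here's a purely hypothetical sketch of that flow in Python; every function below is a placeholder of our own naming, not a real Meta API.

```python
# Hypothetical sketch of an AI dubbing pipeline like the one Meta describes.
# Every helper here is a placeholder stub, not a real Meta (or any other) API.

def transcribe(audio: bytes) -> str:
    """Stage 1: speech-to-text on the creator's original audio track."""
    raise NotImplementedError

def translate(text: str, target_lang: str) -> str:
    """Stage 2: machine-translate the transcript."""
    raise NotImplementedError

def synthesize(text: str, voice_reference: bytes) -> bytes:
    """Stage 3: generate target-language speech that mimics the original voice."""
    raise NotImplementedError

def lip_sync(video: bytes, dubbed_audio: bytes) -> bytes:
    """Stage 4: re-render the mouth region to match the new phonemes."""
    raise NotImplementedError

def dub_reel(video: bytes, audio: bytes, target_lang: str) -> bytes:
    transcript = transcribe(audio)
    translated = translate(transcript, target_lang)
    dubbed = synthesize(translated, voice_reference=audio)
    return lip_sync(video, dubbed)
```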
As for a rollout, the company says the feature will arrive first on “some creators’ videos” in English and Spanish in the US and Latin America, meaning it’ll be tied to those two languages at launch. Meta didn’t give a timetable, but it did say more languages are coming soon.
That wasn’t the only AI tool spotlighted during Meta Connect. The company’s AI platform will now allow voice chats, with a selection of celebrity voices to choose from. Meta AI is also getting new image capabilities, as it will be able to change and edit photos based on instructions from text chats within Instagram, Messenger and WhatsApp.
Alongside the Quest 3S and AI updates, we got a glimpse of Meta's future at Meta Connect. After teasing the device several times in recent months, the company finally gave the world a proper look at its "full holographic" augmented reality glasses, which it's currently calling Orion. Meta is packing a lot of tech into those chunky frames, which aren't coming to market just yet.
The company first revealed five years ago that it was developing holographic smart glasses, but it has actually been working on the project for a decade. It claims that this is "the most advanced pair of AR glasses ever made" and the result of "breakthrough inventions in virtually every field of modern computing." For one thing, it uses itty-bitty projectors to beam holograms onto the lenses.
These glasses appear far less cumbersome to wear than previous mainstream AR products such as Magic Leap, Microsoft's HoloLens and even Google Glass. They also don't block you off from the rest of the world like a virtual reality headset (though Meta's headsets do allow you to see what's around you via the onboard cameras). As a result, you can see wearers' full faces, eyes and expressions without having to resort to a weird, eerie workaround like Apple is doing with EyeSight on the Vision Pro.
Meta said Orion is lightweight and works both indoors and outdoors. The company claims that the glasses allow for "digital experiences that are unconstrained by the limits of a smartphone screen" as they overlay holographic elements on top of the real world. In addition, Meta said Orion integrates contextual AI to help you gain a better understanding of the world around you.
The company added that you'll be able to look inside a fridge with the glasses on and get Meta AI to come up with a recipe based on what you have. You should be able to hop onto video calls via Orion and view and send messages on Messenger and WhatsApp. Based on images that Meta shared, there will also be holographic versions of various other apps, such as Spotify, YouTube and Pinterest.
There's one key reason why Meta has been able to keep Orion lightweight: not all of the required tech is actually in the frame of the glasses. Orion comes with a required wireless puck that handles much of the processing and beams apps and content to the device. There's also a bracelet that you'll need to wear for gesture control.
You're likely going to have to wait a few years to get your hands on this device (or at least a version of it). For the time being, Meta employees and "select external audiences" are able to use Orion. That's in order to help the company learn more and iterate on the product as it works toward a consumer version of the AR glasses.
Still, Meta claims that Orion is not just a research prototype but is instead "one of the most polished product prototypes we’ve ever developed, and is truly representative of something that could ship to consumers." By continuing to work on the product internally, "we can keep building quickly and continue to push the boundaries of the technology, helping us arrive at an even better consumer product faster," the company said. Part of that iteration includes bringing down the price of the glasses to make them more affordable, according to Meta CEO Mark Zuckerberg.
A roadmap that leaked last year indicated that Meta planned to release its first consumer AR glasses in 2027, though the company says it's aiming to do so "in the near future." As it happens, Snap also recently debuted its fifth-gen AR Spectacles, but for now those are only available to developers who are willing to pay a monthly $99 fee.
Meta’s AI assistant has always been the most intriguing feature of its second-generation Ray-Ban smart glasses. While the generative AI assistant had fairly limited capabilities when the glasses launched last fall, the addition of real-time information and multimodal capabilities offered a range of new possibilities for the accessory.
Now, Meta is significantly upgrading the Ray-Ban Meta smart glasses’ AI powers. The company showed off a number of new abilities for the year-old frames onstage at its Connect event, including reminders and live translations.
With reminders, you’ll be able to look at items in your surroundings and ask Meta AI to remind you about them. For example: “hey Meta, remind me to buy that book next Monday.” The glasses will also be able to scan QR codes and call a phone number written in front of you.
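For a concrete sense of one piece of this, QR scanning is a standard computer-vision task. Here's a minimal sketch using OpenCV's built-in detector; this illustrates the general technique, not Meta's actual implementation, and "frame.jpg" is a stand-in for a frame from the glasses' camera.

```python
# A minimal QR-decoding sketch with OpenCV (illustrative, not Meta's code).
import cv2

frame = cv2.imread("frame.jpg")  # stand-in for a camera frame
detector = cv2.QRCodeDetector()
payload, points, _ = detector.detectAndDecode(frame)

if points is not None and payload:
    print(f"QR payload: {payload}")  # e.g., a URL to open or a number to dial
else:
    print("No QR code found in frame")
```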
In addition, Meta is adding video support to Meta AI so that the glasses will be better able to scan your surroundings and respond to queries about what’s around you. There are other, more subtle improvements. Previously, you had to start a command with “Hey Meta, look and tell me” to get the glasses to respond based on what you were looking at. With the update, though, Meta AI will respond to more naturally phrased queries about what’s in front of you. In a demo with Meta, I was able to ask several questions and follow-ups using prompts like “hey Meta, what am I looking at” or “hey Meta, tell me about what I’m looking at.”
When I tried out Meta AI’s multimodal capabilities on the glasses last year, I found that Meta AI was able to translate some snippets of text but struggled with anything more than a few words. Now, Meta AI should be able to translate longer chunks of text. And later this year the company is adding live translation abilities for English, French, Italian and Spanish, which could make the glasses even more useful as a travel accessory.
And while I haven’t fully tested Meta AI’s new capabilities on the smart glasses just yet, it already seems to have a better grasp of real-time information than it did last year. During a demo with Meta, I asked Meta AI who the current Speaker of the House of Representatives is — a question it repeatedly got wrong last year — and it answered correctly the first time.
Meta just revealed the budget-friendly Quest 3S VR headset at its annual Connect keynote event, but it also made a sad announcement about some of its previous headsets. The company will stop selling both the Quest 2 and the Quest Pro by the end of the year.
“With Quest 3S on the shelf, we’re officially winding down sales of Quest 2 and Pro. We’ll be selling our remaining headsets through the end of the year or until they’re gone, whichever comes first,” the company wrote in a blog post that also announced the pending launch of the Quest 3S.
The company will be selling Quest 2 and Pro accessories for “a bit longer” after the stock of headsets runs out. This includes the carrying case, the Touch Pro controllers and bundles like the Quest 2 Active Pack. Meta recently lowered the price of the Quest 2 to $200, and it’s still a decent headset for beginners. The Quest 3S is better in every way, but it starts at $300, while the standard Quest 3 costs $500.
It’s the end of an era for the Quest 2. This was a hugely successful headset, as it launched during the dog days of COVID-19. For many, it became a crucial item to survive endless isolation, along with stuff like Zoom and Animal Crossing: New Horizons.
It’s the end of an error (see what I did there?) for the Quest Pro. This headset never caught on, likely because it was originally priced at $1,500 before being quickly lowered to $1,000. It still costs a grand from Meta, but can typically be found for around $900 via Amazon and other retailers.
As they say, out with the old and in with the new. The Quest 3S is, essentially, the new Quest 2. It starts at $300, boasts the same CPU as the original Quest 3 and handles full-color passthrough.
Over the last year, Meta has made its AI assistant so ubiquitous in its apps that it’s almost hard to believe Meta AI is only a year old. But one year after its launch at the last Connect, the company is infusing Meta AI with a load of new features in the hopes that more people will find its assistant useful.
One of the biggest changes is that users will be able to have voice chats with Meta AI. Until now, the only way to speak with Meta AI was via the Ray-Ban Meta smart glasses. And as with last year’s Meta AI launch, the company has tapped a group of celebrities for the occasion.
Meta AI will be able to take on the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell, in addition to a handful of more generic voices. While the company is hoping the celebrities will sell users on Meta AI’s new abilities, it’s worth noting that Meta quietly phased out the celebrity chatbot personas that launched at last year’s Connect.
In addition to voice chat support, Meta AI is also getting new image capabilities. Meta AI will be able to respond to requests to change and edit photos from text chats within Instagram, Messenger and WhatsApp. The company says that users can ask the AI to add or remove objects or to change elements of an image, like swapping a background or clothing item.
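Meta didn't say which model powers these edits, but instruction-driven photo editing of this kind can be sketched with an open model like InstructPix2Pix through Hugging Face's diffusers library. This is an assumption for illustration, not Meta's actual stack:

```python
# Illustrative only: instruction-based photo editing with an open model
# (InstructPix2Pix via diffusers), not the model Meta actually uses.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.jpg").convert("RGB")
edited = pipe(
    "swap the background for a beach at sunset",  # the chat-style instruction
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # how closely to preserve the original photo
).images[0]
edited.save("edited.jpg")
```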
Meta is testing AI-generated content recommendations in the main feed of Facebook and Instagram.
The new abilities arrive alongside the company’s latest Llama 3.2 model. The new iteration, which comes barely two months after the Llama 3.1 release, is the first to have vision capabilities and can “bridge the gap between vision and language by extracting details from an image, understanding the scene, and then crafting a sentence or two that could be used as an image caption to help tell the story.” Llama 3.2 is “competitive” on “image recognition and a range of visual understanding tasks” compared with similar offerings from ChatGPT and Claude, Meta says.
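For the curious, here's roughly what exercising Llama 3.2's vision capability looks like through Hugging Face Transformers. The model ID below is an assumption based on Meta's open releases, and the hosted weights are gated behind Meta's license:

```python
# A minimal captioning sketch with a Llama 3.2 Vision model via Transformers.
# The model ID is an assumption; access requires accepting Meta's license.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("photo.jpg")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Write a one-sentence caption for this image."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(output[0], skip_special_tokens=True))
```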
The social network is testing other, potentially controversial, ways to bring AI into the core features of its main apps. The company will test AI-generated translation features for Reels with “automatic dubbing and lip syncing.” According to Meta, that “will simulate the speaker’s voice in another language and sync their lips to match.” It will arrive first to “some creators’ videos” in English and Spanish in the US and Latin America, though the company hasn't shared details on rollout timing.
Meta also plans to experiment with AI-generated content directly in the main feeds on Facebook and Instagram. With the test, Meta AI will surface AI-generated images that are meant to be personalized to each user’s interests and past activity. For example, Meta AI could surface an image “imagined for you” that features your face.