Meta’s Orion holographic avatars will (eventually) be in VR too

The biggest reveal at Meta’s Connect event was its long-promised AR glasses, Orion. As expected, the prototypes, each of which reportedly costs around $10,000, won’t be ready for the public any time soon.

In the meantime, Meta offered a glimpse of its new holographic avatars, which will allow people to talk with lifelike holograms in augmented reality. The holograms are Meta’s Codec Avatars, a technology it’s been working on for several years. Mark Zuckerberg teased a version of this last year when he participated in a podcast interview “in the metaverse.”

That technology may now be closer than we think. Following the keynote at Connect, I sat down with Mark Rabkin, a VP at Meta leading Horizon OS and Quest, who shared more about Meta’s codec avatars and how they will one day come to the company’s VR headsets as well.

“Generally, pretty much everything you can do on Orion you can do on Quest,” Rabkin said. The Codec Avatars in particular have also gotten much easier to create: while they once required advanced camera scans, most of the internal avatars are now created with phone scans, Rabkin explained.

“It’s an almost identical process in many ways in generating the stylized avatars [for VR], but with a different training set and a different amount of computation required,” Rabkin explained. “For the stylized avatars, the model has to be trained on a lot of stylized avatars and how they look and how they move. [It has to] get a lot of training data on what people perceive to look like their picture, and what they perceive to move nicely.”

“For the Codec avatars ... it's the same process. You gather a tremendous amount of data. You gather data from very high-quality, fancy camera scans. You gather data from phone scans, because that's how people will be really creating, and you just build a model until it improves. And one of the challenges with both problems is to make it fast enough and computationally cheap enough so that millions and millions can use it.”

Rabkin said he eventually expects these avatars to come to virtual reality on the company’s headsets. Right now, the Quest 3 and 3S lack the sensors, including eye tracking, necessary for the photorealistic avatars. But that could change with the next-generation VR headset, he said: “I think probably, if we do really well, it should be possible in the next generation [of headset].”

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/metas-orion-holographic-avatars-will-eventually-be-in-vr-too-235206805.html?src=rss

Meta’s Ray-Ban branded smart glasses are getting AI-powered reminders and translation features

Meta’s AI assistant has always been the most intriguing feature of its second-generation Ray-Ban smart glasses. While the generative AI assistant had fairly limited capabilities when the glasses launched last fall, the addition of real-time information and multimodal capabilities offered a range of new possibilities for the accessory.

Now, Meta is significantly upgrading the Ray-Ban Meta smart glasses’ AI powers. The company showed off a number of new abilities for the year-old frames onstage at its Connect event, including reminders and live translations.

With reminders, you’ll be able to look at an item in your surroundings and ask Meta AI to send a reminder about it. For example, “hey Meta, remind me to buy that book next Monday.” The glasses will also be able to scan QR codes and call a phone number written in front of you.

In addition, Meta is adding video support to Meta AI so that the glasses will be better able to scan your surroundings and respond to queries about what’s around you. There are other more subtle improvements. Previously, you had to start a command with “Hey Meta, look and tell me” in order to get the glasses to respond to a command based on what you were looking at. With the update, though, Meta AI will respond to more natural requests about what’s in front of you. In a demo with Meta, I was able to ask several questions and follow-ups, like “hey Meta, what am I looking at” or “hey Meta, tell me about what I’m looking at.”

When I tried out Meta AI’s multimodal capabilities on the glasses last year, I found that Meta AI was able to translate some snippets of text but struggled with anything more than a few words. Now, Meta AI should be able to translate longer chunks of text. And later this year the company is adding live translation abilities for English, French, Italian and Spanish, which could make the glasses even more useful as a travel accessory.

And while I still haven’t fully tested Meta AI’s new capabilities on its smart glasses just yet, it already seems to have a better grasp of real-time information than what I found last year. During a demo with Meta, I asked Meta AI who the Speaker of the House of Representatives is — a question it repeatedly got wrong last year — and it answered correctly the first time.

Catch up on all the news from Meta Connect 2024!

This article originally appeared on Engadget at https://www.engadget.com/wearables/metas-ray-ban-branded-smart-glasses-are-getting-ai-powered-reminders-and-translation-features-173921120.html?src=rss

Meta AI can now talk to you and edit your photos

Over the last year, Meta has made its AI assistant so ubiquitous in its apps it’s almost hard to believe that Meta AI is only a year old. But, one year after its launch at the last Connect, the company is infusing Meta AI with a load of new features in the hopes that more people will find its assistant useful.

One of the biggest changes is that users will be able to have voice chats with Meta AI. Up till now, the only way to speak with Meta AI was via the Ray-Ban Meta smart glasses. And like last year’s Meta AI launch, the company tapped a group of celebrities for the change.

Meta AI will be able to take on the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan Michael Key and Kristen Bell, in addition to a handful of more generic voices. While the company is hoping the celebrities will sell users on Meta AI’s new abilities, it’s worth noting that the company quietly phased out its celebrity chatbot personas that launched at last year’s Connect.

In addition to voice chat support, Meta AI is also getting new image capabilities. Meta AI will be able to respond to requests to change and edit photos from text chats within Instagram, Messenger and WhatsApp. The company says that users can ask the AI to add or remove objects or to change elements of an image, like swapping a background or clothing item.

Meta is testing AI-generated content recommendations in the main feed of Facebook and Instagram.
Meta

The new abilities arrive alongside the company’s latest Llama 3.2 model. The new iteration, which comes barely two months after the Llama 3.1 release, is the first to have vision capabilities and can “bridge the gap between vision and language by extracting details from an image, understanding the scene, and then crafting a sentence or two that could be used as an image caption to help tell the story.” Llama 3.2 is “competitive” on “image recognition and a range of visual understanding tasks” compared with similar offerings from ChatGPT and Claude, Meta says.

The social network is testing other, potentially controversial, ways to bring AI into the core features of its main apps. The company will test AI-generated translation features for Reels with “automatic dubbing and lip syncing.” According to Meta, that “will simulate the speaker’s voice in another language and sync their lips to match.” It will arrive first to “some creators’ videos” in English and Spanish in the US and Latin America, though the company hasn't shared details on rollout timing.

Meta also plans to experiment with AI-generated content directly in the main feeds on Facebook and Instagram. With the test, Meta AI will surface AI-generated images that are meant to be personalized to each user’s interests and past activity. For example, Meta AI could surface an image “imagined for you” that features your face.

Catch up on all the news from Meta Connect 2024!

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-ai-can-now-talk-to-you-and-edit-your-photos-172853219.html?src=rss

X just released its first full transparency report since Elon Musk took over

X has published its most detailed accounting of its content moderation practices since Elon Musk’s takeover of the company. The report, X’s first in more than a year, provides new insight into how X is enforcing its rules as it struggles to hang on to advertisers who have raised concerns about toxicity on the platform.

The report, which details content takedowns and account suspensions from the first half of 2024, shows that suspensions have more than tripled since the last time the company shared data. X suspended just under 5.3 million accounts during the period, compared with 1.6 million suspensions during the first six months of 2022.

In addition to the suspensions, X says it “removed or labeled” more than 10.6 million posts for violating its rules. Violations of the company’s hateful conduct policy accounted for nearly half of that number, with X taking action on 4.9 million such posts. Posts containing abuse and harassment (2.6 million) and violent content (2.2 million) also accounted for a significant percentage of the takedowns and labels.

While these numbers don’t tell a complete story about the state of content on X — the company doesn’t distinguish between posts it removes and those that it labels, for example — they show that hateful, abusive and violent content are among the biggest issues facing the platform. Those are also the same issues numerous advertisers and civil rights groups have raised concerns about since Musk’s takeover of the company. In the report, X claims that rule-breaking content accounted for less than 1 percent of all posts shared on the platform.

Numbers shared by X.
X

The numbers also suggest there have been significant increases in this type of content since Twitter last shared numbers prior to Musk’s takeover. For example, in the last half of 2021, the last time Twitter shared such data, the company reported it suspended about 1.3 million accounts for terms of service violations and “actioned” about 4.3 million.

X previously published an abbreviated report in a 383-word blog post last April, which shared some stats on content takedowns, but offered almost no details on government requests for information or post removals. The new report is a significant improvement on that front. It says that X received 18,737 government requests for information, with the majority of the requests coming from within the EU and a reported disclosure rate of 53 percent. X also received 72,703 requests from governments to remove content from its platform. The company says it took action in just over 70 percent of cases. Japan accounted for the vast majority of those requests (46,648), followed by Turkey (9,364).

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-just-released-its-first-full-transparency-report-since-elon-musk-took-over-110038194.html?src=rss

Qualcomm is reportedly eyeing a takeover of Intel

It seems that Qualcomm sees Intel’s struggling business as a potential opportunity. The San Diego-based chipmaker has reportedly expressed an interest in taking over Intel “in recent days,” according to a new report in The Wall Street Journal.

Though the report cautions that such a deal is “far from certain,” it would be a major upheaval in the US chip industry. It would also, as The WSJ notes, likely raise antitrust questions. But Qualcomm’s reported interest in a takeover underscores just how much Intel’s business has struggled over the last year.

Intel announced plans to cut 15,000 jobs last month as its quarterly losses climbed to $1.6 billion. Its foundry business is also struggling, with an operating loss of $2.8 billion last quarter. CEO Pat Gelsinger announced plans earlier this week to restructure the foundry business as a separate unit from the rest of Intel.

Intel declined to comment on the report. Qualcomm didn’t immediately respond to a request for comment.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/qualcomm-is-reportedly-eyeing-a-takeover-of-intel-210920969.html?src=rss

Cards Against Humanity is suing SpaceX for trespassing and filling its property with ‘space garbage’

Cards Against Humanity is the latest entity to take on Elon Musk in court. The irreverent party game company filed a $15 million lawsuit against SpaceX for trespassing on property it owns in Texas, which happens to sit near SpaceX facilities.

According to a lawsuit filed in a federal court in Texas, Musk's rocket company has been using CAH’s land without permission for the past six months. SpaceX took what was previously a “pristine” plot of land “and completely fucked that land with gravel, tractors, and space garbage,” CAH wrote in a statement.

As you might expect from the card game company known for its raunchy sense of humor and headline-grabbing stunts, there’s an amusing backstory to how it became neighbors with SpaceX in Texas in the first place. In 2017, the company bought land along the US-Mexico border as part of a crowdfunded effort to protest then-President Donald Trump’s plan to build a border wall. Since then, the company writes, it has maintained the land with regular mowing, fencing and “no trespassing” signs.

SpaceX later purchased adjacent land and, earlier this year, allegedly began using CAH’s land amid some kind of construction project. From the lawsuit (emphasis theirs):

The site was cleared of vegetation, and the soil was compacted with gravel or other substance to allow SpaceX and its contractors to run and park its vehicles all over the Property. Generators were brought in to run equipment and lights while work was being performed before and after daylight. An enormous mound of gravel was unloaded onto the Property; the gravel is being stored and used for the construction of buildings by SpaceX’s contractors along the road. Large pieces of construction equipment and numerous construction-related vehicles are utilized and stored on the Property continuously. And, of course, workers are present performing construction work and staging materials and vehicles for work to be performed on other tracts. In short, SpaceX has treated the Property as its own for at least six (6) months without regard for CAH’s property rights nor the safety of anyone entering what has become a worksite that is presumably governed by OSHA safety requirements.

SpaceX, according to the filing, “never asked for permission” to use the land and has “never reached out to CAH to explain or apologize for the damage.” The rocket company did, however, give “a 12-hour ultimatum to accept a lowball offer for less than half our land’s value,” according to a statement posted online. A spokesperson for CAH said the land in question is “about an acre” in size.

What CAH's Texas land looked like prior to SpaceX's alleged trespassing.
Christopher Markos / Cards Against Humanity

In response to the ultimatum, CAH filed a $15 million lawsuit against SpaceX for trespassing and damaging its property. The game company, which was originally funded via a Kickstarter campaign, says that if it’s successful in court it will share the proceeds with the 150,000 fans who helped purchase the land in 2017. It created a website where subscribers can sign up for a chance to get up to $100 of the potential $15 million payout should the lawsuit succeed. (A disclaimer notes that “Elon Musk has way more money and lawyers than Cards Against Humanity, and while CAH will try its hardest to get me $100, they will probably only be able to get me like $2 or most likely nothing.”)

SpaceX didn’t immediately respond to a request for comment. But CAH isn’t the only Texas landowner that's raised questions about the company’s tactics. SpaceX has been aggressively growing its footprint in Southern Texas in recent years. The expansion, which has resulted in many locals selling their land to SpaceX, has rankled some longtime residents, according to an investigation published by Reuters.

CAH says that Musk’s past behavior makes SpaceX’s actions “particularly offensive” to the company, which is known for taking stances on social issues.

“The 2017 holiday campaign that resulted in the purchase of the Property was based upon CAH undertaking efforts to fight against ‘injustice, lies, [and] racism,’” it states. “Thus, it is particularly offensive that these egregious acts against the Property have been committed by the company run by Elon Musk. As is widely known, Musk has been accused of tolerating racism and sexism at Tesla and of amplifying the antisemitic ‘Great Replacement Theory.’ Allowing Musk’s company to abuse the Property that CAH’s supporters contributed money to purchase for the sole purpose of stopping such behavior is totally contrary to both the reason for the contribution and the tenets on which CAH is based.”

This article originally appeared on Engadget at https://www.engadget.com/science/space/cards-against-humanity-is-suing-spacex-for-trespassing-and-filling-its-property-with-space-garbage-181828453.html?src=rss