Canada has ordered TikTok to shut down its operations in the country, citing unspecified “national security risks” posed by the company and its parent ByteDance. With the move, TikTok will be forced to “wind up” all business in the country, though the Canadian government stopped short of banning the app.
“The government is taking action to address the specific national security risks related to ByteDance Ltd.’s operations in Canada through the establishment of TikTok Technology Canada, Inc,” Canada’s Minister of Innovation, Science and Industry François-Philippe Champagne said in a statement. “The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners.”
Canada’s crackdown on TikTok follows a “multi-step national security review process” by its intelligence agencies, the government said in a statement. As the CBC points out, the country previously banned the app from official government devices. The order also comes several months after the United States passed a law that could ban the app stateside. US lawmakers have also cited national security concerns and the app’s ties to China. TikTok has mounted an extensive legal challenge to the law.
In a statement, a TikTok spokesperson said the company would challenge Canada’s order as well. “Shutting down TikTok’s Canadian offices and destroying hundreds of well-paying local jobs is not in anyone’s best interest, and today’s shutdown order will do just that,” the spokesperson said. “We will challenge this order in court. The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”
New details are continuing to surface about the hacking of US telecom companies by a China-linked group that targeted US officials and campaign staffers. Now, The Wall Street Journal reports that the hackers’ access was even greater than previously reported, and that the communications of “potentially thousands of Americans” may have been impacted.
Last week, The New York Times reported that FBI investigators suspected call logs and SMS messages had been accessed by the hacking group, known as “Salt Typhoon.” The group reportedly targeted the phones of diplomats and government officials, as well as people associated with both presidential campaigns.
Now, The WSJ is reporting that the hackers, who were “likely” working for a Chinese intelligence agency, spent “eight months or more” in US telecom infrastructure, and that they may have been able to scoop up the data of thousands of people who were in contact with the targeted individuals.
The Journal confirms earlier reports that the hackers “limited their targets to several dozen select, high-value political and national-security figures.” But the hackers, who reportedly exploited routers used by telecom firms, had “the ability to access the phone data of virtually any American who is a customer of a compromised carrier — a group that includes AT&T and Verizon.” Both AT&T and Verizon declined to comment on the report.
Meta is opening up its Llama AI models to government agencies and contractors working on national security, the company said in an update. Those partners include more than a dozen private sector companies that work with the US government, including Amazon Web Services, Oracle and Microsoft, as well as defense contractors like Palantir and Lockheed Martin.
Mark Zuckerberg hinted at the move last week during Meta’s earnings call, when he said the company was “working with the public sector to adopt Llama across the US government.” Now, Meta is offering more details about the extent of that work.
Oracle, for example, is “building on Llama to synthesize aircraft maintenance documents so technicians can more quickly and accurately diagnose problems, speeding up repair time and getting critical aircraft back in service.” Amazon Web Services and Microsoft, according to Meta, are “using Llama to support governments by hosting our models on their secure cloud solutions for sensitive data.”
Meta is also providing similar access to Llama to governments and contractors in the UK, Canada, Australia and New Zealand, Bloomberg reported. In a blog post, Meta’s President of Global Affairs, Nick Clegg, suggested the partnerships will help the US compete with China in the global arms race over artificial intelligence. “We believe it is in both America and the wider democratic world’s interest for American open source models to excel and succeed over models from China and elsewhere,” he wrote. “As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security and economic prosperity of America – and of its closest allies too.”
If you’re excited, or even just a little curious, about the future of augmented reality, Meta’s Orion prototype makes the most compelling case yet for the technology.
For Meta, Orion is about more than finally making AR glasses a reality. It’s also the company’s best shot at becoming less dependent on Apple and Google’s app stores, and the rules that come with them. If Orion succeeds, then maybe we won’t need smartphones for much at all. Glasses, Zuckerberg has speculated, might eventually become “the main way we do computing.”
At the moment, it’s still way too early to know if Zuckerberg’s bet will actually pay off. Orion is, for now, still a prototype. Meta hasn’t said when it might become widely available or how much it might cost. That's partly because the company, which has already poured tens of billions of dollars into AR and VR research, still needs to figure out how to make Orion significantly more affordable than the $10,000 it reportedly costs to make the current version. It also needs to refine Orion’s hardware and software. And, perhaps most importantly, the company will eventually need to persuade its vast user base that AI-infused, eye-tracking glasses offer a better way to navigate the world.
Still, Meta has been eager to show off Orion since its reveal at Connect. And, after recently getting a chance to try out Orion for myself, it’s easy to see why: Orion is the most impressive AR hardware I’ve seen.
Meta’s first AR glasses
Meta has clearly gone to great lengths to make its AR glasses look, well, normal. While Snap has been mocked for its oversized Spectacles, Orion’s shape and size are closer to those of a traditional pair of frames.
Even so, they’re still noticeably wide and chunky. The thick black frames, which house an array of cameras, sensors and custom silicon, may work on some face shapes, but I don’t think they are particularly flattering. And while they look less cartoonish than Snap’s AR Spectacles, I’m pretty sure I’d still get some funny looks if I walked around with them in public. At 98 grams, the glasses were noticeably bulkier than my typical prescription lenses, but never felt heavy.
In addition to the actual glasses, Orion relies on two other pieces of kit: a 182-gram wireless compute puck, which needs to stay near the glasses, and an electromyography (EMG) wristband that allows you to control the AR interface with a series of hand gestures. The puck I saw was equipped with its own cameras and sensors, but Meta told me it has since simplified the remote control-shaped device so that it’s mainly used for connectivity and processing.
When I first saw the three-piece Orion setup at Connect, my first thought was that it was an interesting compromise in order to keep the glasses smaller. But after trying it all together, it really doesn’t feel like a compromise at all.
You control Orion’s interface through a combination of eye tracking and gestures. After a quick calibration the first time you put the glasses on, you can navigate the AR apps and menus by glancing around the interface and tapping your thumb and index finger together. Meta has been experimenting with wrist-based neural interfaces for years, and Orion’s EMG wristband is the result of that work. The band, which feels like little more than a fabric watch band, uses sensors to detect the electrical signals that occur with even subtle movements of your wrist and fingers. Meta then uses machine learning to decode those signals and send them to the glasses.
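Meta hasn’t published Orion’s decoding pipeline, but the general recipe for surface-EMG gesture recognition is well established: slice the multi-channel signal into short windows, extract simple time-domain features and feed them to a classifier. Here’s a minimal sketch under those assumptions; the channel count, window size and synthetic data are illustrative stand-ins, not Orion’s actual parameters.

```python
# Generic surface-EMG gesture recognition sketch: window the multi-channel
# signal, extract classic time-domain features, classify. All numbers and
# data here are illustrative assumptions, not Meta's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_CHANNELS, WINDOW = 8, 200  # assumed electrode count and samples per window

def featurize(w: np.ndarray) -> np.ndarray:
    """Per-channel mean absolute value, waveform length and zero crossings."""
    mav = np.abs(w).mean(axis=0)
    wl = np.abs(np.diff(w, axis=0)).sum(axis=0)
    zc = (np.diff(np.sign(w), axis=0) != 0).sum(axis=0)
    return np.concatenate([mav, wl, zc])

def synth_window(label: int) -> np.ndarray:
    """Stand-in signals: a 'pinch' (label 1) raises activity on some channels."""
    w = rng.normal(0, 1.0, (WINDOW, N_CHANNELS))
    if label:
        w[:, :3] += rng.normal(0, 2.0, (WINDOW, 3))
    return w

labels = np.array([0, 1] * 200)
X = np.array([featurize(synth_window(y)) for y in labels])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("pinch detected:", bool(clf.predict([featurize(synth_window(1))])[0]))
```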
That may sound complicated, but I was surprised by how intuitive the navigation felt. The combination of quick gestures and eye tracking felt much more precise than hand tracking controls I’ve used in VR. And while Orion also has hand-tracking abilities, it feels much more natural to quickly tap your fingers together than to extend your hands out in front of your face.
What it’s like to use Orion
Meta walked me through a number of demos meant to show off Orion’s capabilities. I asked Meta AI to generate an image, and to come up with recipes based on a handful of ingredients on a shelf in front of me. The latter is a trick I’ve also tried with the Ray-Ban Meta Smart Glasses, except with Orion, Meta AI was also able to project the recipe steps onto the wall in front of me.
I also answered a couple of video calls, including one from a surprisingly lifelike Codec Avatar. I watched a YouTube video, scrolled Instagram Reels and dictated a response to an incoming message. If you’ve used a mixed reality headset, much of this will sound familiar; little of it was that different from what you can already do in VR.
The magic of AR, though, is that everything you see is overlaid onto the world around you and your surroundings are always fully visible. I particularly appreciated this when I got to the gaming portion of the walkthrough. I played a few rounds of a Meta-created game called Stargazer, where players control a retro-looking spacecraft by moving their head to avoid incoming obstacles while shooting enemies with finger tap gestures. Throughout that game, and a subsequent round of AR Pong, I was able to easily keep up a conversation with the people around me while I played. As someone who easily gets motion sick from VR gaming, I appreciated that I never felt disoriented or less aware of my surroundings.
Orion’s displays rely on silicon carbide lenses, micro-LED projectors and waveguides. The actual lenses are clear, though they can dim depending on your environment. One of the most impressive aspects is the 70-degree field of view. It was noticeably wider and more immersive than what I experienced with Snap’s AR Spectacles, which have a 46-degree field of view. At one point, I had three windows open in one multitasking view: Instagram Reels, a video call and a messaging inbox. And while I was definitely aware of the outer limits of the display, I could easily see all three windows without physically moving my head or adjusting my position. It’s still not the all-encompassing AR of sci-fi flicks, but it was wide enough I never struggled to keep the AR content in view.
What was slightly disappointing, though, was the resolution of Orion’s visuals. At 13 pixels per degree, the colors all seemed somewhat muted and projected text was noticeably fuzzy. None of it was difficult to make out, but it was much less vivid than what I saw on Snap’s AR Spectacles, which have a 37 pixels per degree resolution.
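Those two spec sheets trade off against each other: pixels per degree times field of view roughly sets the pixel budget across the display, which is why Orion can be both wider and fuzzier than Snap’s glasses. A quick back-of-the-envelope comparison, treating each quoted field of view as horizontal for simplicity:

```python
# Rough pixel budget implied by the quoted specs:
# pixels across ≈ field of view (degrees) × pixels per degree (ppd).
for name, fov_deg, ppd in [("Orion", 70, 13), ("Snap Spectacles", 46, 37)]:
    print(f"{name}: ~{fov_deg * ppd} px across at {ppd} ppd")
# Orion: ~910 px across at 13 ppd
# Snap Spectacles: ~1702 px across at 37 ppd
```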
Meta’s VP of Wearable Devices, Ming Hua, told me that one of the company’s top priorities is to increase the brightness and resolution of Orion’s displays. She said that there’s already a version of the prototype with twice the pixel density, so there’s good reason to believe this will improve over time. She’s also optimistic that Meta will eventually be able to bring the cost of its AR tech down to something “similar to a high end phone.”
What does it mean?
Leaving my demo at Meta’s headquarters, I was reminded of the first time I tried out a prototype of the wireless VR headset that would eventually become known as Quest, back in 2016. Even to an infrequent VR user, it was immediately obvious that the wireless, room-tracking headset, then called Santa Cruz, was the future of the company’s VR business. Now, it’s almost hard to believe there was a time when Meta’s headsets weren’t fully untethered.
Orion has the potential to be much bigger. With it, Meta isn’t just trying to create a more convenient form factor for mixed reality hobbyists and gamers. It’s offering a glimpse into how it views the future, and what our lives might look like when we’re no longer tethered to our phones.
For now, Orion is still just that: a glimpse. It’s far more complex than anything the company has attempted with VR. Meta still has a lot of work to do before that AR-enabled future can be a reality. But the prototype shows that much of that vision is closer than we think.
Last month at Meta Connect, Mark Zuckerberg said that Meta AI was “on track” to become the most-used generative AI assistant in the world. The company has now hit a significant milestone toward that goal: Meta AI has passed 500 million users, Zuckerberg revealed during the company’s latest earnings call.
The half-billion-user mark comes barely a year after the social network first launched its AI assistant last fall. Zuckerberg said the company still expects Meta AI to become the “most-used” assistant by the end of 2024, though he’s never specified how that metric is being measured.
Meta’s assistant isn’t the only AI tool that’s boosting the company’s business. Zuckerberg said that AI improvements in its feed and video recommendations have led to an 8 percent increase in time spent on Facebook and a 5 percent increase for Instagram this year. Advertisers are also taking advantage of the company’s AI tools, he said, with more than 15 million ads created with generative AI in the last month alone. “We believe that there's a lot more upside here,” Zuckerberg said.
Outside of AI, Meta’s Threads app also continues to surge. The service now has “almost 275 million” monthly users, according to Zuckerberg. “It's been growing more than a million sign ups per day,” Zuckerberg said, adding that “engagement is growing too.”
Late last week, the FBI and Cybersecurity and Infrastructure Security Agency (CISA) confirmed they were investigating “the unauthorized access to commercial telecommunications infrastructure by actors affiliated with the People’s Republic of China.” At the same time, The New York Times reported that phones used by Donald Trump, JD Vance and Kamala Harris’ campaign staff were among the targets, though it was unclear what data the group may have been able to access.
Now, The New York Times has new details about the extent of the hack, which is reportedly linked to a Chinese group known as “Salt Typhoon.” According to The Times, aides to President Joe Biden, as well as Trump’s family members, were also targeted, in addition to diplomats and other government officials. Even more concerning, though, is what the hackers may have been able to access. From the report:
F.B.I. investigators think the hackers may have been able to access unencrypted SMS text messages on the targeted devices, as well as call logs, according to people familiar with the investigation. They said there was also evidence indicating that audio communications were captured, though it was not immediately clear whether that meant voice mail or phone call conversations.
CISA didn’t immediately respond to a request for comment. The agency said last week in a joint statement with the FBI that the investigation was “ongoing” and that the affected companies and other potential victims had been notified. At least 10 companies, including Verizon and AT&T, were impacted, according to The Washington Post. A spokesperson for AT&T declined to comment. Verizon didn’t immediately respond to questions, but previously told The Times the company was “aware that a highly sophisticated nation-state actor has reportedly targeted several U.S. telecommunications providers to gather intelligence.”
X is trying to speed up its crowdsourced fact-checking system, Community Notes. In an update, the company says it has “re-architected” the scoring system that powers the feature so that the user-generated notes can now appear less than 20 minutes after a post is published on its platform.
Community Notes, introduced in 2022, relies on other X users to fact-check or add missing context to posts on the platform. Contributors are required to cite their sources, and other users then rate the “helpfulness” of the note. Creators are also penalized for posts that get “community noted” in an effort to discourage them from trying to monetize misinformation. Now, that whole process should be able to move a lot quicker.
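X has open-sourced the scoring algorithm behind Community Notes; at its core is a matrix factorization over the rater-note matrix that rewards “bridging”: a note scores well only if raters who usually disagree both find it helpful. The sketch below is a toy version of that idea for intuition only; the production system adds regularization schedules, thresholds and many more stages.

```python
# Toy bridging-based matrix factorization in the spirit of X's published
# Community Notes scoring model: rating ≈ mu + rater intercept +
# note intercept + rater_factor * note_factor. A note's score is its
# intercept: helpfulness left over once one "viewpoint" factor per
# rater and note explains the partisan part of the ratings.
import numpy as np

def score_notes(R, n_iters=3000, lr=0.05, lam=0.03, seed=1):
    """R: raters x notes, 1 = helpful, 0 = not helpful, NaN = unrated."""
    rng = np.random.default_rng(seed)
    n_u, n_n = R.shape
    obs = ~np.isnan(R)
    mu, bu, bn = 0.0, np.zeros(n_u), np.zeros(n_n)
    fu, fn = rng.normal(0, 0.1, n_u), rng.normal(0, 0.1, n_n)
    for _ in range(n_iters):
        pred = mu + bu[:, None] + bn[None, :] + np.outer(fu, fn)
        err = np.where(obs, R - pred, 0.0)  # ignore unrated cells
        mu += lr * err.sum() / obs.sum()
        bu += lr * (err.mean(axis=1) - lam * bu)
        bn += lr * (err.mean(axis=0) - lam * bn)
        fu += lr * ((err * fn[None, :]).mean(axis=1) - lam * fu)
        fn += lr * ((err * fu[:, None]).mean(axis=0) - lam * fn)
    return bn  # higher intercept = helpful across viewpoints

# Two camps of raters; note 0 is rated helpful by both camps (bridging),
# notes 1 and 2 by one camp each. Note 0's intercept should come out on top.
R = np.full((8, 3), np.nan)
R[:, 0] = 1.0
R[0:4, 1], R[4:8, 1] = 1.0, 0.0
R[0:4, 2], R[4:8, 2] = 0.0, 1.0
print(score_notes(R))
```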
According to X, these new “lightning notes” can “go live in as little as 14m33s after being written, and 18m20s after the post itself was written.” The change could help address a long-running criticism of the crowdsourced fact-checking system: that it moves far too slowly compared with the speed of viral misinformation on the platform. For example, an analysis last year by Bloomberg found that it could take several hours for a Community Note to appear on a viral tweet and that, often, only a fraction of users see the fact check compared with the original post.
The new speedier system could change that, though it’s unclear how often the faster “lightning” version of the process will actually play out. Not all posts with incorrect information, misstated facts or AI-generated imagery are immediately flagged for review, if they are at all. X says it has more than 800,000 contributors to the program globally, but some posts will likely still take much longer to wind their way through the Community Notes process.
Two weeks before the US presidential election, the Oversight Board says it has “serious concerns” about Meta’s content moderation systems in “electoral contexts,” and that the company risks the “excessive removal of political speech” when it over-enforces its rules. The admonishment came as the board weighed in on a case involving a satirical image of Vice President Kamala Harris and her running mate, Minnesota Governor Tim Walz.
Meta originally removed the post, shared on Facebook in August, which showed an edited version of the movie poster for Dumb and Dumber. The original 1994 movie poster shows the two main characters grabbing each other’s nipples through their shirts. In the altered version, the actors’ faces were replaced by Harris and Walz.
According to the Oversight Board, Meta cited its bullying and harassment rules, which include a provision barring “derogatory sexualized photoshop or drawings.” The social network later restored the post after it drew attention from the Oversight Board, and the company acknowledged the satirical image didn’t break its rules because it didn’t depict sexual activity.
Despite Meta’s reversal, the board says the case suggests larger issues in how Meta handles posts dealing with election-related content. “This post is nothing more than a commonplace satirical image of prominent politicians and is instantly recognizable as such,” the board wrote. “Nonetheless, the company’s failure to recognize the nature of this post and treat it accordingly raises serious concerns about the systems and resources Meta has in place to effectively make content determinations in such electoral contexts.”
In response to the Oversight Board’s take on the situation, a Meta spokesperson gave the following brief statement: “We mistakenly removed this post but restored it after the issue was brought to our attention.”
It’s unusually direct criticism from the Oversight Board, which released its analysis of the case in a summary decision that comes without the group’s typical laundry list of recommendations for the social media company. The board has previously pushed Meta to clarify its rules around satirical content. The latest case highlights another issue that many of the company’s users have also complained about: over-enforcement of its rules.
“In this case, however, the Board highlights the overenforcement of Meta’s Bullying and Harassment policy with respect to satire and political speech in the form of a non-sexualized derogatory depiction of political figures,” the board wrote. “It also points to the dangers that overenforcing the Bullying and Harassment policy can have, especially in the context of an election, as it may lead to the excessive removal of political speech and undermine the ability to criticize government officials and political candidates, including in a sarcastic manner.”
Update, October 23, 2024, 1:00PM ET: This story has been updated to include a statement from Meta.
Meta is bringing facial recognition tech back to its apps more than three years after it shut down Facebook’s “face recognition” system amid a broader backlash against the technology. Now, the social network will begin to deploy facial recognition tools on Facebook and Instagram to fight scams and help users who have lost access to their accounts, the company said in an update.
The first test will use facial recognition to detect scam ads that use the faces of celebrities and other public figures. “If our systems suspect that an ad may be a scam that contains the image of a public figure at risk for celeb-bait, we will try to use facial recognition technology to compare faces in the ad against the public figure’s Facebook and Instagram profile pictures,” Meta explained in a blog post. “If we confirm a match and that the ad is a scam, we’ll block it.”
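Meta hasn’t detailed the underlying model, but the comparison it describes maps onto a standard face-verification pattern: embed each detected face as a vector, then measure similarity against embeddings of the account’s profile photos. A minimal sketch of that general approach, where the embedding inputs and the threshold are hypothetical stand-ins rather than Meta’s actual system:

```python
# Generic face-verification sketch: cosine similarity between a face
# embedding cropped from the ad and embeddings of the public figure's
# profile photos. The embedding model and threshold are assumptions.
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed; real systems tune this on labeled pairs

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ad_face_matches(ad_vec: np.ndarray, profile_vecs: list[np.ndarray]) -> bool:
    """True if the face from the ad is close enough, in embedding space,
    to any of the public figure's profile pictures."""
    return any(cosine(ad_vec, p) >= MATCH_THRESHOLD for p in profile_vecs)
```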
The company said that it’s already begun to roll the feature out to a small group of celebs and public figures and that it will begin automatically enrolling more people into the feature “in the coming weeks,” though individuals have the ability to opt out of the protection. While Meta already has systems in place to review ads for potential scams, the company isn’t always able to catch “celeb-bait” ads as many legitimate companies use celebrities and public figures to market their products, Monika Bickert, VP of content policy at Meta, said in a briefing. “This is a real time process,” she said of the new facial recognition feature. “It's faster and it's more accurate than manual review.”
Separately, Meta is also testing facial recognition tools to address another long-running issue on Facebook and Instagram: account recovery. The company is experimenting with a new “video selfie” option for users who have been locked out of their accounts, letting them upload a clip of themselves that Meta then matches against their profile photos. The company will also use it in cases of suspected account compromise, to prevent hackers from accessing accounts with stolen credentials.
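Mechanically, the video selfie check could reuse the same embedding comparison, with the extra wrinkle that a video yields many frames of varying quality. One plausible, and entirely assumed, aggregation: score each sampled frame against the stored profile embedding and require most frames to agree.

```python
# Sketch of aggregating per-frame face matches from a video selfie.
# Frame sampling, face detection and the embedding model are assumed,
# not Meta's actual pipeline.
import numpy as np

def selfie_matches(frame_vecs, profile_vec, threshold=0.8, min_agree=0.6):
    """frame_vecs: embeddings of the face detected in sampled video frames."""
    if not frame_vecs:
        return False
    unit = lambda v: v / np.linalg.norm(v)
    sims = [float(unit(f) @ unit(profile_vec)) for f in frame_vecs]
    agree = sum(s >= threshold for s in sims) / len(sims)
    return agree >= min_agree
```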
The tool won’t be able to help everyone who loses access to a Facebook or Instagram account. Many business pages, for example, don’t include a profile photo of a person, so those users would need to use Meta’s existing account recovery options. But Bickert says the new process will make it much more difficult for bad actors to game the company’s support tools. “It will be a much higher level of difficulty for them in trying to bypass our systems,” Bickert said.
With both new features, Meta says it will “immediately delete” facial data that’s used for comparisons and that the scans won’t be used for any other purpose. The company is also making the features optional, though celebrities will need to opt out of the scam ad protection rather than opt in.
That could draw criticism from privacy advocates, particularly given Meta’s messy history with facial recognition. The company previously used the technology to power automatic photo-tagging, which allowed it to automatically recognize the faces of users in photos and videos. The feature was discontinued in 2021, with Meta deleting the facial data of more than 1 billion people, citing “growing societal concerns.” The company has also faced lawsuits, notably in Texas and Illinois, over its use of the tech. Meta paid $650 million to settle a lawsuit under Illinois’ biometric privacy law and $1.4 billion to resolve a similar suit in Texas.
It’s notable, then, that the new tools won’t be available in either Illinois or Texas to start. They also won’t roll out to users in the United Kingdom or European Union, as the company is “continuing to have conversations there with regulators,” according to Bickert. But Meta is “hoping to scale this technology globally sometime in 2025,” according to a company spokesperson.