LG reveals its laundry-folding robot at CES 2026

LG has unveiled its humanoid robot that can handle household chores. After teasing the CLOiD last week, the company has offered its first look at the AI-powered robot it claims can fold laundry, unload the dishwasher, serve food and help out with other tasks. 

The CLOiD has a surprisingly cute "head unit" that's equipped with a display, speakers, cameras and other sensors. "Collectively, these elements allow the robot to communicate with humans through spoken language and 'facial expressions,' learn the living environments and lifestyle patterns of its users and control connected home appliances based on its learnings," LG says in its press release.

The robot also has two robotic arms — complete with shoulder, elbow and wrist joints — and hands with fingers that can move independently. The company didn't share images of the CLOiD's base, but it uses wheels and technology similar to what the appliance maker has used for robot vacuums. The company notes that its arms are able to pick up objects that are "knee level" and higher, so it won't be able to pick up things from the floor.

The CLOiD robot unloading a dishwasher.

LG says it will show off the robot completing common chores in a variety of scenarios, like starting laundry cycles and folding freshly washed clothes. The company also shared images of it taking a croissant out of the oven, unloading plates from a dishwasher and serving a plate of food. Another image shows it standing alongside a woman in the middle of a home workout, though it's not clear how the CLOiD is aiding with that task.

We'll get a closer look at the CLOiD and its laundry-folding abilities once the CES show floor opens later this week, so we should get a better idea of just how capable it is. For now, LG seems to intend this as more of a concept than a product it actually plans to sell. The company says that it will "continue developing home robots with practical functions and forms for housework" and also bring its robotics technology to more of its home appliances, like refrigerators with doors that can automatically open.

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/lg-reveals-its-laundry-folding-robot-at-ces-2026-215121021.html?src=rss

Instagram chief: AI is so ubiquitous ‘it will be more practical to fingerprint real media than fake media’

It's no secret that AI-generated content took over our social media feeds in 2025. Now, Instagram's top exec Adam Mosseri has made it clear that he expects AI content to overtake non-AI imagery, a shift with significant implications for the platform's creators and photographers.

Mosseri shared the thoughts in a lengthy post about the broader trends he expects to shape Instagram in 2026. And he offered a notably candid assessment on how AI is upending the platform. "Everything that made creators matter—the ability to be real, to connect, to have a voice that couldn’t be faked—is now suddenly accessible to anyone with the right tools," he wrote. "The feeds are starting to fill up with synthetic everything."

But Mosseri doesn't seem particularly concerned by this shift. He says that there is "a lot of amazing AI content" and that the platform may need to rethink its approach to labeling such imagery by "fingerprinting real media, not just chasing fake."

From Mosseri (emphasis his):

Social media platforms are going to come under increasing pressure to identify and label AI-generated content as such. All the major platforms will do good work identifying AI content, but they will get worse at it over time as AI gets better at imitating reality. There is already a growing number of people who believe, as I do, that it will be more practical to fingerprint real media than fake media. Camera manufacturers could cryptographically sign images at capture, creating a chain of custody.
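Mosseri's "fingerprint real media" idea amounts to signing content at the moment of capture so that any later edit is detectable. Below is a minimal sketch of that chain-of-custody check; the `DEVICE_KEY`, `sign_at_capture` and `verify` names are hypothetical, and stdlib HMAC stands in for the per-device private keys and public-key signatures a real scheme (such as C2PA's Content Credentials) would use:

```python
import hashlib
import hmac

# Hypothetical per-device secret. A real provenance scheme would use a
# private key and public-key signatures rather than a shared-secret HMAC.
DEVICE_KEY = b"example-device-key"

def sign_at_capture(image_bytes: bytes) -> str:
    """Produce a provenance tag for the raw image bytes at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """Check that the bytes still match the tag minted at capture."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\x89PNG...raw image bytes..."
tag = sign_at_capture(original)

assert verify(original, tag)             # untouched image verifies
assert not verify(original + b"x", tag)  # any edit breaks the chain
```

The hard part, which this sketch glosses over, is key management at scale: the tag must survive resizing and re-encoding by platforms, and the verifying party needs a trusted registry of manufacturer keys.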

On some level, it's easy to understand how this seems like a more practical approach for Meta. As we've previously reported, technologies that are meant to identify AI content, like watermarks, have proved unreliable at best. They are easy to remove and even easier to ignore altogether. Meta's own labels are far from clear and the company, which has spent tens of billions of dollars on AI this year alone, has admitted it can't reliably detect AI-generated or manipulated content on its platform.

That Mosseri is so readily admitting defeat on this issue, though, is telling. AI slop has won. And when it comes to helping Instagram's 3 billion users understand what is real, that should largely be someone else's problem, not Meta's. Camera makers — presumably phone makers and actual camera manufacturers — should come up with their own system, one that sure sounds a lot like watermarking, "to verify authenticity at capture." Mosseri offers few details about how this would work or be implemented at the scale required to make it feasible.

Mosseri also doesn't really address the fact that this is likely to alienate the many photographers and other Instagram creators who have already grown frustrated with the app. The exec regularly fields complaints from the group, who want to know why Instagram's algorithm doesn't consistently surface their posts to their own followers.

But Mosseri suggests those complaints stem from an outdated vision of what Instagram even is. The feed of "polished" square images, he says, "is dead." Camera companies, in his estimation, are "betting on the wrong aesthetic" by trying to "make everyone look like a professional photographer from the past." Instead, he says that more "raw" and "unflattering" images will be how creators can prove they are real, and not AI. In a world where Instagram has more AI content than not, creators should prioritize images and videos that intentionally make them look bad.


This article originally appeared on Engadget at https://www.engadget.com/social-media/instagram-chief-ai-is-so-ubiquitous-it-will-be-more-practical-to-fingerprint-real-media-than-fake-media-202620080.html?src=rss

Trump’s TikTok deal is another step closer to finally actually happening

Remember back in September when President Donald Trump signed an executive order that seemingly finalized some of the terms of a deal to spin off TikTok's US business? Three months later, that same deal is apparently one step closer to being official.

According to Bloomberg, TikTok CEO Shou Chew told employees that TikTok and ByteDance had signed off on the agreement for control of TikTok's US business. It sounds like the terms of the deal are roughly the same as what Trump announced earlier this year. A group of US investors, including Oracle, Silver Lake and MGX, will control a majority of the new entity while ByteDance will keep a smaller stake in the venture.

According to Chew's memo, the deal is expected to close January 22, 2026. “Upon the closing, the US joint venture, built on the foundation of the current TikTok US Data Security (USDS) organization, will operate as an independent entity with authority over US data protection, algorithm security, content moderation and software assurance,” he wrote, according to Bloomberg. TikTok didn’t immediately respond to a request for comment.

Notably, it's still not clear where Chinese officials stand on the deal. Trump said back in September that China was "fully on board," but subsequent meetings between the two sides have so far produced vague statements. In October, China's Commerce Ministry said it would "work with the U.S. to properly resolve issues related to TikTok." 

If a deal is indeed finalized by next month, it will come almost exactly a year after Trump's first executive order delaying a law that required a sale or ban of the app from taking effect. He has signed several other extensions since.

This article originally appeared on Engadget at https://www.engadget.com/social-media/trumps-tiktok-deal-is-another-step-closer-to-finally-actually-happening-001813404.html?src=rss

A Facebook test makes link-sharing a paid feature for creators

Creators and publishers have long worried about Meta's ability to throttle links to outside content. Now, the company is testing out a new scheme that effectively puts link-sharing behind a paywall for creators on Facebook.

Under the test, a Meta Verified subscription will determine how many links a creator can share on their profile per month. According to a screenshot shared by social media consultant Matt Navarra, creators in the test recently received a notification from Meta informing them that "certain Facebook profiles without Meta Verified, including yours, will be limited to sharing links in 2 organic posts per month."

Meta is making link sharing pay to play with a new test.

A spokesperson for Meta confirmed the test to Engadget. The test is currently affecting an unspecified number of creators and pages using "professional mode" on Facebook. Publishers aren't affected for now. "This is a limited test to understand whether the ability to publish an increased volume of posts with links adds additional value for Meta Verified subscribers," the spokesperson said.

While Meta seems to be trying to downplay the significance of the test, it's a notable shift for the company. Many creators and businesses rely on Facebook and reducing their ability to send traffic to outside websites could be a significant hit. Many creators are already frustrated that the company puts its better customer service features behind the Meta Verified subscription, which starts at $14.99/month. Making link-sharing a premium feature as well would be even more unpopular.

Have a tip for Karissa? You can reach her by email, on X, Bluesky, Threads, or send a message to @karissabe.51 to chat confidentially on Signal.

This article originally appeared on Engadget at https://www.engadget.com/social-media/a-facebook-test-makes-link-sharing-a-paid-feature-for-creators-224632957.html?src=rss

Meta is ‘pausing’ third-party VR headsets from ASUS and Lenovo

Last year, Meta announced that it was opening up its VR operating system to other headset makers, starting with ASUS and Lenovo. Now, it seems that Meta is pumping the brakes on the effort and those third-party Horizon OS headsets might never actually launch.

The company has "paused" the program, Road to VR reported. Meta confirmed the move in a statement to Engadget, saying that it's instead focusing on "building the world-class first-party hardware and software needed to advance the VR market." ASUS and Lenovo didn't immediately respond to a request for comment. Both companies have said little about the headsets since they were first announced in 2024. ASUS' was going to be a "performance gaming" headset under its Republic of Gamers (ROG) brand, while Lenovo's was intended to be a mixed reality device focused on "productivity, learning and entertainment" experiences 

The shift isn't entirely surprising. Meta Connect was very light on virtual reality news this year as smart glasses have become a central focus for the company. Earlier this month, Bloomberg reported that Meta was planning significant cuts to the teams working on virtual reality and Horizon Worlds. The company said at the time it was "shifting some of our investment from Metaverse toward AI glasses and wearables given the momentum there."

Still, Meta is seemingly leaving the door open for third-party VR headsets in the future. "We’re committed to this for the long term and will revisit opportunities for 3rd-party device partnerships as the category evolves," the company said.


This article originally appeared on Engadget at https://www.engadget.com/ar-vr/meta-is-pausing-third-party-vr-headsets-from-asus-and-lenovo-193622900.html?src=rss

Meta is rolling out Conversation Focus and AI-powered Spotify features to its smart glasses

Back in September during Meta Connect, the company previewed a new ability for its smart glasses lineup called Conversation Focus. The feature, which is able to amplify the voices of people around you, is now starting to roll out in the company's latest batch of software updates.

When enabled, the feature is meant to make it easier to hear the people you're speaking with in a crowded or otherwise noisy environment. "You’ll hear the amplified voice sound slightly brighter, which will help you distinguish the conversation from ambient background noise,” Meta explains. It can be enabled either via voice commands ("hey Meta, start Conversation Focus") or by adding it as a dedicated "tap-and-hold" shortcut.

Meta is also adding a new multimodal AI feature for Spotify. With the update, users can ask their glasses to play music on Spotify that corresponds with what they're looking at by saying “hey Meta, play a song to match this view.” Spotify will then start a playlist "based on your unique taste, customized for that specific moment." For example, looking at holiday decor might trigger a similarly themed playlist, though it's not clear how Meta and Spotify may translate more abstract concepts into themed playlists.

Both updates are starting to roll out now to Meta Ray-Ban glasses (both Gen 1 and Gen 2 models), as well as the Oakley Meta HSTN frames. The update will arrive first to those enrolled in Meta's early access program, and will be available "gradually" to everyone else.

Meta's newest smart glasses, the Oakley Meta Vanguard shades, are also getting some new features in the latest software update. Meta is adding the option to trigger specific commands with a single word, rather than having to say "hey Meta." For example, saying "photo" will be enough to snap a picture and saying "video" will start a new recording. The company says the optional feature is meant to help athletes "save some breath" while on a run or bike ride.


This article originally appeared on Engadget at https://www.engadget.com/wearables/meta-is-rolling-out-conversation-focus-and-ai-powered-spotify-features-to-its-smart-glasses-192133928.html?src=rss

Judge blocks Louisiana’s social media age verification law

A Louisiana law that would have required social media platforms to verify the ages of their users has been blocked by a judge. The law, known as the Secure Online Child Interaction and Age Limitation Act, was passed in 2023 and required Meta, Reddit, Snap, YouTube, Discord and others to implement age verification and parental control features.

The ruling came just days before the law, which technically took effect over the summer, would have started to be enforced. In his ruling, Judge John W. deGravelles wrote that the law's "age-verification and parental-consent requirements are both over- and under-inclusive," and that its definition of "social media platform" was "nebulous."

The ruling was a victory for NetChoice, a lobbying group that represents the tech industry and has challenged the growing number of age verification laws around the world. The group had argued that the law was unconstitutional and posed a safety and security risk.

In a statement following the ruling, the group pointed to the "massive privacy risk" posed by the Louisiana law and others like it. "Louisiana’s law would have done more than chill speech," Paul Taske, the co-director of NetChoice’s Litigation Center said. "It would have created a massive privacy risk for Louisianans like those playing out in real time in countries without a First Amendment, like the UK."

In a statement, Louisiana Attorney General Liz Murrill said she would appeal the ruling. “The assault on children by online predators is an all-hands-on-deck problem,” Murrill said. “It’s unfortunate that the court chose to protect huge corporations that facilitate child exploitation over the legislative policy to require simple age verification mechanisms.”

Update, December 16, 11:50AM PT: This story has been updated to add a statement from the Louisiana Attorney General’s office.

This article originally appeared on Engadget at https://www.engadget.com/social-media/judge-blocks-louisianas-social-media-age-verification-law-001212758.html?src=rss

Disney+ is now available to stream on Meta’s Quest headsets

Meta revealed that Disney+ was coming to its Quest headsets earlier this year during its Connect event. Now, the streaming app and its vast catalog are finally available to Meta's VR users in the United States.

Meta recently overhauled the Quest's entertainment experience with a new Horizon TV hub that brings its streaming features into one place. Horizon TV also added support for Dolby Vision and Dolby Atmos sound, both of which Disney+ subscribers can now take advantage of. According to Meta, there are a "select" number of titles available to stream in Dolby Vision 4K HDR, and Disney+ Premium subscribers can stream with Dolby Atmos sound. The company also says there are more than 100 titles in Disney's catalog that support 4K UHD and HDR and some Marvel and Pixar titles that support IMAX's expanded aspect ratio.

The app is available now on the latest version of Horizon OS. Though Disney+ is for now limited to US-based Quest users, Meta says that international availability is "coming soon." 


This article originally appeared on Engadget at https://www.engadget.com/ar-vr/disney-is-now-available-to-stream-on-metas-quest-headsets-203622392.html?src=rss

Meta is reportedly working on a new AI model called ‘Avocado’ and it might not be open source

Mark Zuckerberg has for months publicly hinted that he is backing away from open-source AI models. Now, Meta's latest AI pivot is starting to come into focus. The company is reportedly working on a new model, known inside of Meta as "Avocado," which could mark a major shift away from its previous open-source approach to AI development. 

Both CNBC and Bloomberg have reported on Meta's plans surrounding "Avocado," with both outlets saying the model "could" be proprietary rather than open-source. Avocado, which is due out sometime in 2026, is being worked on inside of "TBD," a smaller group within Meta's AI Superintelligence Labs that's headed up by Chief AI Officer Alexandr Wang, who apparently favors closed models.

It's not clear what Avocado could mean for Llama. Earlier this year, Zuckerberg said he expected Meta would "continue to be a leader" in open source but that it wouldn't "open source everything that we do." He's also cited safety concerns as they relate to superintelligence. As both CNBC and Bloomberg note, Meta's shift has also been driven by issues surrounding the release of Llama 4. The Llama 4 "Behemoth" model has been delayed for months; The New York Times reported earlier this year that Wang and other execs had "discussed abandoning" it altogether. And developers have reportedly been unimpressed with the Llama 4 models that are available. 

There have been other shakeups within the ranks of Meta's AI groups as Zuckerberg has spent billions of dollars building a team dedicated to superintelligence. The company laid off several hundred workers from its Fundamental Artificial Intelligence Research (FAIR) unit. And Meta veteran and Chief AI Scientist Yann LeCun, who has been a proponent of open-source AI and a skeptic of LLMs, recently announced he was leaving the company.

That Meta may now be pursuing a closed AI model is a significant shift for Zuckerberg, who just last year said "fuck that" about closed platforms and penned a lengthy memo titled "Open Source AI is the Path Forward." But the notoriously competitive CEO is also apparently intensely worried about falling behind OpenAI, Google and other rivals. Meta has said it expects to spend $600 billion over the next few years to fund its AI ambitions.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-is-reportedly-working-on-a-new-ai-model-called-avocado-and-it-might-not-be-open-source-215426778.html?src=rss

Reddit is starting to verify public figures

Like it or not, the checkmark has become an almost universal symbol on most social platforms, even though its exact meaning can vary significantly between services. Now Reddit, which historically hasn't cared much about its users' identities, is joining the club and starting to test verification for public figures on its platform.

The company is beginning "a limited alpha test" of the feature with a small "curated" group of accounts that includes journalists from major media outlets like NBC News and the Boston Globe. Businesses that are already using an "official" badge, which Reddit started testing in 2023, will also now have a grey "verified" checkmark instead of the "official" label. 

Verification has long been a thorny issue for many platforms. For users, it's at times been a source of confusion, especially on sites where verified badges only require a paid subscription. Reddit's approach, at least for now, is closer to how Twitter handled verification prior to Elon Musk's takeover of the company.

The company has handpicked the initial group that will receive checkmarks indicating they have verified their identity, and the rollout appears geared toward high-visibility accounts. "This feature is designed to help redditors understand who they're engaging with in moments when verification matters, whether it’s an expert or celebrity hosting an AMA, a journalist reporting news, or a brand sharing information," Reddit explains in a blog post. "Our approach to verification is voluntary, opt-in, and explicitly not about status. It’s designed to add clarity for redditors and ease the burden on moderators who often verify users manually."

For now, Reddit users — even notable ones — won't be able to apply for verification. But the company notes that its intention isn't to limit checkmarks to famous people only. A Reddit spokesperson tells Engadget that "our goal is that anyone who wishes to self-identify will be able to do so in the future." 

The company also notes that verification doesn't come with any exclusive perks, like increased visibility or immunity from the rules of individual subreddits. Reddit requires accounts to be in good standing and already active on the platform in order to be eligible for verification. Accounts that are marked NSFW or that "primarily engage in NSFW-tagged communities" won't be eligible. 


This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-is-starting-to-verify-public-figures-170000833.html?src=rss