X says it is changing its policies around Grok’s image-editing abilities following weeks of outcry over accusations that the chatbot repeatedly generated sexualized images of children and nonconsensual nudity. In an update shared from the @Safety account on X, the company said it has “implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis.”
The new safeguards, according to X, will apply to all users regardless of whether they pay for Grok. xAI is also moving all of Grok’s image-generating features behind its subscriber paywall so that non-paying users will no longer be able to create images. And it will geoblock "the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X" in regions where it's illegal.
The company's statement comes hours after the state of California opened an investigation into xAI and Grok over its handling of AI-generated nudity and child exploitation material. A statement from California Attorney General Rob Bonta cited one analysis that found "more than half of the 20,000 images generated by xAI between Christmas and New Years depicted people in minimal clothing," including some that appeared to be children.
In its update, X said that it has "zero tolerance" for child exploitation and that it removes "high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity" from its platform. Earlier in the day, Elon Musk said he was "not aware of any naked underage images generated by Grok." He later added that when its NSFW setting is enabled, "Grok is supposed [sic] allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV." He added that "this will vary in other regions" based on local laws.
Malaysia and Indonesia both recently moved to block Grok citing safety concerns and its handling of sexually explicit AI-generated material. In the UK, where regulator Ofcom is also investigating xAI and Grok, officials have also said they would back a similar block of the chatbot.
This article originally appeared on Engadget at https://www.engadget.com/ai/x-says-grok-will-no-longer-edit-images-of-real-people-into-bikinis-231430257.html?src=rss
California authorities have launched an investigation into xAI following weeks of reports that the chatbot was generating sexualized images of children. "xAI appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the internet, including via the social media platform X," California Attorney General Rob Bonta's office said in a statement.
The statement cited a report that "more than half of the 20,000 images generated by xAI between Christmas and New Years depicted people in minimal clothing," including some that appeared to be children. "We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material," Bonta said. "Today, my office formally announces an investigation into xAI to determine whether and how xAI violated the law."
The investigation was announced as California Governor Gavin Newsom also called on Bonta to investigate xAI. "xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile," Newsom wrote.
California authorities aren't the first to investigate the company following widespread reports of AI-generated child sexual abuse material (CSAM) and non-consensual intimate images of women. UK regulator Ofcom has opened an official inquiry, European Union officials say they're looking into the issue and Malaysia and Indonesia have moved to block Grok.
Last week, xAI began imposing rate limits on Grok's image generation abilities, but has so far declined to pull the plug entirely. When asked to comment on the California investigation, xAI responded with an automated email that said "Legacy Media Lies."
Earlier on Wednesday, Elon Musk said he was "not aware of any naked underage images generated by Grok." Notably, that statement does not directly refute Bonta's allegation that Grok is being used "to alter images of children to depict them in minimal clothing and sexual situations." Musk said that "the operating principle for Grok is to obey the laws" and that the company works to address cases of "adversarial hacking of Grok prompts."
This article originally appeared on Engadget at https://www.engadget.com/ai/california-is-investigating-grok-over-ai-generated-csam-and-nonconsensual-deepfakes-202029635.html?src=rss
Last month, Instagram began rolling out a new set of controls that let users personalize the topics recommended to them by the Reels algorithm. Now, Meta is making that feature available to all English-language users of the app globally, along with the ability to highlight their top topics for the coming year.
The feature begins with a selection of topics Meta's AI thinks you're interested in based on your recent activity, and has controls to remove them or add new categories. There's also a separate field for identifying what you want to see less of, and a new "build your 2026 algorithm" option that allows you to highlight three topics in particular.
Meta's algorithm tagged a skiing clip as "snowboarding."
Screenshot via Instagram
I don't have the 2026-specific control yet, but I was able to tweak some of my preferred topics and was surprised at how quickly the algorithm seemed to adjust. I added "snowboarding" as a topic and later, when I clicked over to Reels, the first clip I saw was tagged "snowboarding." Unfortunately, the video wasn't actually about snowboarding — it featured a clip of a freestyle skiing event — so Meta's systems might still need a little work classifying the actual content. But given how sensitive the Reels algorithm can be, it's nice to have a way of opting out of interests, even ones picked up from a brief trip down a rabbit hole.
The feature won't, however, let you ask to see fewer ads. I tried to add "ads" to my "what you want to see less of" list and received an error: "No results found. Try another topic or interest." I was able to successfully add "sponsored content" and "AI" to my "see less" list, though I'm pretty sure the latter will affect videos about AI rather than those made with the help of it.
This article originally appeared on Engadget at https://www.engadget.com/social-media/instagram-wants-you-to-personalize-your-reels-algorithm-for-2026-215252736.html?src=rss
Several of Meta's VR studios have been affected by the company's metaverse-focused layoffs. The company has shuttered three of them: Armature, Sanzaru and Twisted Pixel. VR fitness app Supernatural, meanwhile, will no longer be updated with fresh content.
Employees at Twisted Pixel, which released Marvel's Deadpool VR in November, and Sanzaru, known for Asgard's Wrath, posted on social media about the closures. Bloomberg reported that Armature, which brought Resident Evil 4 to Quest back in 2021, has also closed and that the popular VR fitness app Supernatural will no longer get updates.
“Due to recent organizational changes to our Studio, Supernatural will no longer receive new content or feature updates starting today,” the company wrote in an update on Facebook. The app “will remain active” for existing users.
A spokesperson for Meta confirmed the closures. "We said last month that we were shifting some of our investment from Metaverse toward Wearables," the spokesperson said in a statement to Engadget. "This is part of that effort, and we plan to reinvest the savings to support the growth of wearables this year."
The cuts raise questions about Meta's commitment to supporting a VR ecosystem it has invested heavily in. The company hasn't announced any new VR headsets since the Quest 3S in 2024, and last month it "paused" planned Horizon OS headsets from Asus and Lenovo. Now, it's pulling back on in-house game development too.
Meta is claiming, internally at least, that it remains committed to supporting the industry. “These changes do not mean we are moving away from video games,” Oculus Studios director Tamara Sciamanna wrote in a memo reported by Bloomberg. "With this change we are shifting our investment to focus on our third-party developers and partners to ensure long-term sustainability.”
Update, January 13, 2026, 2:13PM PT: This post was updated with additional information about Supernatural.
This article originally appeared on Engadget at https://www.engadget.com/ar-vr/meta-has-closed-three-vr-studios-as-part-of-its-metaverse-cuts-202720670.html?src=rss
On the heels of Mark Zuckerberg announcing that Meta's former board member, Dina Powell McCormick, would be formally joining the company as president and vice chairman, the CEO has shared new details about her purview. The executive will play a key role overseeing Meta's sprawling infrastructure investments as part of a newly announced initiative called Meta Compute.
"Meta is planning to build tens of gigawatts this decade, and hundreds of gigawatts or more over time," Zuckerberg said in an update. "How we engineer, invest, and partner to build this infrastructure will become a strategic advantage."
Zuckerberg said that Meta's head of global engineering Santosh Janardhan will lead the "top-level initiative" and that recent hire and former Safe Superintelligence CEO Daniel Gross will "lead a new group responsible for long-term capacity strategy, supplier partnerships, industry analysis, planning, and business modeling." McCormick is expected to "work on partnering with governments and sovereigns to build, deploy, invest in, and finance Meta's infrastructure."
Meta has been investing heavily in infrastructure to fuel its AI "superintelligence" ambitions. The company also recently announced three agreements to buy massive amounts of nuclear energy to help power its data centers. Zuckerberg has previously said he expects Meta to spend $600 billion on AI infrastructure and jobs by 2028.
This article originally appeared on Engadget at https://www.engadget.com/ai/mark-zuckerberg-announces-new-meta-compute-initiative-for-its-data-center-and-ai-projects-192100086.html?src=rss
CES always has its share of attention-grabbing robots. But this year in particular felt like a landmark one for robotics. The advancement in AI technology has not only given robots better “brains,” it’s enabled new levels of autonomy and given rise to an ambitious, if sometimes questionable, vision for our robot-filled future.
From sassy humanoids to AI-powered pets and chore-handling assistants, we sought out as many cute, strange and capable robots as we could find in Las Vegas. These are the ones that made the biggest impression.
Agibot Humanoids
Agibot's X2 humanoid robot.
Karissa Bell for Engadget
Of all the humanoids we saw at CES, Agibot's made the biggest impression. The company was showing off two models: the larger A2 and the smaller X2 (pictured above). The latter impressed us with its dance moves — the company told us it can learn surprisingly complex choreography — while the A2 turned out to be remarkably capable at chatting up CES-goers.
Later in the show, we came across the A2 at IntBot's booth, where the company had custom versions of both Agibot humanoids "running" their booth. I spent several minutes talking with "Nylo" and was genuinely impressed by its conversational skills, even if its roasts could use a little work. — Karissa Bell, Senior Reporter
Dreame's robo vac arms and legs
Dreame was back this year with some wild robot vacuums. The company showed off the Cyber 10 Ultra, a robot vacuum with a multipurpose extendable arm. The arm, which we got a glimpse of at last year's show, can pick up stuff, but it also has its own cleaning attachments, allowing the robot to clean hard-to-reach corners and other spots that wouldn't otherwise be accessible.
Dreame also brought its latest wild concept, the Cyber X, which has legs that propel it up and down full-size staircases. The legs are somewhat unsettling — they look alarmingly similar to mini chainsaws — but watching it glide up and down stairs was impressive all the same. — KB
OlloBot
The long-neck version of OlloBot.
Cheyenne MacDonald for Engadget
OlloBot is one of those semi-ridiculous CES robots that's just impossible not to smile at. It has the goofiest face, with top-sitting frog eyes slapped onto a tablet where its mouth is displayed. Then, on top of that, it has a patch of soft fur on its neck and nowhere else on its body, which is penguin-shaped, complete with flappy little arms. There are two versions of OlloBot: one that's short with a fixed neck and another whose neck can stretch out to make it much taller. And of course, it can be dressed up in silly outfits.
It's a family-focused robot that responds to voice commands and touch, and is meant to capture memories as they happen, snapping pics and videos for its diary of notable moments. It can be used to make calls and control smart home devices. Everything is stored locally in its removable heart module, and there's a companion app for additional interactions. — Cheyenne MacDonald, Weekend Editor
Rovie
A robot with a dustpan-like appendage dumps toys into a bin.
Cheyenne MacDonald for Engadget
Sure, we've seen multiple robots (particularly robovacuums) that can pick objects up off the floor and put them away to make homes tidier, but this one is cute and has a little face. Instead of using an arm to grab one thing at a time, Clutterbot's Rovie has a dustpan-style tray with two sweepers that fold out from its front. It drives around and, using computer vision, identifies toys that have been left on the floor and scoops them up. Then, it dumps them in a designated bin where they're consolidated and out of the way.
It's still in the R&D phase, a team member said when I visited the booth, but this is one I'm hoping to see become a real, purchasable product soon. For parents of small children who are constantly leaving their toys around, it would be pretty convenient to have a tiny robot picking up after them. It would also be convenient for me — I don't have children, but I do have a very sweet, hardworking cat who loves to steal socks and then deliver them as if they're her kills, leaving socks scattered all over the house. Clutterbot team, if you're reading this, please add socks to the list of items Rovie can sweep up. — CM
Saros Rover
Not to be outdone, Roborock also brought a stair-climbing robot vacuum to CES, the Saros Rover. And, unlike Dreame's prototype, the Roborock can also clean the stairs while it climbs. No word on when it will be available or how much it might cost (probably a lot!), but the company says it is "a real product in development." — KB
CLOiD
CLOiD folded laundry at LG's CES booth.
Karissa Bell for Engadget
LG's CLOiD was definitely the most ambitious robot we saw at CES 2026. The company showed its home helper concept (slowly) folding and sorting laundry, fetching drinks from the fridge, putting food in the oven and retrieving a set of lost keys. But while the 15-minute demo gave us a tantalizing look at the appliance maker's vision for a "zero labor home," it's unlikely to be anything more than a slick demo anytime soon. The company has made no commitment to making a version of CLOiD that people can actually buy. — KB
Allex
WIRobotics' Allex robot makes a heart sign with its hands.
Cheyenne MacDonald for Engadget
WIRobotics brought its new humanoid, Allex, to CES, and the robot was really hamming it up when we stopped by the booth, striking poses and engaging with visitors. It's a waist-up robot with articulated parts, from its arms to its fingers, and is meant to be a general-purpose tool that could be used in manufacturing, the service industry or even households. Each hand can hold objects up to about 6.6 pounds, and the robotic hand has 15 degrees of freedom. The company's website shows the robot's fingers are dexterous enough to do the Gen-Z heart sign, but when it looked at Karissa and me it threw up a millennial heart. Did Allex lowkey call us unc? — CM
Poketomo
Poketomo in one of the many outfits Sharp brought to CES.
Cheyenne MacDonald for Engadget
Sharp's Poketomo is an improbably adorable tiny meerkat. Well, technically it's an AI companion shaped like a fuzzy, portable meerkat. It might look like a toy, but the company says it's actually meant to be a companion for adults.
It’s small enough that you can carry it around with you throughout the day (Sharp even makes a tiny Poketomo-sized clear backpack). Like a lot of AI companion devices we saw at CES, it’s equipped with a small camera and microphone that enable it to constantly interact with you. The camera also powers its “memory,” so the pet can recognize its person and deliver personalized updates. Poketomo launched recently in Japan, but sadly Sharp says it has no current plans to sell it in other markets. — KB
Bibo
Moony bibo (I-Type).
Cheyenne MacDonald for Engadget
It seemed like everyone was trying to cash in on Labubu hype at CES 2026. There were Pop Mart-style bag charms all over the place and countless products that looked suspiciously like the now-ubiquitous toy monster. We even got one pitch for "a labubu-like robot that talks to you" that, in fact, did not look like a Labubu in any way, shape or form. But there was one truly Labubu-like tiny robot that managed to stand out from the rest and kind of stole my heart, even though I'm not particularly into Labubus. (Please don't make me say Labubu ever again.)
Bibo is a cute-as-hell AI toy that's meant to be a companion you bring with you everywhere. It has a little camera on its head that it uses to see the world around it, and can recognize its owner's face and tone of voice, so it can respond to interactions in an emotionally appropriate way. It'll keep a daily diary of its activities, and while the toy comes in two starting personality "types" — Sunny bibo (E-Type), the bubbly extrovert, and Moony bibo (I-Type), the gentle, sensitive one — they'll develop more unique personalities over time. Their fur is soft and warm, so it feels like you're petting a kitten.
Why is it even cuter like this?
Cheyenne MacDonald
At the booth, the team had several of them on display wearing various outfits, in little dioramas showing them in classroom and camping scenes, and even deconstructed with the fur removed, which somehow made it look even cuter. Bibo isn't available to purchase yet, and when it is, it'll launch first in China before potentially expanding depending on its success at home. — CM
Sharpa
Sharpa's humanoid robot is seen playing ping-pong.
Cheyenne MacDonald for Engadget
Sharpa's booth had a lot going on and was definitely one of the bigger crowd-pullers. There was a humanoid robot playing ping-pong, another taking selfies with people and another dealing blackjack, along with a disembodied robotic hand that could mirror visitors' finger movements. The autonomous demos showed off what that highly dexterous hand can do, and it was pretty impressive — especially seeing it draw individual cards from the deck. — CM
Zeroth
Zeroth's W1 robot.
Cheyenne MacDonald for Engadget
Chinese robotics startup Zeroth brought two adorable home robots to CES: a pint-sized humanoid companion bot and a rolling robot that looks like Wall-E, with tank-style tracked treads so it can ride around outside. We didn't see these guys doing too much, but they sure were cute. The one that resembles Wall-E, called W1, kind of melted my heart just looking at it. (Don't get attached, you can't afford it.)
The tiny humanoid, M1, costs $2,400 while W1 costs $5,000. Both are expected to ship this spring, with a tentative date of April 15. — CM
Sweekar
Sweekars in their little outfits.
Karissa Bell for Engadget
Takway's Sweekar pocket pet was something I looked at and immediately thought, sigh, I'm going to buy that. It's a Tamagotchi-like virtual pet with AI smarts, so it can form a personality based on your interactions with it and the activities you do together. The idea is that it "grows" with you. Like a Tamagotchi, it will require more frequent care in the younger stages of its life cycle. But after it reaches the adult stage, it cares for itself autonomously, and it never dies. It can eventually keep itself entertained, go off on its own virtual adventures and bring you back tales of its travels.
Sweekar is super cute as is, and it can be dressed up in little outfits for more personalization. The device comes in light yellow, pink, and blue, and we saw it sporting a snowboarder outfit and a full cowboy getup. — CM
Realbotix
One of Realbotix's robots.
Cheyenne MacDonald for Engadget
Realbotix is a company we've seen a lot at CES over the years, and it was at the show again for 2026 with several of its highly customizable, realistic humanoid robots. As always, it was among the most unnerving exhibits we saw. New for this year, Realbotix was demonstrating its Robotic Vision System, which allows its robots to see and react to their surroundings more naturally, tracking faces to look directly at whoever is talking and better reading emotion from facial expressions. Damn, it can sense my fear now… — CM
Onero H1
Onero H1 had an endearingly blank stare.
Karissa Bell for Engadget
Switchbot surprised us with its own chore-handling robot, Onero H1, which also won Engadget editors' pick for best robot of CES 2026. We were immediately taken by its weirdly long body and endearingly blank stare as it slowly wheeled around picking up laundry and depositing the items in a washing machine.
As with a lot of the robot demos we saw at CES, Onero was only performing a small part of what Switchbot says it's actually capable of. But Onero also seemed much more realistic as the kind of robot helper people might actually see outside of CES, and the company told us it does plan to sell Onero (albeit in limited quantities) by the end of the year. — KB
Cocomo
Ludens AI Cocomo robot.
Cheyenne MacDonald for Engadget
Another robot pet that won us over immediately was Cocomo. Created by Japanese startup Ludens AI, Cocomo is an autonomous robot friend that, yes, uses AI to respond to voice and touch and is meant to bond with its owners over time. The egg-shaped creature can scoot around on a wheeled base, or you can carry it around with you.
But what we loved about Cocomo is that it's not trying to be yet another AI assistant that gives out life advice or performs tasks. Its goal is to provide companionship and, well, be your friend. And while it can respond to voice input, it doesn't exactly have a voice of its own: it communicates via cute humming sounds, which is a lot less creepy than some of the talking robots we saw. — KB
Yonbo
Yonbo at CES.
Cheyenne MacDonald for Engadget
Yonbo is a kids' AI companion robot that totally charmed us. It kind of looks like a dog, and when we visited its booth at Unveiled, there were four of them playfully bopping their heads to a pop song and cycling through different cute facial expressions and emoji eyes (including bowls of ramen). It's designed to be an intelligent playmate that can tag along for activities, talk with a child and read them stories, and even help them work through emotions, like getting frustrated during a game.
Yonbo's movement is controlled by a wristband, so it doesn't require a phone to play with. It can also be used as an extra pair of eyes for parents around the house. In Parental Monitor mode, which the team says is the only time its camera will be able to stream and store video, parents are able to see what Yonbo sees. The robot costs $800 and is available now. — CM
MÖFO
MÖFO in a glass case at CES.
Cheyenne MacDonald for Engadget
If we're being completely honest, the pitch for will.i.am's MÖFO (yes, MOFO, like motherfucker) had us a bit, um, perplexed for a hot second. We read it and all the accompanying materials over and over trying to figure out what, exactly, this thing does. Some of the claims that added to this confusion: "the agent 'octopuses' across your digital ecosystem through its eight USB-C connections"; it "converts moments into objects"; it "turns life notes into a life operating system."
We get it now (we think): It's agentic AI hardware, kind of like a Rabbit R1 or AI Pin but in the form of a teddy bear. Sadly, we didn't get to see MÖFO up close or watch it do anything, but we are nonetheless intrigued, if still a bit confused, by this strange teddy bear. — CM and KB
This article originally appeared on Engadget at https://www.engadget.com/ai/the-robots-we-saw-at-ces-2026-the-lovable-the-creepy-and-the-utterly-confusing-153537930.html?src=rss
When Meta first announced its display-enabled smart glasses last year, it teased a handwriting feature that allows users to send messages by tracing letters with their hands. Now, the company is starting to roll it out, with people enrolled in its early access program getting it first.
I got a chance to try the feature at CES, and it made me want to start wearing my Meta Ray-Ban Display glasses more often. When I reviewed the glasses last year, I wrote about how one of my favorite things about the neural band is that it reduced my reliance on voice commands. I've always felt a bit self-conscious about speaking to my glasses in public.
Until now, replying to messages on the display glasses has generally required voice dictation or generic preset replies. But handwriting means that you can finally send custom messages and replies somewhat discreetly.
Sitting at a table wearing the Meta Ray-Ban Display glasses and neural band, I was able to quickly write a message just by drawing the letters on the table in front of me. It wasn't perfect — it misread a capital "I" as an "H" — but it was surprisingly intuitive. I was able to quickly trace out a short sentence and even correct a typo (a swipe from left to right will let you add a space, while a swipe from right to left deletes the last character).
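If you're curious how those swipe corrections translate into text edits, here's a minimal sketch of a gesture-to-edit loop. To be clear, the gesture labels, the simulated event stream and the function below are hypothetical stand-ins, not Meta's actual software — just the behavior I saw, expressed in code.

```python
# A toy gesture-to-edit loop in the spirit of the neural band's corrections.
# All names here (gesture labels, the event list) are hypothetical stand-ins.

def apply_gesture(text: str, gesture: str) -> str:
    """Apply a recognized wrist gesture to the draft message."""
    if gesture == "swipe_right":  # left-to-right swipe adds a space
        return text + " "
    if gesture == "swipe_left":   # right-to-left swipe deletes the last character
        return text[:-1]
    return text

# Simulated recognizer output: the band misreads a capital "I" as "H",
# and a left swipe deletes it before the letter is retraced.
draft = ""
for event in ["H", "swipe_left", "I", "swipe_right", "A", "M"]:
    if event.startswith("swipe_"):
        draft = apply_gesture(draft, event)
    else:
        draft += event  # a letter recognized from a traced stroke
print(draft)  # -> "I AM"
```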
Alongside handwriting, Meta announced a new teleprompter feature. Copy and paste a bunch of text — it supports up to 16,000 characters (roughly a half-hour's worth of speech) — and you can beam it into the glasses' display.
If you've ever used a teleprompter, you'll find Meta's version works a bit differently: the text doesn't automatically scroll while you speak. Instead, it's displayed on individual cards you manually swipe through. The company told me it originally tested a scrolling version, but that in early tests, people said they preferred to be in control of when the words appeared in front of them.
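As a rough illustration of that card model, here's how a pasted script might be chunked for manual swiping. The 16,000-character ceiling is Meta's stated limit; the per-card capacity and the function itself are assumptions made for the sake of the sketch.

```python
# Splitting a pasted script into swipeable cards. The 16,000-character limit
# comes from Meta's announcement; the per-card capacity is our guess.

MAX_SCRIPT_CHARS = 16_000
CHARS_PER_CARD = 280  # assumed display capacity of one card

def split_into_cards(script: str) -> list[str]:
    """Break a script into word-preserving chunks the wearer swipes through."""
    if len(script) > MAX_SCRIPT_CHARS:
        raise ValueError("script exceeds the 16,000-character limit")
    cards: list[str] = []
    current = ""
    for word in script.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > CHARS_PER_CARD and current:
            cards.append(current)  # card is full; start the next one
            current = word
        else:
            current = candidate
    if current:
        cards.append(current)
    return cards

cards = split_into_cards("Good evening everyone, and welcome to the show. " * 50)
print(f"{len(cards)} cards; first card: {cards[0][:40]}...")
```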
Teleprompter is starting to roll out now, though Meta says it could take some time before everyone is able to access it.
The updates are among the first major additions Meta has made to its display glasses since launching them late last year, and a sign that, as with its other smart glasses, the company plans to keep them fresh with new features. Elsewhere at CES, the company announced some interesting new plans for the device's neural band and said it was delaying a planned international rollout of the device.
This article originally appeared on Engadget at https://www.engadget.com/wearables/handwriting-is-my-new-favorite-way-to-text-with-the-meta-ray-ban-display-glasses-213744708.html?src=rss
It's only been a few months since Meta announced that it would open its smart glasses platform to third-party developers. But one startup at CES is already showing off how the glasses can help power an intriguing set of accessibility features.
Hapware has created Aleye, a haptic wristband that, when paired with Ray-Ban Meta smart glasses, can help people understand the facial expressions and other nonverbal cues of the people they are talking to. The company says the device could help people who are blind, low vision or neurodivergent unlock a type of communication that otherwise wouldn't be available.
Aleye is a somewhat chunky wristband that vibrates in specific patterns corresponding to the facial expressions and gestures of the person you're talking to. It uses the glasses' computer vision abilities to stream video of your conversation to the accompanying app, which uses an algorithm to detect facial expressions and gestures.
The bumps on the underside of the Aleye vibrate to form unique patterns.
Karissa Bell for Engadget
Users can customize which expressions and gestures they want to detect in the app, which also provides a way for people to learn to distinguish between the different patterns. Hapware CEO Jack Walters said that in early testing, people have been able to learn a handful of patterns within a few minutes. The company has also tried to make them intuitive. "Jaw drop might feel like a jaw drop, a wave feels more like a side to side haptics," he explains.
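Hapware hasn't published how its patterns are encoded, but the idea maps naturally to a small lookup of cue-to-pulse sequences. The sketch below is an illustrative guess — the actuator layout, timings and names are invented, and the print call stands in for real hardware.

```python
# A minimal sketch of mapping detected expressions to wristband vibrations,
# following Hapware's description ("a wave feels more like a side to side
# haptics"). Pattern encodings and names are guesses, not Hapware's firmware.

import time

# Each pattern: a sequence of (actuator_index, duration_in_seconds) pulses
# across an assumed row of three actuators on the wrist.
HAPTIC_PATTERNS = {
    "smile":    [(0, 0.1), (1, 0.1), (2, 0.1)],           # sweep across the wrist
    "jaw_drop": [(1, 0.4)],                               # one long center pulse
    "wave":     [(0, 0.1), (2, 0.1), (0, 0.1), (2, 0.1)], # side-to-side
}

def play_pattern(cue: str, enabled_cues: set[str]) -> None:
    """Vibrate the wristband for a detected cue, if the user has enabled it."""
    if cue not in enabled_cues:
        return  # users can customize which expressions get delivered
    for actuator, duration in HAPTIC_PATTERNS.get(cue, []):
        print(f"pulse actuator {actuator} for {duration}s")  # hardware stand-in
        time.sleep(duration)

# The companion app would call this as the glasses' video feed is analyzed.
play_pattern("wave", enabled_cues={"smile", "wave"})
```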
The app is also able to use Meta AI to give vocal cues about people's expressions, though Hapware's CTO Dr. Bryan Duarte told me it can get a bit distracting to talk to people while the assistant is babbling in your ear. Duarte, who has been blind since a motorcycle accident at the age of 18, told me he prefers Aleye to Meta AI's other accessibility features like Live AI. "It will only tell me there's a person in front of me," he explains. "It won't tell me if you're smiling. You have to prompt it every time, it won't just tell you stuff."
Hapware has started taking pre-orders for the Aleye, which starts at $359 for the wristband or $637 for the wristband plus a year's subscription to the app (a subscription is required and otherwise costs $29 a month). A pair of Ray-Ban Meta glasses isn't included either, though Meta has been building a number of its own accessibility features for the device.
This article originally appeared on Engadget at https://www.engadget.com/wearables/this-haptic-wristband-pairs-with-meta-smart-glasses-to-decode-facial-expressions-214305431.html?src=rss
When LG announced that it would demo a laundry-folding, chore-doing robot at CES 2026, I was immediately intrigued. For years, I've wandered the Las Vegas Convention Center halls and wondered when someone might create a robot that can tackle the mundane but useful tasks I despise, like folding laundry. With CLOiD (pronounced like "Floyd"), LG has proven that this is theoretically possible, but not likely to happen anytime soon.
I went to the company's CES booth to watch its demonstration of CLOiD's abilities, which also include serving food, fetching objects and fitness coaching. During a very carefully choreographed 15-minute presentation, I watched CLOiD grab a carton of milk out of the fridge, put a croissant in an oven, sort and fold some laundry and grab a set of keys off a couch and hand them to the human presenter.
Throughout the demonstration, LG showed off how its own appliances can play along with the robot. When it rolled over to the fridge, the door automatically opened, as did the oven. When the LG-branded robot vacuum needed to move around a hamper, CLOiD helpfully cleared the path. But the robot also moved very slowly.
The appliance maker is selling the setup as a part of its vision for a "zero labor home" where its appliances and, I guess, robotics technology can come together to take care of all your chores and household upkeep. Maybe I'm jaded from a decade of watching CES vaporware, but I left the slick demo thinking the concept is unlikely to amount to much anytime soon.
On one hand, it is exciting to see robots competently performing tasks that would actually be useful to most people. But this technology is still far from accessible. Even LG isn't making any firm commitments about CLOiD's future as anything more than a CES demo. The company has instead said that CLOiD is a signal of its interest in creating "home robots with practical functions" and "robotized appliances," like fridges with doors that can open automatically.
That may be a more reasonable target for the company (and yet another way for LG to sell us more appliance upgrades). But it's still pretty far from anything approaching the fantasy of a "zero labor home."
This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/lgs-cloid-robot-can-fold-laundry-and-serve-food-very-slowly-181902306.html?src=rss
I'll admit that I've always kind of taken walking for granted. Other than a knee injury more than a decade ago, my ability to walk long distances has largely been limited only by my own choices. That's not the case for everyone, though. And robotics company Dephy has created a pair of robotic sneakers, called the Sidekick, that are meant to help people who want to walk more than their bodies might otherwise be capable of.
The system consists of two parts: an ankle-worn exoskeleton and a special pair of sneakers that attach to it. The exoskeleton hooks onto the back of the shoe and is secured with a strap around your calf. The battery-powered device is equipped with sensors that detect and adapt to the wearer's gait in order to deliver an extra "boost" with each step.
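Dephy hasn't detailed its controller, but the basic idea — sense where you are in a step and push only during push-off — can be sketched as a simple control loop. Everything below (signal names, thresholds, torque values) is invented for illustration, not Dephy's actual design.

```python
# A toy control loop for a gait-assist exoskeleton: estimate gait phase from
# sensor readings and add motor torque only during push-off.

def assist_torque(heel_pressure: float, ankle_velocity: float,
                  power_level: int) -> float:
    """Return motor torque (Nm) for one sensor sample.

    Fires only during push-off: heel loaded and ankle rotating forward.
    power_level stands in for the Sidekick's adjustable assistance settings.
    """
    PUSH_OFF_PRESSURE = 0.6  # assumed normalized heel-load threshold
    TORQUE_PER_LEVEL = 2.0   # assumed Nm added per power level
    if heel_pressure > PUSH_OFF_PRESSURE and ankle_velocity > 0:
        return TORQUE_PER_LEVEL * power_level
    return 0.0  # no boost during swing or heel strike

# Simulated samples across one step: swing, heel strike, then push-off.
for pressure, velocity in [(0.1, -0.2), (0.9, 0.0), (0.8, 1.5)]:
    print(assist_torque(pressure, velocity, power_level=2))  # 0.0, 0.0, 4.0
```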
The whole setup is pricey, at $4,500, but Dephy is betting that people who have "personal range anxiety" might be willing to pay for the extra confidence the Sidekick can provide. "This is a device that's kind of like [having] an extra calf muscle," Dephy CEO Luke Mooney told me.
The Sidekick.
Karissa Bell for Engadget
I was able to take the Sidekick for a spin around the CES show floor, and it was a truly surprising sensation. The best way I can describe walking with the Sidekick powered on is that with every step forward there's a noticeable upward push from under your heel. It wasn't enough to throw me off balance, but it did feel a bit strange.
The Sidekick has adjustable power levels based on how much help you might need. At the highest level, it definitely felt unnecessarily pushy. The lower levels were still noticeable but felt less disruptive. I just felt… bouncy. Later, when Mooney turned off the power entirely, I noticed that my feet felt weirdly heavy in a way they hadn't just a few minutes before.
Mooney was quick to tell me that I'm not Dephy's target demographic. "A lot of times people who are fit, or like athletes, actually struggle to adopt to the technology because their body's so in tune with how they move," he said. "Whereas folks who are not as physically active and fit, their body's ready to accept help."
The company's technology will be used in products more focused on athletic performance, however. Dephy has partnered with Nike on its upcoming robotic sneaker currently known as Project Amplify. Mooney declined to share details on the collaboration, but the shoemaker has claimed that some early testers have been able to improve their mile times by two minutes.
I tried the Sidekick early in the day. Several hours later, though, when I was walking between the Las Vegas Convention Center halls for the third or fourth time, I started thinking about those robotic sneakers again. I was getting close to 10,000 steps and hadn't sat down for hours. My feet were sore. I remembered that strange, bouncy boost and thought it sounded kind of nice.
This article originally appeared on Engadget at https://www.engadget.com/wearables/these-robotic-sneakers-gave-me-a-surprising-boost-at-ces-174500005.html?src=rss