This 6-Fingered Robot Hand Crawls Away From Its Own Arm

Imagine a robotic hand that not only mimics human dexterity but completely reimagines what a hand can do. Researchers at École Polytechnique Fédérale de Lausanne (EPFL) have developed something that looks like it crawled straight out of a sci-fi fever dream: a modular robotic hand that can detach from its arm, scuttle across surfaces spider-style, and grab multiple objects at once.

The human hand has long been considered the gold standard for dexterity. But here’s the thing about trying to replicate perfection: you often inherit its limitations, too. Our hands are fundamentally asymmetrical. We have one opposable thumb per hand, which means we’re constantly repositioning our wrists and contorting our bodies to reach awkwardly placed objects or grasp items from different angles. Try reaching behind your back while keeping a firm grip on something, and you’ll quickly understand the problem.

Designer: École Polytechnique Fédérale de Lausanne’s (EPFL) School of Engineering

The team at EPFL, led by Aude Billard from the Learning Algorithms and Systems Laboratory, decided to throw the rulebook out the window. Instead of copying human anatomy, they created something better: a symmetrical hand that features up to six identical fingers, each tipped with silicone for grip. The genius lies in the design, where any combination of fingers can form opposing pairs for pinching and grasping. No single designated thumb here.

But wait, it gets wilder. The hand is completely reversible, meaning the palm and back are interchangeable. Flip it over, and it works just as effectively from either side. This eliminates the need for awkward repositioning and opens up grasping possibilities that humans simply can’t achieve. The device can perform 33 different types of human grasping motions, and thanks to its modular design, it can hold multiple objects simultaneously with fewer fingers than we’d need.

The most mind-bending feature? This hand can literally walk away from its job. Using a magnetic attachment and motor-driven bolt system, it detaches from its robotic arm and crawls independently to retrieve objects beyond the arm’s reach. Picture a warehouse robot that needs to grab something just out of range. Instead of the entire system repositioning, the hand simply walks over, grabs what it needs, and returns like a loyal (if slightly creepy) pet.

The practical applications are staggering. In industrial settings, this kind of “loco-manipulation” (locomotion plus manipulation) could revolutionize how robots interact with their environments. Service robots could navigate complex spaces and handle multiple tasks without constant human intervention. In exploratory robotics, think Mars rovers or deep-sea vehicles, a detachable hand could investigate tight spaces or retrieve samples from areas the main body can’t access.

The research team’s work, published in Nature, demonstrates that symmetrical design provides measurably better performance, with 5 to 10 percent improvements in crawling distance compared to traditional asymmetric configurations. The hand’s 160 mm diameter palm houses motors that mimic the natural forward movement of human finger joints, but without being constrained by human limitations.

What makes this project so compelling isn’t just the technical achievement. It’s the philosophical shift it represents. For years, robotics has been obsessed with replicating human form and function. But by questioning whether human design is actually optimal for all tasks, the EPFL team has created something that surpasses our biological blueprint. It’s a reminder that innovation often requires abandoning our assumptions about how things should work.

This robotic hand represents more than just another engineering marvel. It’s a glimpse into a future where machines aren’t limited by human constraints, where form follows function in unexpected ways, and where a hand doesn’t need to stay attached to be incredibly handy. Whether it’s retrieving your dropped phone from under the couch or assembling complex machinery in factories, this crawling, grasping, reversible wonder proves that sometimes the best way forward is to let go of convention entirely.

The post This 6-Fingered Robot Hand Crawls Away From Its Own Arm first appeared on Yanko Design.

Artly Robots Master Latte Art and Drinks for CES 2026 Debut

People gather around a robot arm in a café, half for the drink and half for the performance. Most automation in food and beverage still feels either like a vending machine or a novelty, and the real challenge is capturing the craft of a skilled barista or maker, not just the motion of pushing buttons. The difference between a decent latte and a great one often comes down to subtle pressure, timing, and feel.

Artly treats robots less like appliances and more like students in a trade school, learning from human experts through motion capture, multi-camera video, and explanation. At CES 2026, that philosophy shows up in two compact robots, the mini BaristaBot and the Bartender, both built on the same AI arm platform but trained for different kinds of counters. Together, they make a case for automation that respects the shape of the work instead of flattening it.

Designer: Artly AI


mini BaristaBot: A 4×4 ft Café That Learns from Champions

The mini BaristaBot is a fully autonomous café squeezed into a 4 × 4 ft footprint, designed for high-traffic, labor-constrained spaces like airports, offices, and retail corners. One articulated arm handles the entire barista workflow, from grinding and tamping to brewing, steaming, and pouring, with the same attention to detail you would expect from a human who has spent years behind a machine. “At first, I thought making coffee was easy, but after talking to professional baristas, we realized it is not simple at all. There are a lot of details and nuances that go into making a good cup of coffee,” says Meng Wang, CEO of Artly.

The arm is trained on demonstrations from real baristas, including a U.S. Barista Champion, with Artly’s Skill Engine breaking down moves into reusable blocks like grabbing, pouring, and shaping. Those blocks are recombined into recipes, so the robot can reproduce nuanced techniques such as milk texturing and latte art, and adapt to different menus without rewriting code from scratch or relying on rigid workflows. “Our goal is not to automate for its own sake. Our goal is to recreate an authentic, specific experience, whether it is specialty coffee or any other craft, and to build robots that can work like those experts,” Wang explains.
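For the curious, the block-and-recipe idea can be sketched in a few lines of code. This is purely illustrative: the class names, block types, and composition shown here are invented for the example and are not Artly’s actual Skill Engine API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SkillBlock:
    """A reusable unit of motion captured from a human demonstration."""
    name: str
    action: Callable[[], str]  # stand-in for a learned motion policy

# Factory helpers for the three block types mentioned in the article
def grab(item: str) -> SkillBlock:
    return SkillBlock(f"grab:{item}", lambda: f"grabbed {item}")

def pour(liquid: str) -> SkillBlock:
    return SkillBlock(f"pour:{liquid}", lambda: f"poured {liquid}")

def shape(pattern: str) -> SkillBlock:
    return SkillBlock(f"shape:{pattern}", lambda: f"shaped {pattern}")

def run_recipe(blocks: List[SkillBlock]) -> List[str]:
    # A recipe is simply an ordered composition of reusable blocks,
    # so new menu items reuse existing skills instead of new code.
    return [block.action() for block in blocks]

latte = [grab("cup"), pour("espresso"), pour("steamed milk"), shape("rosetta")]
log = run_recipe(latte)
```

The point of the structure is that a new drink is just a new ordering of already-trained blocks, which matches the article’s claim that menus can change without rewriting workflows from scratch.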

“The training in our environment is not just about action: it is about judgment, and a lot of that judgment is visual. You have to teach the robot what good frothing or good pouring looks like, and sometimes you even have to show it bad examples so it understands the difference.” That depth of teaching separates Artly’s approach from simpler automation. The engineering layer uses food-grade stainless steel and modular commercial components, wrapped in a warm, wood-clad shell that looks more like a small kiosk than industrial equipment.

A built-in digital kiosk handles ordering, while Artly’s AI stack combines real-time motion planning, computer vision, sensor fusion, and anomaly detection to keep quality consistent and operation safe in public spaces where people stand close and watch the whole process. “Our platform is like a recording machine for skills. We can record the skills of a specific person and let the robot repeat exactly that person’s way of doing things,” Wang adds, which means a café chain can effectively bottle a champion’s technique and deploy it consistently across multiple sites.

The ecosystem supports plug-and-play deployment, with remote monitoring, over-the-air updates, and centralized fleet management. A larger refrigerator and modular countertops in finishes like maple, white oak, and walnut let operators match different interiors. For a venue, that means specialty coffee without building a full bar, and for customers, it means a consistent drink and a bit of theater every time they walk up.

Bartender: The Same Arm, Trained for a Different Counter

The Bartender is an extension of the same idea, using the Artly AI Arm and Skill Engine to handle precise, hand-driven tasks behind a counter. Instead of focusing on espresso and milk, the robot learns careful measurement, shaking, or stirring techniques, and finishing touches that depend on timing and presentation, all captured from human experts and turned into repeatable workflows. “If the robot learns the technique of a champion, it can repeat that same pattern at different locations. No matter where it performs, it will always create the same result that person did,” Wang notes.

Dexterity is the key differentiator. The Bartender uses a dexterous robotic hand and wrist-mounted vision to pick up delicate garnishes, handle glassware, and move through sequences that normally require a trained pair of hands. The same imitation-learning approach that taught the BaristaBot to pour latte art is now applied to more complex motions, so the arm can execute them smoothly and consistently in a busy environment.

For a hospitality space, the Bartender offers a way to standardize recipes, maintain quality during peak hours, and free human staff to focus on conversation and creativity rather than repetitive prep. Because it shares hardware and software with the BaristaBot, it fits into the same remote monitoring and fleet-management framework, making it easier to run multiple robotic stations across locations without reinventing operational infrastructure for each new skill type.

Artly AI at CES 2026: From Robot Coffee to a Skill Engine for Craft

The mini BaristaBot and the Bartender are not just two clever machines; they are early examples of what happens when a universal skill engine and a capable arm are pointed at crafts that usually live in human hands. For designers and operators, that means automation that respects the shape of the work, and for visitors at CES 2026, it is a glimpse of a future where robots learn from experts and then quietly keep that craft alive, one cup or glass at a time, without demanding that every venue become bigger or that every drink become simpler just to fit a machine.


The post Artly Robots Master Latte Art and Drinks for CES 2026 Debut first appeared on Yanko Design.

Based on sensors in game controllers, this upper-limb wearable robot will help you with your daily chores

One thing exoskeletons have done right is help with motor rehabilitation. Their size and weight have decreased over time, but most of those available are built for rehabilitation, load-bearing assistance, and similar purposes; they are not designed for daily wear. Rather than concentrating on the lower limb, a saturated market, a duo of budding South Korean designers has targeted the upper limb, creating a wearable robot meant for everyday use.

It’s called the Sleev. For now, it hasn’t moved far beyond the drawing board, but judging by what it’s projected to do, it looks like a great solution for the purpose. Sleev is designed as a daily upper-limb exosuit (wearable robot). It supports independent arm movement and is effortless to wear and remove: just one hand, no more!

Designers: Youngha Rho and Sungchan Ko

It’s not that we are seeing a robotic assistant for the arm for the first time. The market is flooded with bulky, inconvenient wearable robots that pack a great deal of technology and robotic sensing but somehow make the wearer feel like a cyborg. With its sleek and lightweight build, the Sleev is conceptualized to change that: a robotic assistant you would actually want to wear. It can be strapped on like any other elbow brace to assist the arm’s movement. Beyond being a crucial option for people recovering from a stroke or sports injury, the Sleev, with its attractive design, would augment daily tasks like lifting and carrying; you would appreciate it when holding a baby for a long stretch or hauling bags of groceries back home.

As a wearable robot conceptualized to bring exoskeletons into daily life, the Sleev is also strong and intelligent enough to support rehabilitation activities. To ensure this, the design integrates FMG (force myography), a method that detects movement intentions through muscle pressure. Muscle pressure varies with a person’s gender, height, weight, and age, so this information about users will be necessary for data accuracy and for the wearable robot to function correctly. A larger database will ensure better results, the designers believe.

By combining FMG with IMU sensors, the designers suggest, the algorithm can work out where the user intends to move and assist accordingly. Both sensor types are affordable and commonly used in game controllers, so they should not be overly expensive if the Sleev makes it into mass production. The exosuit modulates its assistance based on muscle strength and intention, and it doesn’t need to be worn directly on the skin; users can wear it over thin innerwear and carry on with their daily activities.
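To make the fusion idea concrete, here is a minimal sketch of how FMG pressure and an IMU reading might be combined to pick an assistance mode. The thresholds, signal names, and modes are invented for illustration; nothing here comes from the actual Sleev design.

```python
def detect_intent(fmg_pressure: float, imu_elbow_rate: float,
                  pressure_threshold: float = 0.6,
                  rate_threshold: float = 5.0) -> str:
    """Return the assistance mode an exosuit might apply.

    fmg_pressure:   normalized muscle-pressure reading, 0..1
    imu_elbow_rate: elbow angular rate in deg/s (positive = flexing)
    """
    # FMG answers "does the user intend to exert force at all?"
    wants_to_move = fmg_pressure > pressure_threshold
    if not wants_to_move:
        return "idle"
    # The IMU answers "which way is the forearm actually going?"
    if imu_elbow_rate > rate_threshold:
        return "assist_flexion"    # help lift the load
    if imu_elbow_rate < -rate_threshold:
        return "assist_extension"  # help lower it smoothly
    return "hold"                  # isometric grip, e.g. carrying bags

print(detect_intent(0.8, 12.0))  # -> assist_flexion
```

The division of labor mirrors the article: muscle pressure signals intention, while the IMU supplies the direction of motion, and a real system would replace these fixed thresholds with a model trained per user.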

The post Based on sensors in game controllers, this upper-limb wearable robot will help you with your daily chores first appeared on Yanko Design.

RoboGrocery is the first step towards robots packing our groceries

When I first encountered a self-checkout system in IKEA a few years ago, I sort of panicked because I didn’t know what to do. But after figuring things out, I thought it was such a convenient way to shop, especially if you want to keep social interactions to a minimum. Now if only there were also a self-packing system, since bagging the groceries is the most difficult part.

Designer: MIT CSAIL

Eventually, that could come true, and one step towards such a system is the RoboGrocery. Developed by MIT’s CSAIL, it uses a soft robotic gripper together with computer vision to help bag groceries and other small items. It’s still in its early stages, of course, but how it performs at this point looks pretty promising.

The team tested it by placing 10 objects on a grocery conveyor belt, ranging from soft items like grapes, crackers, muffins, and bread to more solid ones like cans, meal boxes, and ice cream containers. The vision system detects the size of each item to determine the order of placing it in a box. The gripper, with pressure sensors in its fingers, then determines whether the item is delicate and should not be placed at the bottom of the bag.
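The ordering logic described above boils down to a simple rule: sturdy, larger items go in first so delicate ones end up on top. Here is a toy sketch of that heuristic; the item scores and sort key are invented for illustration and are not MIT CSAIL’s actual algorithm.

```python
# Each item gets a size estimate (from vision) and a delicacy flag
# (from the gripper's pressure sensors).
items = [
    {"name": "grapes",      "size": 2, "delicate": True},
    {"name": "canned soup", "size": 3, "delicate": False},
    {"name": "bread",       "size": 4, "delicate": True},
    {"name": "ice cream",   "size": 4, "delicate": False},
]

def packing_order(items):
    # Sturdy items first (delicate=False sorts before True in Python),
    # and larger before smaller within each group, so fragile things
    # never end up crushed at the bottom of the bag.
    return sorted(items, key=lambda i: (i["delicate"], -i["size"]))

order = [i["name"] for i in packing_order(items)]
# order: ["ice cream", "canned soup", "bread", "grapes"]
```

The real system has to make these judgments from noisy sensor readings rather than clean labels, which is what makes the pressure-sensing fingers interesting: delicacy is measured, not looked up.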


While we’re still a few steps away from actually having a robot bag our groceries, it’s an interesting first step in that direction. Eventually, once it’s ready for commercial use, the team might also adapt it for industrial spaces like recycling plants and factories.

The post RoboGrocery is the first step towards robots packing our groceries first appeared on Yanko Design.