The Mirror That Knows Your Skin Better Than You Do

Most of us have a complicated relationship with mirrors. We lean in too close, angle our phones for better lighting, and still walk away unsure whether that new moisturizer is actually doing anything. The SIMETRA AI Mirror, designed by Second White, is betting that the problem was never us. It was the mirror itself.

At its core, SIMETRA is a skin analysis system disguised as beautiful bathroom furniture. It reads light, image, and depth data in real time, translating what it sees into precise, actionable feedback about your skin. Not vague impressions. Not generic advice about drinking more water. Actual, measurable intelligence about what’s happening on your face right now, tuned specifically to you.

Designers: Second White

That shift from passive reflection to active analysis feels genuinely significant. The mirror has been one of the least-changed objects in domestic life. For centuries, it asked nothing of us and gave us only what we brought to it. SIMETRA breaks that contract quietly but completely. It observes, interprets, and responds. Whether you find that exciting or slightly unnerving probably says a lot about where you land on the broader AI conversation. From a pure design and utility standpoint, it’s a compelling leap.

What makes Second White’s approach worth paying attention to is how restrained the design is. The temptation with AI-powered beauty tech is to signal intelligence through complexity: screens everywhere, blinking LEDs, the visual vocabulary of a dermatologist’s clinic. SIMETRA goes the other direction entirely. The form is calm and geometric, built around a circular mirror disc that floats beside a fluted, rounded column. The fluting is deliberate. It gives the hardware’s body texture and warmth, grounding what could have been a clinical appliance in something that feels more like a considered object. A sculptural one.

That tension between analytical function and human-centered feeling is exactly what Second White was after. Precision and empathy coexisting within a single form, as the studio describes it. It sounds like a lofty design brief, but looking at the product, it actually lands. The fabric-covered base, the brushed metal details, the soft rounding of every edge. None of it screams technology. It whispers it.

This matters because beauty routines are intimate. They happen in the 15 minutes before the rest of the world gets access to you. Introducing a device that watches, scans, and analyzes during that time requires a certain amount of tact in how it presents itself. A mirror that looks and feels like a piece of thoughtful furniture earns a different kind of trust than one that announces itself as a gadget. Second White understood that tension, and it shows in every material choice.

The smarter conversation here isn’t really about whether AI belongs in your skincare routine. It probably does, in the same way it’s already crept into everything else we track about ourselves: sleep, steps, heart rate. Skin is just the next frontier, and it’s arguably one of the more logical ones. What we’ve historically lacked is a tool precise enough to deliver useful data in the moment, without requiring a clinic visit or a consultation appointment. SIMETRA frames itself as exactly that: professional-level diagnosis, embedded in daily life.

Whether it fully delivers on that promise in practice is a question only time and real-world use will answer. But as a design proposition, it’s already doing a lot right. It treats the user as someone who wants clarity, not just encouragement. It respects the space it’s designed for. And it manages to look like something you’d actually want on your vanity, which is no small thing when you’re asking someone to trust an algorithm with their morning routine.

The mirror has always held a complicated cultural weight. We’ve used it to judge, to prepare, to reassure ourselves. SIMETRA doesn’t erase that history. It adds another layer. One that’s less about judgment and more about knowledge. And if a mirror is going to know things about us anyway, knowing our skin might just be the most useful thing it could do.

The post The Mirror That Knows Your Skin Better Than You Do first appeared on Yanko Design.

The AI Gadget Concept That Shows You the Real Price Before You Buy

If you’ve ever ordered something from an international retailer only to be blindsided by a customs bill at your door, you already know the frustration that designers Taehyeong Kim and Yu Jeong Choi were sitting with when they created zena. It’s a concept device that reads like the future of shopping, but it addresses a problem that is very much happening right now.

The premise is deceptively simple. You point zena at a product, it scans it, and within seconds you have a full breakdown: the item’s price, real-time exchange rates across multiple currencies, applicable duties, and the best purchasing options available. Not the price the retailer wants you to see. The actual, landed cost. The number that follows you home.

Designers: Taehyeong Kim, Yu Jeong Choi

The design team’s background research puts the stakes into perspective. Citing Avalara’s 2024 global consumer survey, their project notes that 68% of shoppers reported a negative experience tied to unexpected cross-border costs, 75% said they wouldn’t repurchase from a retailer after a customs surprise, and 49% refused delivery altogether. That last number is staggering when you sit with it. Nearly half of the people who encountered surprise fees just sent the package back. That’s not only a UX failure. That’s an industry-wide trust problem that e-commerce at large seems unmotivated to solve. So two industrial designers from Daegu, Korea, decided to take a direct swing at it.

The way they’ve approached the physical design is just as compelling as the concept itself. Zena is small, handheld, and wears its function confidently. The camera module sits on a rotating head at the top, giving it a form that feels like a high-end digital camera crossed with a barcode scanner from a much more considered future. It comes in matte black, soft silver, and a sage green that is genuinely lovely, with a woven lanyard strap running through a flush metal eyelet on the side. That strap detail alone signals that these designers cared about the object beyond its utility. It’s the kind of quiet decision that separates a good concept from a great one.

The docking station is worth mentioning too. Docked, zena tilts its camera head upward like it’s curious about something, giving it a personality that feels almost alive. It sits on a desk in a way that makes you want to look at it, which is more than you can say for most gadgets. The dock functions as a charging station as well, which means the device is always ready to go when you reach for it.

On the software side, the UI is clean and intentional. Once zena scans a product, it surfaces the item’s name, price, color options, and a list of purchase prices sorted by country and currency, with duty percentages clearly noted beside each one. A real-time exchange rate graph runs alongside. You pick your preferred price, preferred purchase location, and complete the transaction immediately. The workflow is scan, search, analyze, buy. No extra apps, no tab-switching, no mental math in a foreign currency.
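The arithmetic behind that comparison is simple enough to sketch. Here is a hypothetical landed-cost calculation in Python; every product, duty rate, and exchange rate below is invented for illustration, since zena's actual pricing logic is a concept, not a published API:

```python
# Illustrative landed-cost arithmetic behind a zena-style comparison.
# All prices, duty rates, and exchange rates below are hypothetical.

def landed_cost(list_price, exchange_rate, duty_pct):
    """Convert a foreign list price into the buyer's currency,
    then add import duty: the number that follows you home."""
    converted = list_price * exchange_rate
    return round(converted * (1 + duty_pct / 100), 2)

# The same item listed in three markets: (local price, rate to USD, duty %)
offers = {
    "JP": (14800, 0.0067, 4.5),   # yen
    "FR": (92.0, 1.08, 6.0),      # euro
    "US": (105.0, 1.0, 0.0),      # dollar
}

# Sort by true landed cost rather than sticker price.
ranked = sorted(
    (landed_cost(p, r, d), market) for market, (p, r, d) in offers.items()
)
for cost, market in ranked:
    print(f"{market}: ${cost}")
```

The point the sketch makes is zena's point: the cheapest sticker price and the cheapest landed cost are often different markets once duty is folded in.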

The part that sticks with me is how practical this feels specifically as a travel companion. Imagine walking through a boutique in Tokyo or a market in Paris and actually knowing, before you commit, whether you’re getting a fair price or paying for the privilege of proximity. Right now that calculation happens mostly in your head, half-guessed and usually wrong.

Zena isn’t something you can buy yet. It’s a concept living on Behance for now. But it speaks to a real gap in how we shop globally, and it does so in a package that respects both form and function equally. In a design space full of concepts that look polished but feel purposeless, this one carries a clear point of view. Kim and Choi aren’t just designing a gadget. They’re designing against a system that has been profiting from consumer confusion for years. That’s the kind of ambition that deserves more than just a scroll-past.


OMO X self-balancing electric scooter employs AI and Robotics to refresh urban riding experience

Two-wheelers have always demanded a certain level of skill and balance from riders, especially at low speeds or when navigating crowded city streets. OMO X by Omoway attempts to change that equation by introducing what the company describes as the world’s first mass-produced self-balancing electric motorcycle. Designed around advanced robotics and artificial intelligence, the new-age electric two-wheeler blends traditional scooter convenience with autonomous technology that aims to make urban mobility easier and safer.

At the core of the two-wheeler is Omoway’s newly introduced OMO-ROBOT architecture, a full-stack control platform that integrates sensors, perception systems, decision-making software, and mechanical actuation into a unified framework. The system combines aerospace-grade gyroscope technology with reinforcement-learning models to continuously stabilize the motorcycle. This architecture allows the OMO X to maintain balance on its own, even when stationary, eliminating one of the biggest challenges riders face on two-wheeled vehicles.

Designer: Omoway

The balancing capability is achieved through a Control Moment Gyroscope (CMG) module. Using the principle of angular momentum, the spinning gyroscope actively stabilizes the vehicle, keeping it upright without rider input. Beyond simply preventing tip-overs, the system also supports a range of riding assistance features. These include slip prevention on wet surfaces, assistance while cornering, and obstacle-avoidance capabilities designed to enhance safety during everyday riding.
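For a sense of why a gimballed flywheel can do this, here is a toy one-dimensional simulation of the principle: tilting a spinning wheel's gimbal produces a righting torque proportional to its stored angular momentum, which a controller can aim against the lean. All masses, gains, and constants are invented; this is a sketch of the physics, not Omoway's controller:

```python
# Toy 1-D model of CMG self-balancing: a bike leaning at angle theta
# is righted by gimballing a spinning flywheel. All constants invented.
H = 5.0         # flywheel angular momentum (kg·m²/s)
I = 20.0        # bike roll inertia (kg·m²)
mgl = 150.0     # gravity torque per radian of lean (small-angle approx.)
KP, KD = 300.0, 60.0   # controller gains on lean angle and lean rate

theta, theta_dot = 0.1, 0.0   # start with a 0.1 rad lean, at rest
dt = 0.001
for _ in range(5000):          # simulate 5 seconds
    # Commanded gimbal (precession) rate; CMG torque = H * gimbal_rate
    gimbal_rate = (KP * theta + KD * theta_dot) / H
    torque = mgl * theta - H * gimbal_rate   # gravity vs. righting torque
    theta_dot += (torque / I) * dt
    theta += theta_dot * dt

print(f"lean after 5 s: {theta:.5f} rad")
```

With the gains above, the CMG torque overpowers the gravity term and the lean decays toward zero, which is the stationary-balance behavior the article describes.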

Omoway is also positioning the OMO X as a highly intelligent mobility device. The scooter incorporates a network of sensors and cameras that continuously monitor the surrounding environment and feed data into an AI-based riding system. This enables features such as adaptive speed adjustments, hazard detection, and automated safety responses if the system identifies a potential risk. Some demonstrations have even shown the scooter maneuvering on its own, driving onto a stage without a rider, and responding to remote commands through a smartphone app.

Another notable capability is automated parking. Instead of requiring riders to maneuver the bike into tight urban spaces manually, the OMO X can guide itself into a parking spot once a location is selected. The system relies on its self-balancing capability and onboard sensors to navigate safely, a feature that reflects the growing overlap between robotics and personal transportation.

The electric scooter’s futuristic design further reinforces its technological identity. Its sharp, angular styling and distinctive lighting signature give it a modern aesthetic that stands apart from traditional scooters. In a way, it carries the Tesla Cybertruck aesthetic, with a continuous front light bar replacing a conventional headlamp and creating a visually striking presence on the road.

Production plans for the OMO X are already underway. The company announced that the model has entered mass production following its global launch event in Singapore, with pre-orders expected to open in April 2026. Indonesia has been selected as the first launch market, where the electric scooter will debut commercially in Jakarta shortly afterward. Omoway is reportedly working with multiple regional distributors and plans to establish a dealer network of more than 100 locations in the country.


Rabbit R1’s OpenClaw Update Could Be Its Most Important Moment Yet

There is a version of the Rabbit R1 story that ends in 2024. The device launches to enormous hype off the back of a viral CES presentation, ships to early adopters who find it half-finished and frustrating, earns a wave of scathing reviews, and quietly disappears the way most failed AI gadgets do. Humane’s AI Pin followed that trajectory almost exactly, discontinued in early 2025 after HP acquired the company. The R1 did not follow it, though the reasons why have less to do with any brilliant pivot than with stubbornness, incremental software updates, and a fair amount of luck.

By January 2026, two years of over-the-air updates had produced a device functional enough to sustain a renewed community of users and developers. Then OpenClaw arrived on the R1, and the conversation changed in a way that felt less like a product announcement and more like something clicking into place. OpenClaw had always carried a hardware problem at its core: no natural home for voice. The R1, as it turned out, had most of the solution already built in.

Designer: Rabbit

OpenClaw (formerly Clawdbot, then Moltbot, changing names three times in a single week) is an open-source autonomous AI agent that exploded from 9,000 to over 60,000 GitHub stars in 72 hours in late 2025. Austrian developer Peter Steinberger built it as a self-hosted agent runtime that connects AI models to your local machine, messaging apps, calendar, email, and file system. You control it by sending messages through WhatsApp, Telegram, Discord, or Slack, like you’re DMing a particularly capable assistant. OpenClaw can browse the web, manage your inbox, schedule meetings, summarize documents, and execute shell commands autonomously, with persistent memory that lets it remember context across weeks. The problem OpenClaw always carried was the lack of native voice interaction on dedicated hardware, and the R1 had exactly that hardware sitting in a drawer gathering skepticism.
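The interaction model OpenClaw popularized — a chat message in, an autonomous action out, with context persisting between sessions — can be sketched in miniature. Everything here (the function names, the toy skill registry, the memory file) is invented for illustration and is not OpenClaw's actual API:

```python
import json
import shlex
import subprocess
from pathlib import Path

# Miniature sketch of a message-driven agent loop: a chat message maps
# to a "skill", the result becomes the reply, and context persists to
# disk between runs. Nothing here is OpenClaw's real API.

MEMORY = Path("agent_memory.json")

def remember(key, value):
    """Persist context so the agent can recall it across sessions."""
    state = json.loads(MEMORY.read_text()) if MEMORY.exists() else {}
    state[key] = value
    MEMORY.write_text(json.dumps(state))

def skill_echo(args):
    return " ".join(args)

def skill_shell(args):
    # Autonomous command execution: the powerful (and risky) part.
    return subprocess.run(args, capture_output=True, text=True).stdout.strip()

SKILLS = {"echo": skill_echo, "shell": skill_shell}

def handle_message(text):
    """Dispatch '<skill> <args...>' the way a DM'd assistant might."""
    verb, *args = shlex.split(text)
    reply = SKILLS[verb](args)
    remember("last_reply", reply)
    return reply

print(handle_message("echo hello from the agent"))
```

The `shell` skill is two lines here, which is also a compact illustration of why the malicious-add-on problem described below is so serious: any community skill runs with the agent's full local privileges.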

Rabbit integrated OpenClaw in January 2026 as an alpha feature, requiring users to set up their own OpenClaw gateway and connect it to the R1. Push the talk button, speak a command, and OpenClaw executes it through your existing setup. The R1 becomes a voice interface for an agent that can genuinely act on your behalf, making the device something closer to what Rabbit founder Jesse Lyu promised two years ago. The possibilities depend entirely on how you configure OpenClaw, which can expand through over 100 community-built skills. Security risks are real and well-documented (over 400 malicious add-ons were found on the skill hub in early 2026), but for users willing to manage that complexity, the R1 finally has a use case that feels native to the hardware rather than bolted on.


Rokid’s Smart Glasses Let You Pick Your AI: Gemini or ChatGPT

Most wearable tech that puts an AI assistant in your ear assumes you want only theirs. The earpiece, the speaker, the entire software stack, all funneled through one model chosen for you before you even open the box. Rokid’s latest update to the AI Glasses Style takes a different position entirely, turning the glasses into what is effectively an open platform where you pick the brain behind the voice.

The update makes the Style the first smart glasses to natively support Google’s Gemini, sitting alongside OpenAI’s ChatGPT, DeepSeek, and Alibaba’s Qwen in a unified interface. Users toggle between them freely, reaching for Gemini for a quick Google Maps query one moment and switching to ChatGPT for something else entirely the next.

Designer: Rokid

The glasses themselves debuted at CES 2026 in January, and the hardware makes a reasonable case for the category. At 38.5 grams, with a TR90 frame and titanium alloy hinges, they sit closer to a regular pair of prescription glasses than anything resembling a prototype. The frame takes prescription lenses directly, with a fitting service starting at $79, including photochromic options in over 200 colors that darken within 25 seconds.

Powering the AI and imaging workload is a dual-chip setup: an NXP RT600 handles always-on, low-power tasks, while a Qualcomm AR1 manages heavier processing. The same Qualcomm chip is in Meta’s Ray-Ban glasses, though the battery life here runs to 12 hours, noticeably longer than Meta’s. A 12MP Sony-sensor camera sits at the bridge, capturing 4K stills and 3K 30fps video with up to 10 minutes of continuous recording. A privacy indicator light signals to people nearby when the camera is active.

Audio comes through directional AAC speakers built into the temples, focused toward the ears with minimal bleed. The AI interaction itself works through a two-finger tap to summon any of the four models, head gestures for call management, and voice prompts in 12 supported languages. Real-time translation, navigation, photo recognition, and AI-generated meeting summaries are all part of the feature set, fed through whichever model the user has selected.

For anyone already oriented around a specific AI assistant, the practical appeal is straightforward. Someone in Google’s ecosystem gets Gemini in their glasses without compromise; someone who prefers ChatGPT for writing picks that instead. At $299 to start, with a lens fitting service folding in prescription and photochromic options, the Style has cleared 15,000 units sold ahead of its formal global rollout, which is a reasonable early signal for a category still working out what it wants to be.


This Wall Speaker Lets You Decorate Your Room with Music and Art

Outfitting your home used to mean choosing: a speaker or a digital frame. Good audio gear fills a room with sound but rarely does anything worth looking at. Digital frames look considered and calm on a wall but go completely silent the moment you need them to do something else. It seems obvious, in hindsight, that someone would eventually stop treating these as separate problems.

Monar is that someone. The Monar Canvas Speaker brings both together in a single framed wall piece that plays Hi-Fi audio while displaying art on a built-in screen, and the two functions are genuinely connected. When music plays, the display responds in real time, generating visuals that shift and react to the track. It fills your home with sound. It decorates your wall with art. It does both at once.

Designer: Monar

Click Here to Buy Now: $799 $1299 ($500 off). Hurry, only 122/150 left! Raised over $55,000.

The design draws its visual logic from classical oil painting. Traditional canvas proportions, the kind that have framed masterworks for centuries, informed the 4:5 portrait ratio of the panel, a deliberate departure from the widescreen format most screens default to. That historical reference is not decorative. It is the reason the Monar reads like framed art on a wall rather than a screen that someone forgot to put away.

The outer frame is interchangeable across eight options: premium ABS plastics, natural linen, and brushed aluminium, with one ABS option styled after Mondrian’s primary color geometry. Swapping the frame is a practical feature rather than a gimmick, since the object is permanent décor. If your interior changes, the frame can too.

The audio side makes bold claims for an enclosure that is only 4.9cm deep. Six drivers handle the load: two titanium tweeters, two midranges with golden-ratio cone geometry, and two full-size subwoofers, all driven by a 2.2-channel amplifier. The claimed 20Hz to 20kHz frequency response is ambitious for a chassis this thin, and worth putting to the ear.

Where the product earns genuine interest is in the everyday texture of using it. Put on an album, and one of 12 lyric display themes animates the words in sync with the music. Switch to the World Gallery and the screen cycles through more than 50,000 digitized artworks, from Van Gogh to Hokusai. Activate Meditation Mode and the visuals shift to ambient scenes timed to calming audio. When no music is playing, it displays personal photos or videos, so it never really goes blank or dormant.

The generative AI tools go further still. Monar’s AI Studio lets you create original artwork through text prompts, uploaded images, or even a musical concept. The result displays on screen, making it possible to have genuinely new wall art on demand without touching a single frame nail. These features run on a points system, with a free tier offering 100 points per month; the World Gallery and Meditation Mode cost nothing extra regardless of tier.

Paid AI tiers range from $9.90 to $39.90 per month for heavier creative use, and the free allocation covers casual experimentation comfortably. What makes the pricing structure interesting is what it says about the product underneath it: even without touching a single AI feature, the Monar already delivers a fully functional Hi-Fi speaker system and a complete digital frame in one object. That combination alone is something no single product category had managed to pull off before it came along.

A speaker that becomes a painting, a gallery that plays music, a frame that reacts to sound: the Monar collapses three objects into one. The real question worth sitting with is not whether it works, but how much your walls have been missing something like it.



5 Wildest Design Trends at MWC 2026: Nodding Phones and Tiny Robots

Every year, MWC arrives with the promise of seeing the future of mobile technology, or at least a very expensive approximation of it. The 2026 edition in Barcelona was the event’s 20th anniversary in the city, and while nearly 105,000 people showed up, there was a noticeable shift in what filled the booths. Fewer headline-grabbing product launches, more working prototypes and proofs of concept across every category imaginable.

That’s not necessarily a bad thing. When manufacturers stop competing on a single spec and start showing what they’re thinking about next, the underlying patterns get easier to read. Five trends cut across product categories at MWC 2026, from smartphones to laptops to robotic companions. None of them belongs to one company, and none of them is going away anytime soon.

Robots got a size reduction

For the past couple of years, humanoid robots have been stealing the show at tech events. They walk, they wave, they occasionally fall over, and everyone takes a video. The problem is that a bipedal robot that can fetch a package from across the room is not something most people actually need sitting in their office. MWC 2026 suggested the industry might be starting to figure that out.

The robots worth talking about this year were small, desk-bound, and refreshingly honest about what they could do. Lenovo’s AI Workmate Concept is a desk-mounted unit that handles document scanning, note organization, and presentation help through voice, gesture, and spatial interaction, processing everything on-device. It can even project content onto your desk or a nearby wall, which sounds gimmicky until you think about how useful a hands-free reference surface actually is during a meeting.

Samsung Display’s OLED AI Mini PetBot takes the idea in a more playful direction. It is a pocket-sized robot with a 1.34-inch circular OLED screen for a face, reacting to voice and touch with animated expressions. It comes from Samsung’s display division rather than its product team, so this is less a product announcement and more a demonstration of where the panel technology can go.

AI is learning to show its feelings

Most people’s experience of AI right now involves typing into a box and getting text back, or asking a question into empty air and hearing a voice that sounds like it was recorded in a server room. It works, but it does not feel particularly warm. A cluster of products at MWC 2026 was specifically trying to fix that, not by making AI smarter, but by making it more expressive.

Lenovo’s AI Work Companion Concept looks like a desk clock, which is either a clever disguise or a statement about how unobtrusive AI should be. Its AI planning system, called Thought Bubble, syncs tasks and schedules from across your devices to build a daily plan, monitors screen time, nudges you to take breaks, and delivers an end-of-week summary of what you actually got done. The behavioral framing is deliberately light. The goal is to build a rhythm rather than manage a list, and the device is designed to feel like a presence in your workspace rather than another notification surface.

TCL’s Tbot takes a similar approach for a younger audience. It pairs with the company’s MOVETIME kids smartwatch, so when a child gets home and drops the watch onto Tbot’s magnetic dock, the robot comes to life as a study companion and bedtime storyteller. The physical handoff is a considered design decision, a tangible trigger rather than an app to open.

Honor’s Robot Phone extends the idea into the phone itself. A motorized titanium alloy gimbal arm holds a 200-megapixel camera that nods when it agrees, shakes when it doesn’t, and tracks you across the room. Honor plans to sell it in the second half of 2026, which means it will be the first of this particular batch of emotionally expressive AI devices to actually land in someone’s hands.

Modular design, this time as a practical argument

Modular phones have been promised before: Project Ara, LG G5, and Fairphone at various stages of their evolution. The pitch is always appealing: buy a base device, then upgrade the camera, swap the battery, add what you need. The reality has usually involved awkward connectors, software that doesn’t quite work, and products that disappear within two years. MWC 2026 had a notable cluster of modular devices, and what made them interesting is that each was solving a different version of the problem.

Lenovo’s ThinkBook Modular AI PC Concept approaches it from the laptop side. The 14-inch base connects to a secondary screen via pogo pins, and that screen can sit alongside the base as a travel monitor, mount on the lid for face-to-face sharing, or replace the keyboard to create a dual-display setup. Interchangeable I/O ports, covering USB Type-A, USB Type-C, and HDMI, mean the connection layout changes with the workflow. It’s a concept aimed at professionals who spend their day switching between contexts, and the argument is about longevity and flexibility rather than upgradeability for its own sake.

TECNO’s Modular Magnetic Interconnection Technology works from the phone outward. The base device is 4.9mm thick, which is thinner than anything Apple or Samsung currently sells, and that extreme thinness turns out to be the point. Modules, including telephoto lenses, battery packs, microphones, wallets, and speakers, attach magnetically to the rear without making the phone ungainly.

Ulefone’s RugOne Xsnap 7 Pro is less elegant but arguably more practical: a rugged phone whose rear camera detaches and operates independently as a wearable action camera. Three very different products, three different price tiers, and the same underlying idea. A device you can reconfigure is a device you keep longer.

The keyboard is making a serious case for itself

BlackBerry’s demise was supposed to be the end of physical keyboards on phones. Touch screens were better, the argument went, because they could be anything. And that argument was right, mostly. But touch screens were also cold, imprecise for fast typing, and their on-screen keyboards ate half the display every time you needed to type more than a sentence. A small but persistent group of users never fully made peace with that trade-off, and in 2026, they suddenly had options.

The Unihertz Titan 2 Elite was at MWC with a 4.3-inch AMOLED display at 120Hz above a physical QWERTY keyboard with touch-sensitive keys that also function as a trackpad. The aluminum body and slimmed-down proportions mark a clear departure from the chunky, ruggedized aesthetic of earlier Titan phones. This one is trying to look like something you would actually carry every day.

The Clicks Communicator comes from the opposite direction: Clicks already makes keyboard cases for iPhones, and the Communicator is a logical next step, a standalone Android phone built around the companion philosophy for people who want physical keys without abandoning modern smartphone basics.

The iFROG RS1 is the strangest and most interesting of the three. It is a square phone with a 3.4-inch display that sits on top of a rotating lower section. Twist it one way, and you get a full QWERTY keyboard with tactile keycaps. Twist it the other way, and you get a gamepad with a D-pad and face buttons, which unavoidably recalls the Game Boy and the Motorola Flipout in equal measure. What all three of these share is a belief that tactile input has genuine ergonomic value that glass surfaces haven’t replaced, just obscured. Whether that belief translates into mainstream sales is a different question.

Design became the headline spec

Phones have always been designed objects. But for most of the last decade, the design conversation at launch events came after the camera specs, after the processor benchmark, after the battery capacity. At MWC 2026, a handful of manufacturers flipped that order. The design was the lead, and everything else followed.

Honor’s Magic V6 is the most straightforward example. At 8.75mm closed, it is one of the thinnest foldables on the market, and Honor announced that measurement with the same emphasis as a performance figure might receive. The engineering behind it is genuinely impressive: IP68 and IP69 water resistance on a foldable, combined with a 6,660mAh silicon-carbon battery, means thinness was not achieved by sacrificing durability or endurance. It’s a difficult combination, and the design is doing real work to make it possible rather than just looking good on a spec sheet.

The CMF collaborations told a different story about design as positioning. Infinix’s NOTE 60 Ultra, developed with Pininfarina, applied the Italian studio’s automotive logic to the phone’s rear panel. The result is a single continuous sheet of Gorilla Glass Victus covering the triple camera array, a thin floating taillight strip, and a hidden active matrix notification display, all completely flush. No bump. The colorways, Torino Black, Monza Red, Amalfi Blue, and Roma Silver, are not accidental; each one names an Italian locale, a nod to Pininfarina’s heritage.

TECNO’s partnership with Tonino Lamborghini produced the TAURUS gaming PC, a water-cooled mini system with a 10,000mm² copper cold plate, and the POVA Metal phone, whose 241-pixel rear LED dot matrix turns the notification surface into a deliberate design feature. At the concept end, TECNO’s POVA Neon filled its rear panel with ionized inert gas to produce plasma patterns that chase your fingertip across the glass, which is either the most impractical phone feature ever conceived or a fascinating question about what a phone’s surface is actually for.

The Lenovo Yoga Book Pro 3D lets 3D creators sculpt directly on a dual-screen laptop without additional hardware. The Motorola Maxwell AI pendant turned conference transcription into something you wear around your neck. None of these are shipping products. At MWC 2026, that seemed less like a limitation and more like the whole point: showing what you think design can do, before you have to prove it.


Meta better be worried. Qwen’s affordable AI Smart Glasses have cameras, speakers, and even a built-in display

It was one of the more audacious moves at MWC 2026. Right across the aisle from Meta’s smart glasses booth at Fira Gran Via, Alibaba’s Qwen pavilion was anchored by a pair of glasses so oversized they were practically architecture, a giant sculptural prop that functioned as a very literal invitation to come over and look. People did. And once they got close enough to see the actual products, the conversation shifted fairly quickly from “interesting marketing stunt” to “wait, what exactly is this?”

What they found were two frame styles that could sit in any optician’s window without raising an eyebrow. A rectangular wayfarer in matte black, clean and understated. A rounded frame in warm tortoiseshell with a two-tone contrast that leans vintage without being self-conscious about it. Both carry the “Qwen” wordmark on the temple, small and unobtrusive. Both have cameras tucked discreetly at the hinge corners rather than mounted on the bridge. And inside the lenses, visible only when you look closely, is the faint shimmer of a waveguide display.

Designer: Qwen

That last detail is where the competitive context gets genuinely interesting. The smart glasses market in 2026 has essentially sorted itself into two camps. On one side, you have camera-and-speakers devices like the mainstream Ray-Ban Metas, starting around $299, which have been wildly successful because they figured out that looking normal matters more than most features. On the other, you have display-first devices like the Even Realities G1 and G2, which sit at $599 and offer binocular waveguide displays, but sacrifice the camera entirely and strip out the speakers to keep weight down to a remarkable 36 grams. Meta entered the premium display tier late last year with the $799 Ray-Ban Display, a full-color waveguide in one eye, a 12MP camera, and open-ear audio. It’s a compelling package, but $799 is a significant ask for a first-generation product in a category most consumers are still on the fence about.

The Qwen glasses, if they land close to the pricing of Alibaba’s previous Quark AI Glasses at around $277, would be threading an entirely different needle. Camera, display, on-device AI, and a frame design that competes aesthetically with anything in this space, all at a price that undercuts the Even G2 by more than half and the Meta Display by almost two-thirds. On paper, that’s a serious value proposition. The technology powering it is a lightened version of Qwen 3.5, running directly on the device rather than offloading everything to the cloud, which matters both for latency and for use cases where connectivity is limited.

The honest caveat is the brand itself, and it’s worth sitting with. Qwen is well regarded within AI research circles, particularly since Alibaba open-sourced much of the model family and developers worldwide have built on it. But Qwen as a consumer product, as something you’d buy at a store or recommend to a friend in Europe or North America, carries essentially zero name recognition. The app ecosystem that Alibaba plans to migrate onto the glasses, things like food delivery and ride-hailing integrations, is deeply rooted in China’s domestic services infrastructure and doesn’t translate directly to international markets without significant rework. Meta spent years building the Ray-Ban brand before it put a chip inside the frame. Alibaba is trying to build hardware credibility and software trust simultaneously, in markets where it starts from a cold position.

None of that makes the product less interesting. The Qwen glasses are arguably the first device in this category to arrive with a camera, a waveguide display, on-device AI, and a design that doesn’t require the wearer to make aesthetic compromises, all at a price that could realistically attract mainstream buyers rather than just enthusiasts. With North America and Western Europe commanding the vast majority of global smart glasses demand, Alibaba is clearly going after the big markets, and the product is credible enough to deserve a proper hearing there. The harder work, convincing people in those markets to trust a brand they have never heard of with a face-worn AI device that has cameras and a display, is the challenge that no amount of giant sculpture at a trade show can solve on its own.

What MWC established is that the hardware is real, the ambition is real, and the timing is deliberate. Alibaba confirmed that AI earbuds and a smart ring are coming later this year under the same Qwen brand, building out a wearable ecosystem that mirrors the strategy Meta has been executing for several years. The glasses are the opening argument. Whether the rest of the world ends up listening is the part that plays out over the next twelve months.

The Kids’ AI Tool That Ends With Crayons, Not Screens

Most conversations about AI and children go one of two ways: either we’re told to be terrified, or we’re told to embrace it fully and immediately. Morrama’s Create concept lands somewhere far more interesting than either of those extremes, and it’s the most thoughtful thing I’ve seen in the AI space in a while.

Create is a physical device, soft and rounded and painted in a cheerful lime green, that sits on a table and listens to a child speak. The kid says something like “a lion playing football,” Create generates a line drawing based on that prompt, and then prints it out on paper. Real paper. The kind you color in with markers and hang on the fridge.

Designer: Morrama

The design studio behind it, London-based Morrama, built Create as part of a broader series of concept AI tools aimed at children aged six and up. They’re calling them “mindful AI tools,” which could easily sound like marketing fluff, but the more I sit with this one, the more I think they’ve actually earned that description.

Here’s what I keep coming back to: the output is analog. The AI does its part, generates the image, hands it over, and then steps back completely. What happens next is entirely up to the child: their color choices, their interpretation, the way they decide to finish what the machine started. That handoff feels significant. It’s not AI completing the task. It’s AI beginning a conversation.

We’re at a point where most of the discussion around kids and AI centers on schools, on cheating, on homework, on what should or shouldn’t be allowed in classrooms. It’s a valid conversation, but it’s also a narrow one. Create isn’t interested in the classroom at all. It’s thinking about the bedroom floor, the kitchen table, the slow weekend afternoon when a child has nothing to do and everything to imagine.

Morrama’s research acknowledges that most young children are already aware of AI. That’s not alarming so much as it’s simply true. These kids are growing up inside the technology, not encountering it for the first time as adults. So the question of how they’re introduced to it, what framework they’re given for understanding what it is and what it’s for, actually matters quite a lot.

What Create does is frame AI as a creative tool from the very beginning. Not a search engine. Not an entertainment machine. A collaborator that responds to what you bring to it. Teaching a six-year-old that AI works best when you give it something of yourself, a thought, an idea, a weird little prompt about a lion with a football, is quietly radical. That’s a healthier mental model for AI than most adults currently have.

The device itself deserves credit, too. Morrama has been deliberate about making Create feel nothing like a screen. The tubular green form, the single lavender button, the paper rolling out like something from an old-school receipt printer, it all communicates “toy” more than “gadget.” That matters because how a thing looks shapes how we use it, and children especially take cues from aesthetics. Create looks like it belongs on a playroom shelf, not a tech desk.

I’ll be straightforward about the fact that Create is still a concept. You can’t buy it, and there’s no confirmed production timeline. But sometimes a concept does its most important work just by existing, by showing that a different approach is possible. The default assumption is that AI for kids means apps, screens, subscriptions, and data. Create pushes back on all of that with something wonderfully low-stakes: a piece of paper and a box of colored pencils.

Whether it ever gets made or not, the thinking behind it is worth paying attention to. Because the children growing up right now will be the ones designing, regulating, and living with AI for the rest of their lives. Starting them off with creativity rather than consumption isn’t just a nice idea. It’s probably the smartest one going.

Samsung’s Mini PetBot Gives AI a Face So It Feels Less Cold

Talking to AI still feels a bit strange for a lot of people. You type into a chat box or ask a question into empty air, and something invisible answers back. It works, but it does not feel particularly warm. That low-grade awkwardness has quietly pushed a whole product category into existence: small, expressive desktop robots designed to put a visible face on AI and make the whole interaction feel less like filling out a form.

Samsung Display’s concept shown at MWC 2026 in Barcelona fits neatly into that wave. Called the OLED AI Mini PetBot, it is a compact robot built around a 1.34-inch circular OLED screen that acts as its face. That screen displays animated expressions that shift in response to voice and touch input, so the robot is not just sitting blankly while it processes a command. It reacts, visibly and immediately, which is exactly the point.

Designer: Samsung

The instinct behind it is not new. Products like EMO from LivingAI, Eilik from Energize Lab, and Loona from KEYi Tech have each explored the formula with varying personalities and price points. KEYi Tech even debuted a concept at CES 2026 that docks an iPhone on a motorized MagSafe stand to create a desk robot face. DIY builders have been constructing expressive robot heads from microcontrollers and small screens for years. The appetite for something to look at while talking to a machine is apparently very real.

What Samsung Display contributes to that conversation is the OLED panel itself. A 1.34-inch circular OLED renders fine gradients and deep blacks without a backlight, which means animated eyes or shifting emotional states read clearly even at that small scale. The circular format also removes any rectangular frame of reference, so the face reads more organic than a screen mounted on a housing. That distinction drives the entire emotional premise of these robots.

The Mini PetBot is a concept from a display technology booth, not a product headed to retail. Samsung Display’s interest here is in showing where its panels can go, and the robot shares booth space with the AI Toyhouse, a separate concept pairing a 13.4-inch circular OLED with an 18.1-inch flexible panel. Both exist to make the screen the story. Whether a hardware partner picks up the form factor is a separate question.

The real question these robots keep circling is whether giving AI a physical face actually changes how people relate to it. A robot that looks up when spoken to, or scrunches its face when confused, closes a certain psychological distance that better language models alone cannot bridge. Samsung Display’s Mini PetBot might only be a concept today, but the reasoning behind it seems to be where the whole industry is quietly heading.
