Lenovo’s AI Desk Robot Has Eyes, Moves, and Watches You Work

There’s a specific kind of loneliness that comes with working alone all day. Not the dramatic kind, just the low-grade awareness that every question you have goes into a chat window, every instruction gets typed into a box, and the thing supposedly helping you has no idea where you’re sitting or what’s on your desk.

Lenovo’s AI Workmate Concept, shown at MWC 2026, takes that gap seriously enough to build a physical object around it. The device is a desk companion in the most literal sense, a spherical head on an articulated arm, rising from a circular base, with animated eyes on its front display that shift and orient as it responds.

Designer: Lenovo

The arm is the most telling design decision, though it isn’t just decorative. Because it moves, the Workmate can orient itself toward whatever is in front of it, a document laid flat, a person leaning back, a wall nearby. That range of motion is what separates it from a smart speaker with a face. It has spatial awareness built into its posture, not just its software.

On the practical side, it handles the kind of work that accumulates quietly throughout a day. Place a document in front of it, and it can scan and summarize the contents. Talk through a rough set of notes, and it can help organize them into something usable. If you are working on a presentation, the Workmate can help structure the content, drawing on what it already knows about the task at hand through on-device AI processing rather than a cloud connection.

The projection feature is the most speculative part of the concept. Rather than keeping information on a screen, the Workmate can cast content onto a desk surface or wall, which, on paper, turns any flat surface nearby into a secondary display. Whether that’s genuinely more useful than glancing at a monitor, or just a more theatrical way to display the same information, is a fair question that a proof of concept can’t fully answer.

What’s harder to dismiss is the physical language the design uses. The animated eyes aren’t a gimmick in the way that most product “personalities” are. They borrow from the same visual shorthand that makes robots in film immediately readable as attentive or distracted, curious or idle. A status light ring on the base shifts color depending on what the device is doing, adding a peripheral layer of feedback that doesn’t require looking directly at the display. Together, those two elements mean the Workmate communicates state without demanding attention, which is actually a more considered interaction model than most desktop AI tools currently offer.

The deeper question isn’t whether the Workmate works. It’s whether having a robot with eyes watching from the corner of the desk makes the day feel more manageable, or just more observed. That’s not a problem Lenovo can solve with a better arm joint. It’s the kind of thing that only becomes clear once the novelty of the eyes wears off.


Forget Step Counters: Dreame’s New Smart Rings Focus On ECG Reports, Sleep, And Real-Time Emotion Data

On any given game day, millions of us become amateur analysts, dissecting every play and scrutinizing every statistic that flashes across the screen. We track player performance with an almost scientific rigor, celebrating the numbers that signal a win and debating the metrics that lead to a loss. This deep dive into data has fundamentally changed how we watch sports, turning passive viewing into an interactive, analytical experience. Yet, for all the attention we pay to the athletes’ performance, our own physiological journey as spectators has remained completely invisible.

Dreame’s new AI Smart Ring proposes a fascinating shift in perspective, turning the sensor technology usually reserved for athletes inward on the audience. The ring’s most ambitious feature, an AI-powered emotion index, aims to quantify the rollercoaster of being a fan, tracking how your body reacts to every thrilling victory and agonizing fumble. It represents a new frontier for wearables, one less concerned with counting your steps and more interested in mapping your heart’s response to the passions that drive you. It is pro-level analytics for the rest of us.

Designer: Dreame

Instead of launching just one device, Dreame is splitting its ambition into a two-ring strategy, which is a seriously interesting market play. The company is effectively acknowledging that “health tracking” means different things to different people. For some, it is about hard, clinical data and safety nets. For others, it is about lifestyle, self-awareness, and emotional insight. So, rather than making one ring that tries to do everything, they have created two distinct products: the Dreame Health Ring, launching in early March, and the Dreame AI Smart Haptic Ring, which is slated for the second half of the year.

The Dreame Health Ring is the more advanced and serious of the two. This is the one aimed squarely at users who want professional-grade monitoring and peace of mind. Its headline feature is the ability to generate ECG reports on demand, moving it closer to a medical-grade device than a typical fitness tracker. It is built around a core of accurate health monitoring and safety alerts, using AI-driven analysis to flag potential issues. Think of this as the quiet, reassuring guardian, focused on delivering vital health data you can potentially share with a doctor, rather than tracking your mood during a movie.

Landing later this year, the Dreame AI Smart Haptic Ring is the lifestyle-focused sibling. You are looking at a 2.5 mm thin body that is about 7.5 mm wide and weighs a featherlight 5.2 grams. The outside is a microcrystalline zirconia nano-ceramic with a Mohs hardness of 8, while the inner band is a slick antibacterial alloy. This ring is all about AI-driven health and sleep tracking, but with a focus on interpretation and daily living. It is designed to be the wearable you forget you are even wearing.

Packed inside that tiny frame is the trifecta of modern health sensors: PPG for heart rate and SpO₂, a temperature sensor, and an accelerometer. This all feeds into the AI sleep algorithms that Dreame claims can nail your REM, deep, and light sleep stages with less than a 5 percent error rate. The AI ring tracks all your key vitals 24/7 and holds about a week of data offline, which is exactly how these trackers should work. But where the Health Ring focuses on ECGs, the AI ring uses this data to power its more experimental features.

This is where we get to the AI ring’s headline feature: emotion sensing. Dreame claims it can generate a real-time emotion index with 92 percent accuracy. Now, is it going to replace your therapist? Absolutely not. But that is not the point. The real value is in the biofeedback. It is a tool for spotting patterns, for seeing a data-driven trace of how your body reacted to a stressful day while your brain was telling you everything was fine. It is a fascinating, and potentially humbling, new layer of self-awareness that separates it from the more advanced Health Ring.

The design of the AI ring is meant to be invisible. It is a screenless, silent loop of ceramic. Instead of a screen, you get a tiny vibration motor inside for its AI Haptic Alerts, a subtle tap on your finger for a call or message, not a jarring buzz that makes everyone in the room look at you. Those haptics also support tap gestures for controlling music or snapping a photo. The battery life reflects this always-on philosophy, with about a week on the ring itself and a charging case that gives you a claimed 100-plus days of use before you need a wall outlet.

So why are we seeing this two-ring strategy pop up around Championship Sunday? It is a smart move. It frames the brand not as just another gadget maker, but as a company thinking deeply about the future of personal health. We are obsessed with the analytics of pro athletes, tracking every metric to understand their performance. Dreame is betting that we are finally ready to apply that same level of nerdy obsession to ourselves, and by offering two distinct paths, they are letting us choose just how deep we want that data to go.


AI Device Turns Your Mental Health Data Into a Living Garden

There’s something deeply broken about the way we interact with technology. We scroll mindlessly, chase notifications, and bounce between tabs like caffeinated pinballs. Our devices constantly demand our attention, rewarding speed over substance, reaction over reflection. But what if a piece of technology asked you to slow down instead?

That’s the radical premise behind Cognitive Bloom, a speculative AI device conceived by Map Project Office in collaboration with Chanwoo Lee from Lovelace Research. Lee, who’s also a visiting lecturer at Imperial College London and the Royal College of Art, is reimagining what personal AI could become if we designed it with the same care we give to cultivating a garden.

Designers: Chanwoo Lee, Map Project Office, Lovelace Research

The concept couldn’t arrive at a more critical moment. With mounting evidence around cognitive decline and digital burnout, Cognitive Bloom offers an alternative vision for our relationship with artificial intelligence. Instead of optimizing for efficiency or speed, it encourages something we’ve almost forgotten how to do: genuine self-reflection.

At the heart of Cognitive Bloom is a beautiful metaphor that makes complex data feel alive. The device uses an ambient display that transforms your mental wellness data into a virtual ecosystem. Areas where you’re struggling show up as yellowing leaves. New buds emerge where you’re beginning to grow. When you’re truly thriving in an aspect of your wellbeing, those buds finally bloom. It’s an intuitive visualization that breaks down the typically overwhelming data around mental health. Rather than confronting you with charts, percentages, or clinical assessments, Cognitive Bloom speaks in a language we instinctively understand. Plants need water, sunlight, and attention. So do we.
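
As a rough way to picture the logic, here is a minimal sketch of that data-to-garden translation, assuming wellness scores normalized between 0 and 1. The thresholds, the wellbeing categories, and the plant-state names are illustrative guesses, not details from the actual concept.

```python
# Hypothetical sketch: mapping normalized wellness scores (0.0 to 1.0) onto the
# plant states Cognitive Bloom describes. Thresholds and names are assumptions,
# not taken from the concept itself.

PLANT_STATES = [
    (0.25, "yellowing leaf"),   # struggling
    (0.50, "dormant stem"),     # holding steady
    (0.75, "new bud"),          # beginning to grow
    (1.01, "open bloom"),       # thriving
]

def to_plant_state(score: float) -> str:
    """Translate a wellness score into the plant state shown on the ambient display."""
    for threshold, state in PLANT_STATES:
        if score < threshold:
            return state
    return "open bloom"

garden = {"sleep": 0.2, "focus": 0.6, "social connection": 0.8}
for area, score in garden.items():
    print(f"{area}: {to_plant_state(score)}")
```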

The device functions as a domestic companion that nurtures what the designers call “a new ritual of self-reflection.” It’s designed to help users reconnect with what genuinely matters, fostering the creation of new mental pathways through thoughtful engagement rather than passive consumption. This approach stands in stark contrast to how most AI products work today. Current AI interfaces typically emphasize quick answers, instant gratification, and frictionless productivity. Cognitive Bloom deliberately introduces friction, but the kind that matters. It’s the friction of pausing. Of considering. Of being present with your thoughts rather than racing past them.

The gardening metaphor extends throughout the entire experience. Just as tending a garden requires patience, consistency, and presence, Cognitive Bloom asks users to take a respite from digitally overstimulated lifestyles. It creates space for genuine contemplation, curiosity, and self-discovery, qualities that feel increasingly rare in our current technological landscape. What makes this project particularly compelling is how it uses human-centered design to foster a deeper connection not just to ourselves, but to our digital environment. Too often, technology feels like something that happens to us, an external force constantly pulling us in a hundred directions. Cognitive Bloom suggests technology could instead become a tool for coming home to ourselves.

The collaboration between Map Project Office and Lovelace Research brings together expertise in design strategy and human-centered AI research, creating a vision that feels both technically informed and emotionally resonant. As a speculative project, Cognitive Bloom doesn’t need to solve every practical challenge of implementation. Instead, it asks the more important question: What if we actually designed technology the way we cultivate gardens, with care, patience, and presence?

That question alone is worth sitting with. In a culture obsessed with growth hacking, viral moments, and exponential scaling, the steady rhythm of gardening offers a different model entirely. Gardens can’t be rushed. They respond to seasons, weather, and the particular needs of different plants. They require observation and adaptation, not standardized solutions.

Cognitive Bloom represents a growing movement in design and technology that’s pushing back against the extractive, attention-harvesting model that dominates our digital lives. It joins other projects reimagining what ethical, human-centered AI could actually look like when we design for wellbeing instead of engagement metrics. Whether Cognitive Bloom eventually becomes a physical product or remains a provocative concept, it’s already succeeded in making us reconsider our relationship with AI and personal data. Sometimes the most important innovations aren’t the ones that disrupt markets but the ones that disrupt our assumptions about what technology should be for.


Bring The Touch Bar Back… And Maybe Put An Intelligent Siri Or Gemini On It

Sounds radical, doesn’t it? The Touch Bar was such a waste of space on the MacBook Pro when it was first introduced exactly a decade ago in 2016. It shipped with a lot of potential but barely any real-world use, and Apple even considered swapping it out for a slot that housed the Apple Pencil back in 2021. While that feature never really came to pass, something else happened in 2021 that blew everyone’s minds – OpenAI’s Dall-E. For a lot of people, this was the first time you could just ‘tell’ an AI to make an image for you and it would. It was the birth of generative AI, and only a year later, OpenAI would break the internet with ChatGPT.

This is also around the time that Apple quietly killed the Touch Bar, but here’s my opinion… bring it back. Maybe not on the MacBook, but the Touch Bar definitely deserves a place on any independent wireless keyboard. With AI LLMs, progressive web apps, widgets, and vibe-coding going mainstream, a Touch Bar on a keyboard finally makes sense. It’s a place for your AI agent to live, alongside tasks, shortcuts, toolbars, and widgets. Apple pioneered the Touch Bar, but one could argue they were way too early to realize its potential. Now, a concept keyboard by Eslam Mohammed and Ahmed Yassen shows how the Touch Bar should be resurrected.

Designers: Eslam Mohammed & Ahmed Yassen

Mohammed and Yassen’s LUMO x700 keyboard comes with a few tricks up its sleeve. Sure, it sports a sleek, metal-forward Magic Keyboard-inspired design, but the thing also packs an end-to-end Touch Bar that’s about as tall as your standard key, making it a lot more usable than the actual Touch Bar, which was just as slim as the function key row. However, that isn’t all there is to this. A snap-on module turns the keyboard into a music player so you aren’t listening to tunes on your iMac or laptop’s fairly tinny speakers. All in all, this turns your keyboard into something a little more versatile than just ‘something you type on’. It now has an identity of its own, and can channel a level of productivity you’d only get with an Elgato-style accessory.

But wait! That modular soundbar isn’t just keyboard-dependent! It works independently too, allowing you to place it underneath the monitor or anywhere else on your desk for a wireless sound experience. The dual speakers fire stereo audio, buttons and a knob help tweak volume and playback, and the part that attaches to the LUMO x700 keyboard, well, there’s a hidden light-bar there to give your desk some ambient lighting. It’s all cleverly designed to ensure the module isn’t useless on its own. However, that Touch Bar is my predominant focus.

Why does a Touch Bar matter now more than ever? Well, we’re all multitasking, we’re all looking for extra real estate for displays, and almost all of us are running agents of some kind to automate tasks. That’s what this Touch Bar is for. Shortcuts to apps live in the center, widgets on the left, and maybe an AI chatbot on the right that you can deploy to talk to, ask questions to, or delegate tasks to. Claude just debuted a desktop-controlling agent called Claude Cowork that can run tasks and perform duties on your desktop on your command, and the infamous OpenClaw’s been taking the internet by storm for doing pretty much the same thing too. Obviously, such an AI will need to be vetted, and probably contained by a set of restrictions so it doesn’t go around leaking your data on a ‘Reddit for AI Agents’ or spending your cash (as OpenClaw has done in a few instances).

The rest of the Touch Bar experience goes on as originally intended. Active programs can reside within the bar too: a recorder interface, the player for music or video apps that lets you seek to different parts of a song or video, or the emoji keyboard that lets you cycle through emojis before pasting them. The potential is endless, and while independent Touch Bars like this one exist, we need to design one for an era of AI agents, applets, shortcuts, and widgets. It really is about time.


Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It

Mark Zuckerberg changed his company’s name to Meta in October 2021 because he believed the future was virtual. Not just sort-of virtual, like Instagram filters or Zoom calls, but capital-V Virtual: immersive 3D worlds where you’d work, socialize, and live a parallel digital life through a VR headset. Four years and roughly $70 billion in cumulative Reality Labs losses later, Meta is quietly dismantling that vision. In January 2026, the company laid off around 1,500 people from its metaverse division, shut down multiple VR game studios, killed its VR meeting app Workrooms, and effectively admitted that the grand bet on virtual reality had failed. Investors barely blinked. The stock went up.

The official line now is that Meta is pivoting to AI and wearables. Zuckerberg spent much of 2025 building what he calls a “superintelligence” lab, hiring top-tier AI talent with eye-watering compensation packages that are now one of the largest drivers of Meta’s 2026 expense growth. The company released Llama models that benchmark decently against OpenAI and Google, embedded chatbots into WhatsApp and Instagram, and talks constantly about “AI agents” and “new media formats.” But from a product and profit perspective, Meta’s AI strategy looks suspiciously like its metaverse strategy: lots of spending, vague promises, and no breakout consumer experience that people actually love. Meanwhile, the thing that is quietly working, the thing people are buying and using in the real world, is a pair of $300 smart glasses that Meta barely talks about. If this sounds like a pattern, that’s because it is. Meta has now misread the future twice in a row, and both times the answer was hiding in plain sight.

The Metaverse Was a $70 Billion Fantasy

Reality Labs has been hemorrhaging money since late 2020. As of early 2026, cumulative operating losses sit somewhere between $70 and $80 billion, depending on how you slice the quarters. In the third quarter of 2025 alone, Reality Labs posted a $4.4 billion loss on $470 million in revenue. For 2025 as a whole, the division lost more than $19 billion. These are not rounding errors or R&D investments that will pay off next year. These are structural losses tied to a product category, VR headsets and metaverse platforms, that the market simply does not want at the scale Meta imagined.

The vision sounded compelling in a keynote. You would strap on a Quest headset, meet your coworkers in a virtual conference room with floating whiteboards, then hop over to Horizon Worlds to hang out with friends as legless avatars. The problem was that almost no one wanted to do any of that for more than a demo. VR remained a niche gaming platform with occasional fitness and entertainment use cases, not the next paradigm shift in human interaction. Zuckerberg kept insisting the breakthrough was just around the corner. He was wrong, and the January 2026 layoffs and studio closures were the formal acknowledgment that Reality Labs as originally conceived was dead.

The irony is that Meta actually had a potential killer app inside Reality Labs, and it murdered it. Supernatural, a VR fitness game that Meta acquired for $400 million in 2023, was one of the few pieces of Quest software that generated genuine user loyalty and recurring revenue. People who used Supernatural regularly described it as the most effective home workout they had ever done, combining rhythm-based gameplay with full-body movement in a way that treadmills and Peloton bikes could not replicate. It had a subscription model, a dedicated community, and real retention. In January 2026, Meta moved Supernatural into “maintenance mode,” which is corporate speak for “we fired almost everyone and it will get no new content.” If you are trying to prove that VR has mainstream utility beyond gaming, fitness is one of the most obvious wedges. Meta had that wedge, and it chose to kill it in the same round of cuts that shuttered studios working on Batman VR games and other prestige titles. The message was clear: Zuckerberg had lost interest in Quest, even the parts that worked.

The AI Bet That Looks Like the ‘Metaverse Bust’ 2.0

After spending years insisting the future was virtual worlds, Meta pivoted hard to AI in 2023 and 2024. Zuckerberg now talks about AI the way he used to talk about the metaverse: with sweeping language about paradigm shifts and transformative platforms. The company stood up an AI division focused on building what it calls “superintelligence,” hired aggressively from OpenAI and Anthropic, and made technical talent compensation the second-largest contributor to Meta’s 2026 expense growth behind infrastructure. This is not a side project. Meta is spending billions on AI research, training, and deployment, and Zuckerberg expects losses to remain near 2025 levels in 2026 before they start to taper.

From a technical standpoint, Meta’s AI work is solid. The Llama family of models is legitimately competitive with GPT-4 class systems and has found real adoption among developers who want open-source alternatives to OpenAI and Google. Meta’s internal AI is also driving real business value in ad targeting, content ranking, and moderation. Those systems work, and they contribute directly to Meta’s core revenue. But from a consumer product perspective, Meta’s AI feels scattered and often unnecessary. The company has embedded “Meta AI” chatbots into WhatsApp, Instagram, Messenger, and Facebook, none of which feel like natural places for a chatbot. Instagram’s feed is increasingly stuffed with AI-generated images and engagement bait that users actively complain about. Meta has launched character-based AI bots tied to influencers and celebrities, and approximately no one uses them. The gap between “we have impressive models” and “we have a product people love” is enormous, and it is the exact same gap that sank the metaverse.

What Meta is missing, again, is product intuition. OpenAI built ChatGPT and made it feel like the future because the interface was simple, the use cases were obvious, and it delivered consistent value. Google integrated Gemini into Search and productivity tools where users were already working. Meta, by contrast, seems to be throwing AI at every surface it controls and hoping something sticks. Zuckerberg talks about “an explosion of new media formats” and “more interactive feeds,” which in practice means more algorithmic slop and fewer posts from people you actually know. Analysts are starting to notice. One Bernstein note from early 2026 argued that the “winner” criteria in AI is shifting from model quality to product usage, which is a polite way of saying that having a great model does not matter if your product is annoying. Meta has a great model. Its products are annoying.

The financial picture is also murkier than Meta would like to admit. Reality Labs is still losing close to $20 billion a year, and while AI is not a separate reporting segment, the talent and infrastructure costs are clearly rising. Meta’s overall revenue growth is strong, driven by advertising, but the company is not yet showing a clear path to AI profitability outside of ‘ad optimization’. That puts Meta in the awkward position of having pivoted from one unprofitable moonshot (metaverse) to another potentially unprofitable moonshot (consumer AI products) while the actual profitable parts of the business, social ads and engagement, keep the lights on. This is a pattern, and it is not a good one.

The Smart Glasses Lead That Meta Is Poised to Lose

Meta talks about the Ray-Ban smart glasses constantly. Zuckerberg calls them the “ultimate incarnation” of the company’s AI vision, and the pitch is relentless: sales more than tripled in 2025, the glasses represent the future of ambient computing, this is the post-smartphone platform. The problem is not that Meta is ignoring the glasses. The problem is that Meta is about to squander a massive early lead, and the competition is closing in fast. 2026 is shaping up to be a blockbuster year for smart glasses. Samsung confirmed its AR glasses are launching this year. Google is releasing its first pair of smart glasses since 2013, an audio-only pair similar to the Ray-Ban Meta glasses. Apple is reportedly pursuing its own smart glasses and shelved plans for a cheaper Vision Pro to prioritize the project. Meta dominated VR because it was early, cheap, and had no real competition. In smart glasses, that window is closing fast, and the field is getting crowded with all kinds of names, from smaller players like Looktech and Xgimi’s Memomind to mid-sized brands like Xreal, to even larger ones like Google, TCL, and Xiaomi.

The Ray-Ban Meta glasses work because they are simple and focused. They take photos and videos, play music, make calls, and provide real-time answers through an AI assistant. Parents use them to record their kids hands-free. Travelers use them for translation. The form factor, actual Ray-Ban Wayfarers that cost around $300, means they do not scream “I am wearing a computer on my face.” This is the rare Meta hardware product that feels intuitive rather than forced, and it is selling because it solves boring, everyday problems without requiring users to change their behavior.

Then Meta made a critical mistake. To use the glasses, you have to route everything through the Meta AI app, which means you cannot just power-use the hardware without engaging with Meta’s AI-slop ecosystem. Want to access your photos? Meta AI. Want to tweak settings? Meta AI. The app is the mandatory gateway, and it is stuffed with the same kind of algorithmic recommendations and AI-generated suggestions that clutter Instagram and Facebook. Instead of letting the glasses be a clean, utilitarian tool, Meta is using them as another vector to push its AI products. Google and Samsung are not going to make that mistake. Their glasses will integrate with Android XR and existing ecosystems without forcing users into a single AI app. Apple, if and when it launches, will almost certainly take a similar approach: clean hardware, seamless OS integration, optional AI features. Meta had a head start, Ray-Ban branding, and a product people actually liked. It is on track to waste all of that by prioritizing AI evangelism over product discipline, and the competition is going to eat its lunch.

What Happens When You Chase Narratives Instead of Products

The pattern across metaverse and AI is that Meta keeps betting on big, abstract visions rather than iterating on the things that work. Zuckerberg is a narrative-driven founder. He wants to define the future, not respond to it. That impulse gave us Facebook in 2004, when no one else saw the potential of real-identity social networks, but it has led Meta astray repeatedly in the 2020s. The metaverse was a narrative, not a product. The idea that billions of people would strap on headsets to work and socialize in 3D was always more science fiction than product roadmap, but Zuckerberg committed so hard to it that he renamed the company.

AI feels like the same mistake. The narrative is that foundation models and “agents” will transform every part of computing, and Meta wants to be seen as a leader in that transformation. The actual products, chatbots in WhatsApp and AI-generated feed content, do not meaningfully improve the user experience and in many cases make it worse. Meanwhile, the thing that is working, smart glasses, does not fit cleanly into the AI or metaverse narrative, so it gets less attention and investment than it deserves. Meta’s 2026 strategy, “shifting investment from metaverse to wearables,” is a tacit admission of this, but it is couched in language that still emphasizes AI rather than the hardware itself.

The other pattern is that Meta is willing to kill its own successes if they do not fit the broader narrative. The hit VR fitness game on Meta’s Horizon, Supernatural, was working. It had subscribers, retention, and cultural momentum within the VR fitness community. It was also a relatively small, specific product rather than a platform play, and that made it expendable when Meta decided to scale back Reality Labs. The same logic applies to Quest more broadly. The headset had carved out a niche in gaming and fitness, and with sustained investment in content and ecosystem development, it could have grown into a meaningful adjacent business. Instead, Meta is deprioritizing it because Zuckerberg has decided the future is AI and lightweight wearables. That might turn out to be correct, but the way Meta is executing the pivot, by shuttering studios and putting products in maintenance mode rather than spinning them out or finding partners, suggests a lack of product discipline.

Why Smart Glasses Might Actually Be the Next Facebook

If you step back and ask what Meta is actually good at, the answer is not virtual reality or language models. Meta is good at building social products with massive scale, capturing and distributing content, and monetizing attention through ads. The Ray-Ban Meta glasses fit all of those strengths. They make it easier to capture photos and video, which feeds into Instagram and Facebook. They use AI to provide contextual information, which ties into Meta’s model development. And they are a physical product that people wear in public, which is a form of distribution and branding that Meta has never had before.

The bigger story is that smart glasses as a category are exploding, and Meta happened to be early. It is not just Samsung, Google, and Apple entering the space. Meta itself is expanding the Ray-Ban line with the Ray-Ban Display (which adds a heads-up display) and partnering with Oakley on the HSTN, a sportier model aimed at action sports. Google is teaming up with Warby Parker for its glasses, which gives it instant credibility in eyewear design. And then there are the startups: Even Realities, Xiaomi, Looktech, MemoMind, and dozens more, all slated for 2026 releases. This feels exactly like the moment AirPods sparked the true wireless earbud movement. Apple defined the format, then everyone from Samsung to Sony to no-name brands flooded the market, and now you can buy HMD ANC earbuds for 28 dollars. Smart glasses are following the same trajectory, which means the form factor itself is validated, and Meta’s early lead matters less than whether it can keep iterating faster than everyone else.

The other underrated piece is that having an instant camera on your face is genuinely useful in ways that VR headsets never were. People are using Ray-Ban Meta glasses as GoPro alternatives while skateboarding, cycling, and doing action sports, because POV capture without holding a phone or mounting a camera is frictionless. Content creators are using them to shoot hands-free B-roll at events like CES. Parents are using them to record their kids playing without the weird “I am holding my phone up at the playground” vibe. Pet owners are capturing spontaneous moments with dogs and cats that would be impossible to get with a phone. These are not sci-fi use cases or metaverse fantasies. They are boring, real-world problems that the glasses solve immediately, and that is why they are selling. Meta has spent a decade chasing grand visions of the future, and it accidentally built a product that people want right now. The challenge is whether it can resist the urge to over-complicate it before Google, Samsung, and Apple catch up.

The Real Lesson Is About Focus

Meta has spent the last five years oscillating between grand visions, metaverse and AI, and neglecting the products that actually work. The Ray-Ban Meta glasses are proof that when Meta focuses on solving real problems with tangible products, it can still build things people want. The metaverse failed because it was a solution in search of a problem, and the AI push is struggling because Meta is shipping features rather than products. Smart glasses, by contrast, are succeeding because they make everyday tasks easier without requiring users to change their behavior or buy into a futuristic narrative.

If Zuckerberg can internalize that lesson, Meta might actually have a shot at owning the next platform. But that requires a level of product discipline and restraint that Meta has not shown in years. It means resisting the urge to turn every product into a platform, admitting when a bet has failed rather than pouring another $10 billion into it, and focusing on iteration over narration. The irony is that Meta already has the right product. It just needs to stop looking past it.


Teenage Engineering-inspired Music Sampler Uses AI In The Nerdiest Way Possible

The T.M-4 looks like it escaped from Teenage Engineering’s design studio with a specific mission: teach beginners how to make music using AI without making them feel stupid or churning out slop. Junho Park’s graduation concept borrows all the right cues from TE’s playbook (the modular control layout, the single bold color, the mix of knobs and buttons that practically beg to be touched) but redirects them toward a gap in the market. Where Teenage Engineering designs for people who already understand synthesis and sampling, the T.M-4 targets people who have ideas but no vocabulary to express them. The device handles the technical translation automatically, separating audio into layers and letting you manipulate them through physical controls. It feels like someone took the OP-1’s attitude and wired it straight into an AI stem separator.

The homage succeeds because Park absorbed what makes Teenage Engineering products special beyond their appearance. TE hardware feels different because it removes friction between intention and result, making complex technology feel approachable through thoughtful interface design and immediate tactile feedback. The T.M-4 brings that same thinking to AI music generation. You’re manipulating machine learning model parameters when you adjust texture, energy, complexity, and brightness, but the physical controls make it feel like direct manipulation of sound rather than abstract technical adjustment. An SD card system lets you swap AI personalities the way you would swap game cartridges on a console – something very hardware, very tactile, very TE. Instead of drowning in model settings, you collect cards that give the AI different characters, making experimentation feel natural rather than intimidating.

Designer: Junho Park

What makes this cool is how it attacks the exact point where most beginners give up. Think about the first time you tried to remix a track and realized you had no clean drums, no isolated vocals, nothing you could really move around without wrecking the whole thing. Here, you feed audio in through USB-C, a mic, AUX, or MIDI, and the system just splits it into drum, bass, melody, and FX layers for you. No plugins, no routing, no YouTube rabbit hole about spectral editing. Suddenly you are not wrestling with the file, you are deciding what you want the bass to do while the rest of the track keeps breathing.
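
Park’s concept doesn’t specify which separation model it runs, but the workflow it describes already exists in off-the-shelf tools. Below is a minimal sketch using the open-source Demucs separator as a stand-in, assuming it has been installed with pip; note that Demucs splits audio into drums, bass, vocals, and “other” rather than the T.M-4’s drum, bass, melody, and FX layers, and the folder layout assumes its default htdemucs model.

```python
# Minimal sketch of the "feed audio in, get layers back" workflow, with the
# open-source Demucs separator (pip install demucs) standing in for the T.M-4's
# internal AI. This is an approximation of the idea, not the device's pipeline.
import subprocess
from pathlib import Path

def split_into_layers(track: str, out_dir: str = "stems") -> dict[str, Path]:
    """Run Demucs on a track and return a stem-name -> file mapping."""
    subprocess.run(["demucs", "--out", out_dir, track], check=True)
    stem_dir = Path(out_dir) / "htdemucs" / Path(track).stem  # default model's output folder
    return {p.stem: p for p in stem_dir.glob("*.wav")}

layers = split_into_layers("rough_idea.wav")
print(list(layers))  # e.g. ['drums', 'bass', 'other', 'vocals']
```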

The joystick and grid display combo helps simplify what would otherwise be a fairly daunting piece of gear. Instead of staring at a dense DAW timeline, you get a grid of dots that represent sections and layers, and you move through them like you are playing with a handheld console. That mental reframe matters. It turns editing into navigation, which is far less intimidating than “production.” Tie that to four core parameters, texture, energy, complexity, brightness, and you get a system that quietly teaches beginners how sound behaves without ever calling it a lesson. You hear the track get busier as you push complexity, you feel the mood shift when you drag energy down, and your brain starts building a map.

Picture it sitting next to a laptop and a cheap MIDI keyboard, acting as a hardware front end for whatever AI engine lives on the computer. You sample from your phone, your synth, a YouTube rip, whatever, then sculpt the layers on the T.M-4 before dumping them into a DAW. It becomes a sort of AI sketchpad, a place where ideas get roughed out physically before you fine tune them digitally. That hybrid workflow is where a lot of music tech is quietly drifting anyway, and this concept leans straight into it.

Of course, as a student project, it dodges the questions about latency, model size, and whether this thing would melt without an external GPU. But as a piece of design thinking, it lands. It treats AI as an invisible assistant, not the star of the show, and gives the spotlight back to the interface and the person poking at it. If someone like Teenage Engineering, or honestly any brave mid-tier hardware company, picked up this idea and pushed it into production, you would suddenly have a very different kind of beginner tool on the market. Less “click here to generate a track,” more “here, touch this, hear what happens, keep going.”


These 5 AI Modules Listen When You Hum, Tap, or Strum, Not Type

AI music tools usually start on a laptop where you type a prompt and wait for a track. That workflow feels distant from how bands write songs, trading groove and chemistry for text boxes and genre presets. MUSE asks what AI music looks like if it starts from playing instead of typing, treating the machine as a bandmate that listens and responds rather than a generator you feed instructions.

MUSE is a next-generation AI music module system designed for band musicians. It is not one box but a family of modules: vocal, drum, bass, synthesizer, and electric guitar, each tuned to a specific role. You feed each one ideas the way you would feed a bandmate, and the AI responds in real time, filling out parts and suggesting directions that match what you just played.

Designers: Hyeyoung Shin, Dayoung Chang

A band rehearsal where each member has their own module means the drummer taps patterns into the drum unit, the bassist works with the bass module to explore grooves, and the singer hums into the vocal module to spin melodies out of half-formed ideas. Instead of staring at a screen, everyone is still moving and reacting, but there is an extra layer of AI quietly proposing fills, variations, and harmonies.

MUSE is built around the idea that timing, touch, and phrasing carry information that text prompts miss. Tapping rhythms, humming lines, or strumming chords lets the system pick up on groove and style, not just genre labels. Those nuances feed the AI’s creative process, so what comes back feels more like an extension of your playing than a generic backing track cobbled together from preset patterns.

The modules can be scattered around a home rather than living in a studio. One unit near the bed for late-night vocal ideas, another by the desk for quick guitar riffs between emails, a drum module on the coffee table for couch jams. Because they look like small colorful objects rather than studio gear, they can stay out, ready to catch ideas without turning the house into a control room.

Each module’s color and texture match its role: a plush vocal unit, punchy drum block, bright synth puck, making them easy to grab and easy to live with. They read more like playful home objects than intimidating equipment, which lowers the barrier to experimenting. Picking one up becomes a small ritual, a way to nudge yourself into making sound instead of scrolling or staring at blank sessions.

MUSE began with the question of how creators can embrace AI without losing their identity. The answer it proposes is to keep the musician’s body and timing at the center, letting AI listen and respond rather than dictate. It treats AI as a bandmate that learns your groove over time, not a replacement, and that shift might be what keeps humans in the loop as the tools get smarter.


Cambridge Just Designed the Voice Device Every Stroke Survivor Wanted

There’s something almost poetic about a piece of technology that looks like a fashion accessory but can fundamentally change someone’s life. That’s exactly what researchers at the University of Cambridge have created with Revoice, a soft, flexible choker that helps stroke survivors speak again.

Around 200,000 people in the U.S. experience speech difficulties after a stroke each year. Many lose the ability to form words clearly or struggle to express complete thoughts, a condition called dysarthria. For years, the options have been limited to speech therapy, typing on communication boards, or experimental brain implants that require surgery. Revoice offers something different: a wearable device you can put on like jewelry and throw in the wash when you’re done.

Designer: scientists from the University of Cambridge

What makes this device fascinating is how it works. The choker sits comfortably against your throat and does two things at once. First, it picks up the tiniest vibrations from your throat muscles when you mouth words, even if no sound comes out. Second, it tracks your heart rate, which gives clues about your emotional state, whether you’re frustrated, anxious, or calm.

These signals get sent to two AI systems working together. The first AI agent focuses on reconstructing what you’re trying to say based on those throat vibrations. It’s essentially reading the intention behind silent or partial speech. The second agent takes things further by expanding short phrases into full, natural sentences. So if you manage to mouth “need help,” the system might generate “I need help with something, can you come here?” complete with the right emotional tone based on your heart rate data.
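
Cambridge hasn’t released the models themselves, so the sketch below is only a toy version of that two-stage flow with both agents stubbed out; the 100 bpm urgency threshold and the sentence templates are invented for illustration.

```python
# Toy sketch of Revoice's two-agent flow as described above. Both models are
# stubs: reconstruct_phrase() stands in for the vibration-to-text model and
# expand_phrase() for the language model that builds the full sentence. The
# heart-rate threshold and wording are illustrative assumptions.

def reconstruct_phrase(throat_vibrations: list[float]) -> str:
    """Stage 1: decode silent or partial speech from throat-muscle vibrations."""
    return "need help"  # stand-in for the sensor-decoding model's output

def expand_phrase(phrase: str, heart_rate_bpm: float) -> str:
    """Stage 2: expand the short phrase into a full sentence with emotional tone."""
    urgent = heart_rate_bpm > 100
    if phrase == "need help":
        return ("I need help right now, please come quickly!" if urgent
                else "I need help with something, can you come here?")
    return phrase

vibration_frames = [0.01, 0.03, 0.02]  # placeholder sensor readings
print(expand_phrase(reconstruct_phrase(vibration_frames), heart_rate_bpm=112))
```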

Think about what this means. Instead of laboriously spelling out every word on a screen or pointing at pictures on a board, you can have fluid conversations again. Your family hears full sentences. You can express nuance and emotion, not just basic needs. The device aims to give people back something invaluable: their natural communication style. The technology builds on recent advances in AI and sensor miniaturization. These aren’t the bulky medical devices of the past. The choker is designed to be discreet and comfortable enough to wear all day. It’s washable, which means it fits into normal life without requiring special care or maintenance. You’re not announcing to everyone that you’re using assistive technology unless you want to.

What’s particularly clever is how the system learns. Current speech assistance tools often require extensive training periods where users must adapt to the technology’s limitations. Revoice flips this approach by using AI that can understand variations in how people try to speak. It works with what you can do rather than forcing you to work around what it can’t. The emotional intelligence aspect shouldn’t be overlooked either. When the device detects an elevated heart rate, it can adjust the tone of generated speech to reflect urgency or stress. This might seem like a small detail, but emotional expression is fundamental to human communication. Being able to convey that you’re upset or excited transforms a conversation from transactional to genuinely human.

Right now, Revoice is still in development and will need more extensive clinical trials before it reaches the market. The research team published their findings in the journal Nature Communications. They’re also planning to expand the system to support multiple languages and a wider range of emotional expressions, which would make it accessible to diverse populations worldwide. For the design and tech communities, Revoice represents a perfect intersection of form, function, and empathy. It’s a reminder that the best innovations don’t just solve problems technically, they solve them in ways that respect dignity and daily life. No surgery, no stigma, just a well-designed tool that helps people communicate.


Apple’s Secret AI Pin Looks Like an AirTag and it Might Just Kill The Smartwatch


Apple’s wearable future might not be strapped to your wrist at all. According to new reports, the company is developing an AI-powered pin about the size of an AirTag, complete with dual cameras, microphones, and a speaker. The device would clip onto clothing or bags, marking a deliberate shift away from the smartwatch form factor that has dominated wearable tech for the past decade.

If the rumors prove accurate, this circular aluminum-and-glass device could launch as early as 2027, running Apple’s upcoming Siri chatbot and leveraging Google’s Gemini AI models. The company appears to be betting that consumers want ambient AI assistance without constantly pulling out their phones or glancing at their watches. Whether this gamble pays off remains to be seen, especially given the struggles of similar devices like Humane’s now-defunct AI Pin.

Designer: Apple


The hardware specs sound modest on paper but reveal something about Apple’s thinking. Two cameras sit on the front: one standard lens, one wide-angle. Three microphones line the edge for spatial audio pickup. A speaker handles output. Physical button for tactile control. Magnetic inductive charging on the back, identical to the Apple Watch system. The whole thing supposedly stays thinner than you’d expect from something packing this much capability. What strikes me most is the screenless design, which tells you Apple learned something from watching Humane crash and burn trying to replace phones with projectors and awkward gesture controls.


Because here’s the thing about AI wearables so far: they’ve all suffered from an identity crisis. The Humane AI Pin wanted to be your phone replacement but couldn’t handle basic tasks without overheating or dying within hours. Motorola showed off something similar at CES 2026, demonstrating a level of agentic control that was still in beta but impressive nevertheless. Apple seems to be taking notes from both the failure of the former and the potential success of the latter. A screenless pin that relies entirely on voice, environmental awareness, and audio feedback has clear limitations, which paradoxically might be its greatest strength.

Motorola’s AI Pendant at CES 2026

The timing lines up with Apple’s Siri overhaul coming in iOS 27. They’re rebuilding the assistant from scratch as a proper conversational AI, and they’ve partnered with Google to tap into Gemini models for the heavy lifting. Smart move, actually. Apple’s in-house AI efforts have been mediocre at best, and licensing Google’s tech lets them skip years of expensive catch-up work. This pin becomes the physical embodiment of that strategy: a purpose-built device for ambient AI that doesn’t pretend to be anything else. You clip it on, it listens and watches, you talk to it, it responds. Simple interaction model.

But I keep circling back to the same question: who actually wants this? Your iPhone already has cameras, microphones, and Siri access. Your Apple Watch gives you wrist-based notifications and quick voice commands. AirPods put computational audio directly in your ears. Apple’s ecosystem already covers every conceivable wearable surface area. Adding a clip-on camera pin feels like solving a problem nobody has, or worse, creating a new product category just because the technology allows it. The 38.5-gram weight of competing devices like Rokid’s AI glasses shows manufacturers obsess over comfort, but comfort alone doesn’t justify purchase.


The 2027 timeline is far enough out that Apple can quietly kill this project without anyone noticing, exactly like they did with the Apple Car. They’ve got a pattern of floating ambitious ideas internally, letting engineers explore possibilities, then axing things that don’t meet their standards or market conditions. Sometimes that discipline saves them from embarrassing product launches. Sometimes it means we never get to see genuinely interesting experiments. This AI pin could go either way, and frankly, Apple probably hasn’t decided yet either. They’re watching how the market responds to early AI wearables, gauging whether spatial computing takes off with Vision Pro, and waiting to see if their Siri rebuild with Google’s Gemini actually works before committing manufacturing resources.


5 AI Devices That Just Made Smartphones Look Obsolete in 2026

The year 2026 marks a historic pivot in personal technology. We are moving past the era of the “AI chatbot” trapped inside a website and entering the age of ambient hardware. While 2025 was defined by software experimentation, 2026 is the year when specialized AI silicon, smart glasses, and wearable pins have matured into indispensable daily companions.

These next-gen devices aren’t just faster smartphones; they represent a fundamental shift in how we interact with the digital world. By integrating intelligence directly into our physical presence, the “AI in your pocket” has evolved from a reactive tool into a proactive partner that anticipates our needs before we even voice them.

1. The Post-Smartphone Device

The traditional glass rectangle is no longer the sole gateway to the internet. In 2026, we are seeing the rise of screenless interfaces and augmented reality glasses that prioritize voice and gesture over scrolling. Devices like AI-powered rings and lightweight smart glasses have moved from niche gadgets to mainstream essentials, offering a “heads-up” lifestyle that keeps users engaged with the real world.

A desire for frictionless interaction drives this hardware shift. Instead of pulling out a phone to navigate or translate, users simply look at a sign or speak to their lapel pin. These devices are designed to disappear into our daily attire, making technology an invisible but powerful layer of our human experience rather than a constant distraction.

The Acer FreeSense Ring represents a refined advancement in wearable technology, offering continuous health monitoring in a compact, stylish form. Crafted from lightweight titanium alloy, the ring is slim, measuring 2.6mm in thickness and 8mm in width, and weighs only 2 to 3 grams. Its design balances elegance and practicality, available in finishes such as rose gold and glossy black, and water-resistant up to 5 ATM. With seven size options, it ensures a comfortable fit for a wide range of users. The ring is intended to complement traditional watches, providing wellness tracking without overwhelming the wearer with bulk or complexity.

Equipped with advanced biometric sensors, the FreeSense Ring tracks heart rate, heart rate variability, blood oxygen saturation, and sleep quality. Data is processed through a dedicated mobile application, which transforms readings into actionable, AI-driven wellness insights and personalized recommendations. Its detailed sleep analysis and continuous monitoring enable users to manage health proactively. By integrating sophisticated design with advanced biometric intelligence, the FreeSense Ring delivers an elegant and practical solution for modern wellness management.

2. On-Device Intelligence Systems

One of the biggest breakthroughs in 2026 is the move away from the cloud, made possible by massive leaps in Neural Processing Units (NPUs). As a result, your device no longer requires a constant internet connection to “think.” Complex reasoning and language processing now happen directly on the hardware in your pocket, resulting in near-zero latency.

This shift to “Edge AI” means your personal assistant is faster and more reliable than ever. Whether you are in a remote hiking spot or a crowded subway, your device can translate languages and organize your schedule offline. By keeping the “brain” of the AI on the device, manufacturers have finally solved the lag issues that plagued early generations of AI hardware.
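
To make the idea concrete, here is a minimal sketch of fully local inference using the Hugging Face Transformers library. The translation task and the Helsinki-NLP/opus-mt-en-fr checkpoint are just examples rather than what any particular 2026 device ships; after the one-time download, everything below runs without a network connection.

```python
# Minimal sketch of on-device ("edge") inference: once the checkpoint is cached
# locally, this runs entirely offline. The model choice is an illustrative
# example, small enough for laptop- or phone-class hardware.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

print(translator("Where is the nearest train station?")[0]["translation_text"])
```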

The CL1 by Cortical Labs is the world’s first commercially available biological computer, integrating living human neurons with silicon hardware in a compact, self-contained system. Rather than relying on conventional software models, the CL1 uses lab-grown neurons cultured on an electrode array, allowing them to form, modify, and strengthen connections in real time. This enables the device to process information biologically, learning dynamically through interaction instead of pre-trained algorithms or large datasets.

At the core of the CL1 is Synthetic Biological Intelligence (SBI), a hybrid computing approach that combines biological adaptability with machine precision. The neurons respond to electrical stimulation by reorganizing their connections, closely mirroring natural learning processes in the human brain. This results in exceptional energy efficiency and high responsiveness compared to traditional AI systems. Designed as a research-grade platform, the CL1 offers scientists a new way to study neural behavior, test compounds, and explore adaptive intelligence, positioning it as a foundational product in the emerging field of biological computing.

3. Rethinking App-Centric UX

We are witnessing the slow death of the traditional app icon grid. In 2026, next-gen devices utilize Agentic AI, which allows your pocket companion to navigate services on your behalf. Instead of you opening a travel app, a hotel app, and a calendar app to book a trip, you give one command. Your AI agent handles the cross-platform logistics autonomously.

This transition from “apps” to “actions” has redefined the user interface. Our devices have become executive assistants that understand our preferences across every service we use. The friction of toggling between dozens of different interfaces is being replaced by a single, unified conversation that gets things done, effectively turning the operating system into a proactive worker rather than a static menu.
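
A toy sketch of that shift from apps to actions is below. The tool functions, the keyword planner, and the trip itself are hypothetical stand-ins; a real agent would let an LLM choose the tools and call genuine service APIs.

```python
# Toy sketch of "apps to actions": one request fans out to tool calls instead of
# the user hopping between apps. Every tool is a stub, and the crude planner
# stands in for the LLM that would pick tools and arguments in a real agent.

def book_flight(destination: str) -> str:
    return f"Flight to {destination} booked (stub)"

def book_hotel(destination: str) -> str:
    return f"Hotel in {destination} reserved (stub)"

def add_calendar_event(title: str) -> str:
    return f"Calendar event added: {title} (stub)"

def run_agent(request: str) -> list[str]:
    """Extract a destination, then run a fixed flight/hotel/calendar plan."""
    destination = request.rsplit(" ", 1)[-1]
    return [
        book_flight(destination),
        book_hotel(destination),
        add_calendar_event(f"Trip to {destination}"),
    ]

for step in run_agent("Plan a long weekend in Lisbon"):
    print(step)
```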

The Lepro TB1’s defining feature is its AI-powered LightGPM 2.0 system, developed using principles of color psychology and professional lighting design. The system is capable of generating refined lighting scenes from billions of possible combinations, delivering precise, task-appropriate illumination without requiring manual configuration. Through simple voice commands such as “Hey Lepro,” users can activate lighting modes tailored for activities such as gaming or social gatherings. The AI interprets intent in real time and produces a balanced, professional-grade ambience with minimal user intervention.

The product also incorporates a built-in microphone and LightBeats technology, enabling lighting to synchronize dynamically with music, while segmented control allows detailed customization across different sections of the lamp. By combining intelligent scene generation, hands-free interaction, and a distinctive sculptural form, the TB1 positions itself as a forward-looking lighting solution. It enhances modern living environments through responsive, adaptive illumination that prioritizes ease of use and functional design.

4. Sensory-Driven Artificial Intelligence

Next-gen devices in 2026 are no longer blind to their surroundings. Equipped with high-fidelity microphones and low-power cameras, these pocket companions possess contextual awareness. They can “see” the ingredients on your kitchen counter to suggest a recipe or “hear” the tone of a meeting to provide real-time talking points or summaries that capture subtle emotional cues.

This sensory integration allows the AI to offer help that is actually relevant to your current environment. It isn’t just processing text; it is understanding your physical reality. By merging visual, auditory, and biometric data, your 2026 device acts as a second set of eyes and ears, providing a level of personalized support that was previously confined to science fiction.

The Humane AI Pin was introduced as a bold vision of screenless, context-aware computing, promising an AI-powered future worn discreetly on the body. For many early adopters, however, the device quickly lost functionality after the discontinuation of its cloud services, rendering its advanced features inoperative. What remained was a piece of thoughtfully engineered hardware—complete with a miniature projector, sensors, microphones, and cameras—stranded without a viable software ecosystem. As a result, the Pin became a notable example of how tightly coupled hardware and proprietary services can limit a product’s long-term relevance.

This narrative has begun to shift with the emergence of PenumbraOS, an experimental software platform developed through extensive reverse engineering. By reimagining the AI Pin as a specialized Android-based device, PenumbraOS unlocks privileged system access and introduces a modular assistant framework to replace the original interface. This effort reframes the Humane AI Pin not as a failed product, but as a capable development platform with renewed potential. Through open-source collaboration, the device now serves as a case study in how community-led innovation can extend the life and value of forward-thinking hardware.

5. Data in Your Pocket

As AI becomes more personal, the demand for “Data Sovereignty” has reached a fever pitch. 2026 hardware solves the “creepy” factor through hardware-level privacy vaults. Because the majority of AI processing now happens locally, your most sensitive conversations, health data, and private photos never have to leave the physical device to be processed in a distant corporate data center.

This “Privacy by Design” approach has built a new level of trust between users and their machines. With encrypted local storage and physical kill switches for sensors, next-gen devices ensure that your digital twin remains yours alone. In a world where data is the most valuable currency, the 2026 device serves as a secure fortress that protects your personal identity while amplifying your capabilities.
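
For a sense of what keeping data on the device can mean in practice, here is a minimal sketch using the Python cryptography package’s Fernet recipe, assuming that package is installed. Real hardware would hold the key in a secure enclave rather than a file, so treat this as an illustration of the principle, not a production vault.

```python
# Minimal sketch of "data stays on the device": a health record is encrypted
# with a locally held key before it ever touches disk, and decrypted locally
# when the on-device model needs it. Keeping the key in a plain file is a
# simplification of what a secure enclave would do.
from pathlib import Path
from cryptography.fernet import Fernet

key_file, vault_file = Path("device.key"), Path("health_vault.bin")

# Generate the key once and keep it on-device.
if not key_file.exists():
    key_file.write_bytes(Fernet.generate_key())
fernet = Fernet(key_file.read_bytes())

# Encrypt a reading locally; nothing leaves the machine.
record = b'{"resting_hr": 58, "sleep_hours": 7.4}'
vault_file.write_bytes(fernet.encrypt(record))

# Later, decrypt it locally for on-device processing.
print(fernet.decrypt(vault_file.read_bytes()).decode())
```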

The Light Phone III is a purpose-built device designed around simplicity, privacy, and intentional use. It features a 3.92-inch black-and-white OLED display that replaces the earlier e-ink screen, offering sharper visuals, faster response, and improved legibility across lighting conditions. The interface is minimal and distraction-free, supporting essential functions such as calls, messages, navigation, music, podcasts, and notes. Powered by a Qualcomm SM4450 processor with 6GB of RAM and 128GB of storage, the device delivers smooth performance while remaining firmly limited to core tasks.

The product introduces a single, straightforward camera with a fixed focal length and a physical two-stage shutter button, emphasizing documentation over content creation. Its compact, solid form factor includes a user-replaceable battery, fingerprint sensor integrated into the power button, stereo speakers, USB-C charging, NFC, and GPS that prioritizes user privacy. Every design decision reflects a restrained, ethical approach to personal technology, positioning the Light Phone III as a secure, focused alternative to conventional smartphones.

The “AI in your pocket” is no longer a futuristic promise but the standard for 2026. By moving intelligence to the edge, embracing agentic workflows, and prioritizing local privacy, next-gen devices have successfully bridged the gap between human intent and digital execution. We are no longer merely using technology; we are living alongside it.
