Motorola’s AI Pendant Turns Conference Talks Into LinkedIn Posts

There’s a particular kind of friction that comes with using AI during moments that actually matter. You’re in a meeting or a keynote, and consulting your phone means breaking focus, fumbling with a screen, and silently signaling to everyone around you that you’d rather be somewhere else. Motorola’s 312 Labs team identified this as a design problem worth solving, and Project Maxwell is what came out of it.

The device is a pendant, small enough to disappear against a shirt, worn on a metal chain with a rounded rectangular body that wouldn’t look out of place as functional jewelry. At one end sits a wide-angle camera lens in a dark housing, flanked by a slim LED indicator. It comes in a range of distinct finishes: a tortoiseshell amber with deep brown gradients, a matte navy with woven textile-like texture, a sculptural marbled white, and a deep chocolate brown.

Designer: Motorola

When prompted, Project Maxwell continuously captures what you see and hear, then processes that through what Motorola calls Multimodal Perception Fusion, combining input from its camera, microphones, and sensors to deliver real-time, contextual recommendations. The second technical layer, Natural Language Interaction and Intention Capture, is built on Large Action Models that don’t just respond to queries but execute tasks. The difference between describing an action and performing it is exactly the point.

Motorola illustrates the concept with a conference scenario: you prompt Maxwell before a keynote, let it absorb the room, and walk out with a ready-to-edit LinkedIn post, without opening a single app. The idea is that AI works best when it fits into what you’re already doing rather than demanding you stop to interact with it. That’s not a new pitch for wearable tech, but it’s rarely been this well-considered from a form standpoint.

Real questions remain, and Motorola is the first to say so. Project Maxwell is a proof of concept without pricing, a release date, or confirmed hardware specifications. The concerns around continuous environmental capture, consent, and data handling tend to get louder the closer a device like this gets to an actual shelf. How those boundaries get communicated in any future product will matter as much as the hardware.

What 312 Labs has made clear is that Maxwell’s learnings feed directly into Motorola’s Qira AI ecosystem. Even if this exact pendant never ships, the interaction model it’s testing (hands-free, context-aware, and action-capable) is the direction Motorola is heading. The more interesting question isn’t whether a wearable AI pendant is useful. It’s whether people will actually want to wear one.

The post Motorola’s AI Pendant Turns Conference Talks Into LinkedIn Posts first appeared on Yanko Design.

Yanko Design’s Best of MWC 2026: When Engineering Gets Obsessive

Every year, MWC arrives like a controlled flood of announcements, each one louder than the last. Cameras with more megapixels, batteries with bigger numbers, screens with higher refresh rates than the human eye can meaningfully appreciate. It’s easy to walk away from Barcelona with a head full of specs and no clear sense of what any of it actually felt like to hold, use, or live with. The products that matter don’t always win the spec sheet battle.

The ones worth paying attention to are the ones built around a specific, almost stubborn design conviction. A team that decided thinness wasn’t a compromise but the whole point. Engineers who spent years rethinking how a GPS antenna sits inside a running watch. Designers who asked what a laptop would look like if it finally adapted to the user instead of demanding the opposite. Those are the products that stopped people on the MWC 2026 show floor, and these are the design decisions that made them worth stopping for.

HUAWEI WATCH GT Runner 2 Smartwatch

GPS watches for runners have always played both sides of a strange contradiction: the more seriously you take running, the more you end up wearing a small computer that weighs down your wrist and distracts you with irrelevant notifications. Huawei’s answer to that tension is the Watch GT Runner 2, a dedicated running watch built around the single question of what a wrist-worn device actually needs to do well for someone logging serious miles.

Five years of development went into the GPS architecture, which tells you where Huawei’s engineering priorities landed. The 3D floating antenna design, paired with an intelligent converged positioning algorithm, claims 20% better accuracy than its predecessor, holding signal through tunnels and tree cover where most watches lose the thread. The body itself is nanomolded aerospace-grade titanium at just 34.5 grams, with a 10.7mm profile that doesn’t fight the wrist wearing it.

Designer: Huawei

The Intelligent Marathon Mode is where the Huawei Watch GT Runner 2 really shines. Developed alongside the dsm-firmenich Running Team, it functions as an on-wrist coach with customized training plans, real-time pace charts, a digital pacer showing how far ahead or behind your target you are, and a personalized fueling reminder so you don’t bonk at kilometer 30. Performance prediction uses your Running Ability Index and physical data to estimate finish times, which either motivates you or quietly humbles you.

Health monitoring goes beyond the usual heart rate and step counts. ECG analysis triggers 30 minutes post-exercise, HRV is tracked throughout the day, and the PPG sensor can flag potential atrial fibrillation risks. Battery life reaches 32 hours in outdoor workout mode with GPS active, backed by a cell with 68% higher energy density than the previous generation. Curve Pay integration also lets you leave your phone and wallet behind on long runs entirely.

The Huawei Watch GT Runner 2 covers both ends of the spectrum, from amateurs wanting a smart training companion to athletes chasing records with lactate threshold and power metrics. At 34.5 grams with a breathable AirDry woven strap, it’s built to disappear on your wrist. What remains to be seen is whether marathon coaching calibrated with elite runners translates meaningfully to the rest of us.

MemoMind One AI Glasses

Most AI glasses have made the same mistake: designing around the technology first and hoping the wearability sorts itself out later. The result is eyewear that signals to everyone around you that something unusual is happening on your face. MemoMind, a new AI hardware brand incubated by projector company XGIMI, took the opposite approach with its debut product, building from a decade of optical engineering experience to make glasses that simply look like glasses.

The MemoMind One is the flagship of the lineup, combining integrated speakers with a dual-eye air display that layers information over your field of view without demanding your full attention. The multi-LLM hybrid operating system handles real-time translation, voice summaries, transcription, and contextual reminders, all accessible through head-motion controls and a conversational interface. Since its CES 2026 debut, software updates have expanded navigation integration and refined how the AI delivers information without interrupting natural interaction.

Designer: XGIMI

Personalization sits at the center of the MemoMind design philosophy in a way most wearable tech ignores entirely. Frames are fully customizable, temples are interchangeable, and the glasses support prescription lenses, meaning you can actually wear them as your everyday eyewear rather than carrying a second pair of frames. That design decision alone separates MemoMind from most competitors, where the hardware dictates the look and the wearer adapts accordingly.

The broader MemoMind lineup shows how deliberately the brand has thought through different user needs. The MemoMind Air Display weighs just 28.9 grams and uses a single-eye monocular display for a lighter-touch AI presence, aimed at commuters and minimalists who want information without visual density. The MemoMind Air goes further still, dropping the display entirely for a microphone-only model that makes the AI presence nearly invisible, present when useful and undetectable when not.

MemoMind One is set for preorder in April 2026, with the Air Display and Air models following later in the year. What XGIMI has built here is a clear and considered answer to the question of how AI should sit on your face: quietly, comfortably, and without announcing itself to the room. The design conviction behind MemoMind is that the best wearable AI is the kind you stop noticing you’re wearing.

Honor Robot Phone Concept

Smartphones have been flat rectangles for so long that the design conversation around them has largely shifted to cameras, refresh rates, and how thin the bezels are. Honor arrived at MWC 2026 with a genuinely different question: what if the phone itself could move? The Robot Phone concept puts a 4DoF gimbal system inside a handheld device, built around what Honor calls the industry’s smallest micro motor, with the motor size reduced by 70% compared to existing solutions.

Designer: Honor

The gimbal does two distinct things, and they pull in interestingly different directions. On the imaging side, three-axis mechanical stabilization works alongside an AI stabilization engine to keep footage steady through complex, dynamic movement. A double-tap locks the AI onto any subject, tracking it even through sudden changes or brief obstructions. Honor also introduced an AI Spinshot mode, supporting 90-degree and 180-degree rotations, a move that borrows directly from cinema camera rigs and scales it down to one hand.

The second application is where the concept gets harder to categorize. Honor has designed the gimbal to express what it calls embodied AI interaction, meaning the phone physically responds to what’s happening around it. It nods during agreement in video calls, adjusts its orientation to keep you in frame automatically, and moves to the rhythm of music playing through its speakers. These are features that a spec sheet cannot really describe, and that makes the Robot Phone one of the more genuinely curious things shown at MWC 2026, even as a concept still working toward a commercial release.

Xiaomi Vision Gran Turismo EV Concept

The Vision Gran Turismo program is where car brands go to design without consequences. No production targets, no crash tests, no accountants in the room. Ferrari has done it. Porsche has done it. Now Xiaomi, a company that started by selling smartphones and rice cookers, has become the 36th brand to join and the first technology company ever invited. Gran Turismo producer Kazunori Yamauchi extended the invitation personally at the GT World Series in London.

Designer: Xiaomi

The design problem Xiaomi decided to obsess over is one every hypercar team faces: low drag gives you straight-line speed, high downforce gives you corners, and optimizing hard for either one usually compromises the other. Xiaomi’s answer was to eliminate the trade-off entirely by building aerodynamics into the body itself. No bolted-on wings, no add-on splitters. A teardrop cockpit, airfoil-shaped structural members, and embedded channels that guide air from nose to tail. The Accretion Rims are the detail worth pausing on: magnetically held wheel covers that stay perfectly still while the wheels rotate beneath them, cooling the brakes through internal turbine fins while cutting drag from spinning surfaces.

Inside, Xiaomi replaced the usual carbon-and-leather tension of a hypercar cockpit with something it calls the Sofa Racer, a continuous loop of dashboard, doors, and seating upholstered in 3D-knitted fabric pulled from sportswear manufacturing. The Xiaomi Pulse system reads driver state through sensors and responds through light and sound rather than screens and alerts. It all connects to Xiaomi’s broader Human x Car x Home ecosystem, which is either a genuinely interesting idea about how cars fit into a connected life, or a lot of ecosystem language wrapped around a very beautiful virtual concept car.

TECNO Modular Magnetic Interconnection Technology

The modular phone idea has been attempted before, most famously by Google’s Project Ara, which spent years promising a phone you could rebuild like Lego before quietly disappearing in 2016. The premise was compelling, and the execution proved stubborn. TECNO’s approach at MWC 2026 is different in one important way: rather than replacing the phone’s internal components, the Modular Magnetic Interconnection Technology keeps the phone slim and complete on its own, then lets you snap additional hardware onto it magnetically when you actually need it.

Designer: TECNO

The concept arrives in two visual flavors, ATOM and MODA, but the underlying system is the same across both. Over a dozen modules compose the Customizable Modular Suite, covering stackable battery packs, action cameras, telephoto lenses, and more, each attaching and communicating through the magnetic interconnection system. The scale and visual coherence of the accessory ecosystem is genuinely striking. Everything shares a design language, sits flush when attached, and reads as a single object rather than a phone with things stuck to it.

The ATOM edition makes the clearest design statement of the two, with its white and red palette, ribbed surfaces, and a camera module that looks pulled straight from a mirrorless system. TECNO’s core argument is that keeping the phone genuinely slim in daily use, while letting the modules handle the heavier lifting on demand, sidesteps the trade-off that has defined smartphone design for years. Add what you need, remove what you don’t, and the phone adapts to the moment rather than trying to anticipate every one of them in advance.

T10 Bespoke Luxury Custom IEM

There are 150 of these made each year. That’s it. Each one starts as a conversation, not a product listing, where you sit down with the team and work through finishes, metals, and sculptural forms until the result is entirely yours. The chassis is ceramic zirconium, machined to roughly half the volume of an AirPod and assembled with micro-screws and gaskets the way a Swiss watchmaker approaches a movement. Some configurations arrive in mirror-polished obsidian black YTPZ ceramic with 24k rose-gold plating over solid bronze. Others wear navy-blue Cerakote over polished zirconia with hand-rubbed tung-oil burl wood inserts. The newest collection reaches into diamonds, amethysts, and fine metals, with one-of-a-kind builds priced past $115,000. These aren’t earbuds that happen to look expensive. They’re objects you’d keep in a case and hand down.

Designer: EAR Micro, Klipsch

What separates the T10 Bespoke from anything else isn’t just the materials. It’s what’s packed into that tiny chassis. An ARM primary processor runs alongside a dedicated co-processor, with twin Cadence Tensilica HiFi DSPs handling the signal chain. You get selectable amplifier modes: Class D for efficiency, Class A/B when you want the fuller analog character. The Sonion Balanced Armature driver, tuned with Klipsch from the X10 lineage, feeds from a signal path that supports Sony LDAC at 24-bit/96kHz. That resolution matters because the hardware can actually deliver it. The PCB inside spans less than 1.13 square centimeters, with folding wings to fit the geometry. It’s the kind of engineering that usually stays behind a rack somewhere. Here it’s in your ear.

The interaction layer is equally thoughtful. Bragi OS powers the whole thing, supporting touch controls, voice commands, and head-motion gestures so you rarely have to reach for your phone. Battery life runs 8 to 9 hours per earbud, stretching past 30 hours with the case, and a 15-minute fast charge gets you to 85%. ANC is tuned in-house, and the founder calls it best in class, which is a claim that holds up in context, given the hardware underneath it. The deeper point is that this isn’t a product built to a price point or a roadmap. The chassis is replaceable. The battery is replaceable. The shell is replaceable. You’re not buying a device with a two-year lifespan. You’re buying something designed to stay with you, improve over time, and still be relevant long after everything else has been recycled.

Lenovo AI Workmate Concept

Most AI assistants live inside a screen, which means interacting with them still involves picking up a device, unlocking it, and navigating to something. Lenovo’s AI Workmate Concept takes a different position, literally: it sits on your desk as a physical object, a spherical head on an articulated arm mounted on a circular base, designed to be always present and always on without requiring you to go looking for it.

Designer: Lenovo

The design is built around natural interaction rather than typed commands or app interfaces. It responds to voice, gesture, and writing, with on-device AI processing inputs locally for privacy. The more distinctive capability is spatial output: the Workmate can project content directly onto a nearby surface, turning a desk or wall into a temporary display for documents, presentations, or notes. It also handles practical business tasks like scanning and summarizing documents and assisting with content creation, positioned as a desk companion rather than a novelty.

The physical form is what makes the concept worth paying attention to as a design argument. The spherical head, articulated arm, and glowing base ring give the device a clear presence and orientation, somewhere between a desk lamp and a friendly robot, without tipping into either. It acknowledges you spatially rather than waiting to be summoned from a notification panel. Whether a desk companion with animated eyes and a projector becomes something people actually want next to their laptops is the real design question Lenovo is exploring here, and MWC 2026 was its first public test of that answer.

Huawei Mate 80 Pro Max

Huawei’s Mate series has always been the line where the company makes its clearest design statements, and the Mate 80 Pro Max carries that further with a body that steps away from the fiber-reinforced plastic back of the standard Pro in favor of an aluminum alloy construction throughout. The result is a phone with more physical presence and a slightly larger footprint. Both share the same Dual Space Rings camera module design that has become the Mate family’s most recognizable feature, two concentric rings framing the rear cameras in a configuration that reads as intentional rather than incidental.

Designer: Huawei

The display on the Pro Max stretches to 6.9 inches while keeping the same LTPO OLED panel with 1440Hz PWM dimming and Kunlun Glass 2 protection. Powered by the same Kirin 9030 Pro chipset in their top configurations, the Max differentiates itself through physical scale and materials rather than raw internals. The battery also steps up to 6000mAh, though paired with the same 100W wired charging. The color options shift too: where the Pro comes in Black, White, Green, and Gold, the Max trades the softer tones for Black, Silver, Blue, and Gold.

What the Mate 80 Pro Max represents is a familiar kind of product logic: take the established design, make it bigger, make the materials more premium, and add the battery capacity to match the larger chassis. The Dual Space Rings identity carries across both models intact, so the design conversation between the two is less about direction and more about degree. With a significantly higher price tag, the Pro Max is a considered step up for buyers who want the full physical expression of what the Mate 80 series is about.

Honor Magic V6 Foldable Phone

Foldable phones have spent years promising the future while feeling fragile, bulky, and anxious about rain. Honor’s design obsession with the Magic V6 was to solve all three problems at once without letting any of them compromise the others. The result is an 8.75mm folded profile, putting it in iPhone-thin territory, paired with a 6,660mAh silicon-carbon battery, the largest ever fitted into a foldable at this thickness.

Designer: Honor

That battery figure is where the real engineering story lives. Silicon-carbon cells pack more energy into less space than conventional lithium-ion, but higher silicon content creates expansion stress that can crack cells over charge cycles. Honor’s fifth-generation silicon-carbon material, developed with ATL, reaches 25% silicon content. That’s what allows the capacity and the thinness to coexist without one compromising the other.

The Magic V6 also carries both IP68 and IP69 ratings, a first for any foldable. IP68 handles submersion; IP69 covers high-pressure, high-temperature water jets. Getting both on a device with a moving hinge, a crease depth reduced by 44% over the previous generation, and display reflectivity as low as 1.5% reflects how much structural engineering went into something that still opens and closes hundreds of times daily.

Lenovo ThinkBook Modular AI PC Concept

Laptops have been making the same basic promise for decades: here is one device that does everything, carry it everywhere. The trade-off has always been that “everything” means compromises: a screen too small for real work, a body too thick for a bag, a keyboard that disappears when you want a tablet. Lenovo’s ThinkBook Modular AI PC Concept at MWC 2026 takes a different position entirely, built around a “carry small, use big” philosophy that lets a single 14-inch base system reconfigure itself depending on where you are and what you’re doing.

Designer: Lenovo

The modularity here is practical rather than speculative. A secondary display attaches to the top cover for face-to-face sharing or closed-lid use, sits alongside the base on an integrated kickstand as a portable travel monitor in portrait or landscape, or swaps with the keyboard to create a dual-screen setup stretching the combined workspace to roughly 19 inches. The Bluetooth keyboard detaches entirely. I/O ports, including USB Type-A, USB Type-C, and HDMI, are interchangeable depending on what a given day requires. Pogo-pin connectors handle power and data transfer between modules, keeping the system stable and self-contained throughout all the rearranging.

What makes the ThinkBook Modular concept worth paying attention to as a design argument is the restraint behind it. Rather than trying to anticipate every scenario inside one fixed chassis, Lenovo accepted that the device itself should be the smallest possible useful thing and let the user decide what gets added to it. A laptop that adapts to the workflow instead of the other way around is an old idea that has never quite landed in a form people actually use. This concept is still exactly that, a proof of concept with no confirmed release date, but the underlying logic is more considered than most modular hardware that has come before it.

Leica Leitzphone by Xiaomi

Xiaomi has made plenty of capable camera phones, but the Leica Leitzphone takes a different approach entirely, treating the smartphone less like a spec competition and more like an extension of Leica’s century-old obsession with optical craft. The silver aluminum frame carries tactile knurling, a rotatable camera ring, and the iconic Leica Red Dot, sitting against a black fiberglass back pulled directly from classic Leica rangefinder design language.

Designer: Xiaomi x Leica

That camera system is where the conviction becomes most legible. A 1-inch sensor with LOFIC HDR technology handles the main shooting duties, alongside a 200MP telephoto at 75 to 100mm and a 14mm ultra-wide. The rotatable physical camera ring, assignable to focal length, focus, or bokeh, gives the experience a tactile dimension that touchscreen sliders simply cannot replicate. Thirteen Leica color styles and a dedicated Essential Mode recreating the Leica M9 and M3 look complete the package.

The rest of the hardware keeps pace: Snapdragon 8 Elite Gen 5, a 6.9-inch 3500-nit OLED display, and a 6000mAh battery with 90W wired charging. The Leica UX layer goes further than a cosmetic theme, reshaping system fonts, icons, and widgets into a coherent visual identity rooted in Leica’s design language. For anyone who has wanted smartphone photography to feel less like operating software and more like handling a real camera, this is the most direct answer yet.

TCL Tbot Smartwatch Desktop Companion for Kids

Kids’ smartwatches have gotten good at keeping children connected to parents while they’re out, but they go dark the moment they come off the wrist. That’s the gap TCL is trying to close with the Tbot, a magnetic desktop dock that pairs with TCL’s kids’ watches, like the MoveTime MT48, to keep the experience going at home during charging. Rather than letting the device sit idle on a nightstand, the Tbot turns that downtime into something more purposeful.

Designer: TCL

The companion functions as an AI assistant shaped around a child’s daily rhythm, setting wake-up alarms, bedtime reminders, and Pomodoro-style study timers through age-appropriate guidance. It also doubles as a learning partner for guided discovery, a sleep companion that tells bedtime stories, and a parental alert hub that sends configurable notifications when parents need to stay in the loop. The idea is continuity between the outdoors and the home, with the watch and dock working as two parts of the same connected experience.

TCL is positioning the Tbot as a concept for now, still in its development phase while the company works through applicable regulations around AI features for children. That measured approach actually makes sense given the audience, since parental permission and age-appropriate guardrails are built into its design from the start. Getting that balance right between a helpful AI companion and appropriate boundaries for kids is exactly the kind of design problem worth taking slowly.

Lenovo Yoga Book Pro 3D Concept

3D creation on a laptop has always involved a certain amount of peripheral management, between mice, styluses, and the occasional spacemouse bolted to the side of the desk. The Yoga Book Pro 3D Concept takes aim at that setup by building a glasses-free 3D display directly into a dual-screen laptop, letting creators view depth, form, and spatial relationships on screen without any additional equipment. Lenovo’s AI software handles 2D to 3D conversion on the upper PureSight Pro Tandem OLED display, and can even generate an environment around the converted object on command.

Designer: Lenovo

The dual-screen concept laptop also offers a rather interesting interaction feature. Zero-touch gestures read hand movements in front of the RGB camera, letting users zoom and rotate 3D objects without touching the screen at all. The lower display acts as a touch surface with snap-on physical pads that pop up adjustment controls, like lighting and viewing angle, wherever they’re placed. It’s a workflow designed to keep creators in the work rather than hunting through menus.

As a concept, the Yoga Book Pro 3D is still a proof of intent rather than a product you can buy, but it represents a genuinely specific design problem solved with unusual conviction. Glasses-free 3D displays have struggled to convince outside of niche applications, so how well the actual display holds up for extended professional use will be the real test when this moves closer to production.

Vivo X300 Ultra and Camera Cage

Most smartphone camera rigs are an afterthought, a collection of third-party mounts and adapters held together by optimism. Vivo is taking a different approach with the X300 Ultra’s dedicated Camera Cage, a pro-grade frame designed specifically around the phone rather than adapted from generic cinema accessories. Dual grip handles, cold shoe mounts, quick-release ports, and dedicated physical buttons for shutter and zoom come built into one coherent system.

Designer: vivo

The cage is also where the ZEISS Telephoto Extender Gen 2 Ultra slots in, an APO-certified lens co-engineered with ZEISS that pushes the X300 Ultra to a 400mm equivalent focal length with full 200MP optical output. Gimbal-grade optical image stabilization and motion-tracking focus sit underneath all of that reach. An integrated multi-level cooling fan handles thermal load during extended video shoots, solving the problem that turns most “pro mobile video” sessions into a race against an overheating warning.

What makes the setup genuinely interesting is the conviction behind it. Vivo isn’t treating the cage as a novelty accessory but as the central argument for how a smartphone can function as a serious production tool. The phone alone is one thing; inside this cage, with the extender attached and physical controls in hand, it becomes a fundamentally different experience.

TECNO x Tonino Lamborghini TAURUS Mini Gaming PC

Gaming PCs have never been shy about their presence: big towers, aggressive angles, and enough RGB to illuminate a small runway. The Tonino Lamborghini TECNO TAURUS compresses all of that energy into a mini PC chassis, with an all-metal body, red-accented lighting, and see-through panels that put the water-cooling loop on full display. It’s unapologetically theatrical, and that’s clearly the entire point of the exercise.

Designer: TECNO

Under that showpiece exterior sits an Intel Core i9-13900HK with 14 cores running up to 5.4GHz, alongside an NVIDIA GeForce RTX 5060 on the Blackwell architecture at 145W total graphics power. A roughly 10,000mm² pure copper water-cooled cold plate and triple-fan setup handle thermals in that compact body. A real-time performance monitor on the chassis lets you watch CPU and GPU loads without opening a single app, which feels very on-brand for a machine this self-aware.

TECNO’s first collaboration with Tonino Lamborghini positions this as a desktop you’d put on your desk rather than under it, treating the machine as a design object as much as a gaming rig. Fifteen ports and WiFi 6E keep the practical side well covered. What’s genuinely interesting is how much of the design budget went into making the cooling system the visual centerpiece, turning thermal engineering into the main aesthetic argument.

Unihertz Titan 2 Elite QWERTY Phone

Physical keyboard phones never really died; they just quietly retreated to a corner of the internet where people complained loudly about touchscreen autocorrect. Unihertz has been serving that corner for years with its Titan series, and the Titan 2 Elite is the most refined version yet. Gone is the chunky frame of its predecessor; in its place comes a slimmer 75mm-wide body, a 4.03-inch 120Hz AMOLED display with a punch-hole camera, and the same four-row QWERTY keyboard that the series built its following on.

Designer: Unihertz

The keyboard itself doubles as a touchpad, letting you scroll and navigate with a thumb swipe across the keys, a trick carried over from earlier Titans that still feels genuinely useful. Although nothing’s confirmed yet, it’s expected to run on a MediaTek Dimensity 7300 with 12GB of RAM and 512GB of storage, which is a solidly capable mid-range setup for a phone that’s really selling you on input, not raw performance. More notable is the software commitment: Android 16 out of the box, updates promised through Android 20, and security patches running until 2031, a rare five-year horizon for a device in this price range.

The Titan 2 Elite arrives at an interesting moment, with the Clicks keyboard pulling attention toward keyboard accessories for iPhones and Unihertz countering with a dedicated standalone device instead. There’s a meaningful difference between treating the keyboard as an add-on and building an entire phone around it, and that’s the bet Unihertz is making here.

The post Yanko Design’s Best of MWC 2026: When Engineering Gets Obsessive first appeared on Yanko Design.

This AI Desk Terminal Has a Screen, Knob, and Voice Control

AI has become a permanent fixture in how we work, but accessing it still feels strangely clumsy. Most of the time, it means opening yet another browser tab, typing a prompt into a chat window, waiting for a response, then copying it somewhere else. The irony is thick: tools designed to save time end up buried under the same pile of windows and notifications they were supposed to help manage.

The DECOKEE Quake approaches this problem sideways, and the solution is physical. It is a desktop terminal built around an 8.88-inch ultra-wide IPS touchscreen and a single rotary control knob, designed to sit alongside a keyboard rather than compete with the monitor above it. Everything about the form factor suggests a device that wants to be glanced at, tapped, and spoken to, not stared at for hours.

Designer: DECOKEE

Click Here to Buy Now: $279 (22% off the $359 retail price). Hurry, only 66/500 left! Raised over $231,000.

Pick it up and the construction registers immediately. The body is CNC-machined aluminum alloy with an anodized matte finish, a material choice that gives the Quake a density and coolness that plastic peripherals simply cannot replicate. A transparent backplate on the rear adds a subtle design signature, while the adjustable stand lets the screen tilt anywhere from flat to 60 degrees. At roughly 800g, it has enough heft to stay planted on a desk without feeling like an anchor.

That ultra-wide screen runs at 1920×480 resolution with a brightness of at least 450 nits, and its unusual aspect ratio turns out to be a deliberate design decision. Rather than mimicking a small monitor, the panel is shaped for control surfaces: rows of customizable touch shortcuts, status dashboards, system stats, and meeting interfaces laid out horizontally. The rotary knob beside it offers infinite rotation with a push-button click and an RGB light ring that changes color based on what mode the Quake is operating in, turning a simple input device into a status indicator.

Where the Quake earns its “AI copilot” label is in meetings. Tap a button, and it begins recording through a built-in far-field microphone with noise reduction, then auto-generates a structured transcript and summary when the call ends. Ten summary templates let the output match the context, whether it is a standup, a client call, or a brainstorm. Real-time translation covers 17 languages, and a system-level mic mute button works across every app on the computer, not just Zoom or Teams.

Beyond meetings, holding the knob and speaking activates a conversational AI layer with over 100 configurable assistant roles. Ask it to generate a shortcut layout for Photoshop, and it builds one on screen, ready to use. Ask for a translation, a compliance check, or a math solution, and the response appears on the Quake’s display without ever pulling focus from the main monitor. The same voice input can produce custom wallpapers and emojis, though the novelty of AI-generated desktop art will vary by taste.

The feature list stretches further than expected for a device this compact. A system monitoring mode displays real-time CPU, memory, and network stats. A Discord overlay gives gamers channel and mute controls without alt-tabbing. Home Assistant integration (through API setup) allows single-tap smart home control from the touchscreen. There is even a music player with a vinyl-inspired interface that connects to Spotify or plays local files, which is a charming if unexpected addition to a productivity device.
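The Home Assistant hookup is presumably built on Home Assistant’s standard REST API, where a single tap on the touchscreen maps to one authenticated service call. DECOKEE hasn’t published its implementation, so this is only a minimal Python sketch of what such a tap could send; the helper name `build_service_call`, the hostname, and the token are illustrative assumptions, not confirmed details:

```python
import json

# Hypothetical helper: assembles the pieces of a Home Assistant REST
# service call (POST /api/services/<domain>/<service>). A "single-tap"
# shortcut would fire one of these per button.
def build_service_call(base_url, token, domain, service, entity_id):
    url = f"{base_url}/api/services/{domain}/{service}"
    headers = {
        # Long-lived access token, created in the user's HA profile.
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"entity_id": entity_id})
    return url, headers, body

url, headers, body = build_service_call(
    "http://homeassistant.local:8123", "LONG_LIVED_TOKEN",
    "light", "turn_on", "light.desk_lamp",
)
# A device-side shortcut would then POST `body` to `url` with `headers`,
# e.g. via urllib.request or the requests library.
```

In Home Assistant terms, `light`/`turn_on` is a domain/service pair, and generating the access token is presumably what the product description’s “API setup” step refers to.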

What makes the Quake interesting as a design object is the underlying argument it makes about where AI belongs on a desk. Not trapped inside a browser tab, not buried in a notification, but sitting in a physical surface with tactile controls and a screen that stays visible. Whether that argument holds up after months of daily use is something only shipped units will answer.

Click Here to Buy Now: $279 (22% off the $359 retail price). Hurry, only 66/500 left! Raised over $231,000.

The post This AI Desk Terminal Has a Screen, Knob, and Voice Control first appeared on Yanko Design.

Lenovo Just Turned the Ugly Desk Hub Into an AI Assistant

Most desks already have too much on them. A laptop, an external monitor, a charging cable snaking toward a phone, maybe a cold cup of coffee that started the morning with good intentions. And somewhere behind all of it is a hub that ties all of it together, which is usually a graceless plastic brick shoved behind something else, forgotten until a port stops working. It’s the least glamorous object in the room, and it knows it.

Lenovo’s AI Work Companion Concept, announced at MWC 2026, takes a different position on that problem, literally and figuratively: it makes a case that the hub doesn’t have to apologize for existing. It sits at the front of the desk as a matte black wedge, display angled toward the person working, looking more like a clock than a piece of connectivity hardware.

Designer: Lenovo

The front display cycles through six clockface styles, from a clean flip-clock layout to an abstract trio of pie-shaped circles, each one designed to read comfortably at a glance without demanding attention. Alongside the time, it surfaces calendar events, port charging status, and a grid of quick-action shortcuts from a single compact footprint.

The hardware underneath that display is a full docking station. One USB-C port delivers 100W to a laptop, another handles 20W phone charging, and two HDMI outputs drive a pair of 4K displays at 60Hz simultaneously. For anyone running a multi-monitor setup, that covers the entire back of the desk without a separate hub involved.

The more unusual part is a cartoon mascot Lenovo calls the Thought Bubble, a bespectacled cloud that lives on the display and manages the AI layer. Tap the large red knob on top, and it pulls tasks and calendar events from across connected devices, then proposes a structured daily plan. It also schedules breaks and monitors screen time, with a weekly “celebration report” summarizing what got done.

The obvious tension is that a device designed to reduce screen fatigue adds another screen to the desk. Whether offloading schedule decisions to a cartoon cloud actually clears mental space, or just relocates the same decisions to a different surface, is a question the concept doesn’t fully answer yet. That’s not a criticism so much as an observation that the idea is still at the stage where it sounds better than it can be proven to work.

What’s harder to argue with is the physical logic. A docking station that also tells the time, tracks the day, and has a programmable knob for whatever shortcut matters most is a more considered object than the plastic brick it replaces. Whether the AI earns its place on the desk is something only daily use can settle.

The post Lenovo Just Turned the Ugly Desk Hub Into an AI Assistant first appeared on Yanko Design.

Lenovo’s AI Desk Robot Has Eyes, Moves, and Watches You Work

There’s a specific kind of loneliness that comes with working alone all day. Not the dramatic kind, just the low-grade awareness that every question you have goes into a chat window, every instruction gets typed into a box, and the thing supposedly helping you has no idea where you’re sitting or what’s on your desk.

Lenovo’s AI Workmate Concept, shown at MWC 2026, takes that gap seriously enough to build a physical object around it. The device is a desk companion in the most literal sense, a spherical head on an articulated arm, rising from a circular base, with animated eyes on its front display that shift and orient as it responds.

Designer: Lenovo

The arm is the most telling design decision, though it isn’t just decorative. Because it moves, the Workmate can orient itself toward whatever is in front of it, a document laid flat, a person leaning back, a wall nearby. That range of motion is what separates it from a smart speaker with a face. It has spatial awareness built into its posture, not just its software.

On the practical side, it handles the kind of work that accumulates quietly throughout a day. Place a document in front of it, and it can scan and summarize the contents. Talk through a rough set of notes, and it can help organize them into something usable. Working on a presentation means the Workmate can assist in structuring the content, pulling from what it already knows about the task at hand through on-device AI processing rather than a cloud connection.

The projection feature is the most speculative part of the concept. Rather than keeping information on a screen, the Workmate can cast content onto a desk surface or wall, which, on paper, turns any flat surface nearby into a secondary display. Whether that’s genuinely more useful than glancing at a monitor, or just a more theatrical way to display the same information, is a fair question that a proof of concept can’t fully answer.

What’s harder to dismiss is the physical language the design uses. The animated eyes aren’t a gimmick in the way that most product “personalities” are. They borrow from the same visual shorthand that makes robots in film immediately readable as attentive or distracted, curious or idle. A status light ring on the base shifts color depending on what the device is doing, adding a peripheral layer of feedback that doesn’t require looking directly at the display. Together, those two elements mean the Workmate communicates state without demanding attention, which is actually a more considered interaction model than most desktop AI tools currently offer.

The deeper question isn’t whether the Workmate works. It’s whether having a robot with eyes watching from the corner of the desk makes the day feel more manageable, or just more observed. That’s not a problem Lenovo can solve with a better arm joint. It’s the kind of thing that only becomes clear once the novelty of the eyes wears off.

The post Lenovo’s AI Desk Robot Has Eyes, Moves, and Watches You Work first appeared on Yanko Design.

Forget Step Counters: Dreame’s New Smart Rings Focus On ECG Reports, Sleep, And Real-Time Emotion Data

On any given game day, millions of us become amateur analysts, dissecting every play and scrutinizing every statistic that flashes across the screen. We track player performance with an almost scientific rigor, celebrating the numbers that signal a win and debating the metrics that lead to a loss. This deep dive into data has fundamentally changed how we watch sports, turning passive viewing into an interactive, analytical experience. Yet, for all the attention we pay to the athletes’ performance, our own physiological journey as spectators has remained completely invisible.

Dreame’s new AI Smart Ring proposes a fascinating shift in perspective, turning the sensor technology usually reserved for athletes inward on the audience. The ring’s most ambitious feature, an AI-powered emotion index, aims to quantify the rollercoaster of being a fan, tracking how your body reacts to every thrilling victory and agonizing fumble. It represents a new frontier for wearables, one less concerned with counting your steps and more interested in mapping your heart’s response to the passions that drive you. It is pro-level analytics for the rest of us.

Designer: Dreame

Instead of launching just one device, Dreame is splitting its ambition into a two-ring strategy, which is a seriously interesting market play. The company is effectively acknowledging that “health tracking” means different things to different people. For some, it is about hard, clinical data and safety nets. For others, it is about lifestyle, self-awareness, and emotional insight. So, rather than making one ring that tries to do everything, they have created two distinct products: the Dreame Health Ring, launching in early March, and the Dreame AI Smart Haptic Ring, which is slated for the second half of the year.

The Dreame Health Ring is the more advanced and serious of the two. This is the one aimed squarely at users who want professional-grade monitoring and peace of mind. Its headline feature is the ability to generate ECG reports on demand, moving it closer to a medical-grade device than a typical fitness tracker. It is built around a core of accurate health monitoring and safety alerts, using AI-driven analysis to flag potential issues. Think of this as the quiet, reassuring guardian, focused on delivering vital health data you can potentially share with a doctor, rather than tracking your mood during a movie.

Landing later this year, the Dreame AI Smart Haptic Ring is the lifestyle-focused sibling. You are looking at a 2.5 mm thin body that is about 7.5 mm wide and weighs a featherlight 5.2 grams. The outside is a microcrystalline zirconia nano-ceramic with a Mohs hardness of 8, while the inner band is a slick antibacterial alloy. This ring is all about AI-driven health and sleep tracking, but with a focus on interpretation and daily living. It is designed to be the wearable you forget you are even wearing.

Packed inside that tiny frame is the trifecta of modern health sensors: PPG for heart rate and SpO₂, a temperature sensor, and an accelerometer. This all feeds into the AI sleep algorithms that Dreame claims can nail your REM, deep, and light sleep stages with less than a 5 percent error rate. The AI ring tracks all your key vitals 24/7 and holds about a week of data offline, which is exactly how these trackers should work. But where the Health Ring focuses on ECGs, the AI ring uses this data to power its more experimental features.

This is where we get to the AI ring’s headline feature: the emotion sensing. It claims it can generate a real-time emotion index with 92 percent accuracy. Now, is it going to replace your therapist? Absolutely not. But that is not the point. The real value is in the biofeedback. It is a tool for spotting patterns, for seeing a data-driven trace of how your body reacted to a stressful day while your brain was telling you everything was fine. It is a fascinating, and potentially humbling, new layer of self-awareness that separates it from the more advanced Health Ring.

The design of the AI ring is meant to be invisible. It is a screenless, silent loop of ceramic. Instead of a screen, you get a tiny vibration motor inside for its AI Haptic Alerts, a subtle tap on your finger for a call or message, not a jarring buzz that makes everyone in the room look at you. Those haptics also support tap gestures for controlling music or snapping a photo. The battery life reflects this always-on philosophy, with about a week on the ring itself and a charging case that stretches that to a claimed 100-plus days of use before you need a wall outlet.

So why are we seeing this two-ring strategy pop up around Championship Sunday? It is a smart move. It frames the brand not as just another gadget maker, but as a company thinking deeply about the future of personal health. We are obsessed with the analytics of pro athletes, tracking every metric to understand their performance. Dreame is betting that we are finally ready to apply that same level of nerdy obsession to ourselves, and by offering two distinct paths, they are letting us choose just how deep we want that data to go.

The post Forget Step Counters: Dreame’s New Smart Rings Focus On ECG Reports, Sleep, And Real-Time Emotion Data first appeared on Yanko Design.

AI Device Turns Your Mental Health Data Into a Living Garden

There’s something deeply broken about the way we interact with technology. We scroll mindlessly, chase notifications, and bounce between tabs like caffeinated pinballs. Our devices constantly demand our attention, rewarding speed over substance, reaction over reflection. But what if a piece of technology asked you to slow down instead?

That’s the radical premise behind Cognitive Bloom, a speculative AI device conceived by Map Project Office in collaboration with Chanwoo Lee from Lovelace Research. Lee, who’s also a visiting lecturer at Imperial College London and the Royal College of Art, is reimagining what personal AI could become if we designed it with the same care we give to cultivating a garden.

Designers: Chanwoo Lee, Map Project Office, Lovelace Research

The concept couldn’t arrive at a more critical moment. With mounting evidence around cognitive decline and digital burnout, Cognitive Bloom offers an alternative vision for our relationship with artificial intelligence. Instead of optimizing for efficiency or speed, it encourages something we’ve almost forgotten how to do: genuine self-reflection.

At the heart of Cognitive Bloom is a beautiful metaphor that makes complex data feel alive. The device uses an ambient display that transforms your mental wellness data into a virtual ecosystem. Areas where you’re struggling show up as yellowing leaves. New buds emerge where you’re beginning to grow. When you’re truly thriving in an aspect of your wellbeing, those buds finally bloom. It’s an intuitive visualization that breaks down the typically overwhelming data around mental health. Rather than confronting you with charts, percentages, or clinical assessments, Cognitive Bloom speaks in a language we instinctively understand. Plants need water, sunlight, and attention. So do we.

The device functions as a domestic companion that nurtures what the designers call “a new ritual of self-reflection.” It’s designed to help users reconnect with what genuinely matters, fostering the creation of new mental pathways through thoughtful engagement rather than passive consumption. This approach stands in stark contrast to how most AI products work today. Current AI interfaces typically emphasize quick answers, instant gratification, and frictionless productivity. Cognitive Bloom deliberately introduces friction, but the kind that matters. It’s the friction of pausing. Of considering. Of being present with your thoughts rather than racing past them.

The gardening metaphor extends throughout the entire experience. Just as tending a garden requires patience, consistency, and presence, Cognitive Bloom asks users to take a respite from digitally overstimulated lifestyles. It creates space for genuine contemplation, curiosity, and self-discovery, qualities that feel increasingly rare in our current technological landscape. What makes this project particularly compelling is how it uses human-centered design to foster a deeper connection not just to ourselves, but to our digital environment. Too often, technology feels like something that happens to us, an external force constantly pulling us in a hundred directions. Cognitive Bloom suggests technology could instead become a tool for coming home to ourselves.

The collaboration between Map Project Office and Lovelace Research brings together expertise in design strategy and human-centered AI research, creating a vision that feels both technically informed and emotionally resonant. As a speculative project, Cognitive Bloom doesn’t need to solve every practical challenge of implementation. Instead, it asks the more important question: What if we actually designed technology the way we cultivate gardens, with care, patience, and presence?

That question alone is worth sitting with. In a culture obsessed with growth hacking, viral moments, and exponential scaling, the steady rhythm of gardening offers a different model entirely. Gardens can’t be rushed. They respond to seasons, weather, and the particular needs of different plants. They require observation and adaptation, not standardized solutions.

Cognitive Bloom represents a growing movement in design and technology that’s pushing back against the extractive, attention-harvesting model that dominates our digital lives. It joins other projects reimagining what ethical, human-centered AI could actually look like when we design for wellbeing instead of engagement metrics. Whether Cognitive Bloom eventually becomes a physical product or remains a provocative concept, it’s already succeeded in making us reconsider our relationship with AI and personal data. Sometimes the most important innovations aren’t the ones that disrupt markets but the ones that disrupt our assumptions about what technology should be for.

The post AI Device Turns Your Mental Health Data Into a Living Garden first appeared on Yanko Design.

Bring The Touch Bar Back… And Maybe Put An Intelligent Siri Or Gemini On It

Sounds radical, doesn’t it? The Touch Bar was such a waste of space on the MacBook Pro when it was first introduced exactly a decade ago in 2016. It shipped with a lot of potential but barely any real-world use, and Apple even considered swapping it out for a slot that housed the Apple Pencil back in 2021. While that feature never really came to pass, something else happened in 2021 that blew everyone’s minds – OpenAI’s DALL·E. For a lot of people, this was the first time you could just ‘tell’ an AI to make an image for you and it would. It was the birth of mainstream generative AI, and only a year later, OpenAI would break the internet with ChatGPT.

This is also around the time that Apple quietly killed the Touch Bar, but here’s my opinion… bring it back. Maybe not on the MacBook, but the Touch Bar definitely deserves a place on any independent wireless keyboard. With AI LLMs, progressive web apps, widgets, and vibe-coding going mainstream, a Touch Bar on a keyboard finally makes sense. It’s a place for your AI agent to live, alongside tasks, shortcuts, toolbars, and widgets. Apple pioneered the Touch Bar, but one could argue they were way too early to realize its potential. Now, a concept keyboard by Eslam Mohammed and Ahmed Yassen shows how the Touch Bar should be resurrected.

Designers: Eslam Mohammed & Ahmed Yassen

Mohammed and Yassen’s LUMO x700 keyboard comes with a few tricks up its sleeve. Sure, it sports a sleek, metal-forward Magic Keyboard-inspired design, but the thing also packs an end-to-end Touch Bar that’s about as tall as your standard key, making it a lot more usable than the actual Touch Bar, which was just as slim as the function key row. However, that isn’t all there is to this. A snap-on module turns the keyboard into a music player so you aren’t listening to tunes on your iMac or laptop’s fairly tinny speakers. All in all, this turns your keyboard into something a little more versatile than just ‘something you type on’. It now has an identity of its own, and can channel a level of productivity you’d only get with an Elgato-style accessory.

But wait! That modular soundbar isn’t just keyboard-dependent! It works independently too, allowing you to place it underneath the monitor or anywhere else on your desk for a wireless sound experience. The dual speakers fire stereo audio, buttons and a knob help tweak volume and playback, and the part that attaches to the LUMO x700 keyboard, well, there’s a hidden light-bar there to give your desk some ambient lighting. It’s all cleverly designed to ensure the module isn’t useless on its own. However, that Touch Bar is my predominant focus.

Why does a Touch Bar matter now more than ever? Well, we’re all multitasking, we’re all looking for extra real estate for displays, and almost all of us are running agents of some kind to automate tasks. That’s what this Touch Bar is for. Shortcuts to apps live in the center, widgets on the left, and maybe an AI chatbot on the right that you can deploy to talk to, ask questions to, or delegate tasks to. Claude just debuted a desktop-controlling agent called Claude Cowork that can run tasks and perform duties on your desktop on your command, and the infamous OpenClaw’s been taking the internet by storm for doing pretty much the same thing too. Obviously, such an AI will need to be vetted, and probably contained by a set of restrictions so it doesn’t go around leaking your data on a ‘Reddit for AI Agents’ or spending your cash (as OpenClaw has done in a few instances).

The rest of the Touch Bar experience goes on as originally intended. Active programs can reside within the bar, like a recorder interface, the player for music or video apps, allowing you to seek to different parts of a song/video, or even the emoji keyboard that lets you easily cycle through emojis before pasting them. The potential is endless, and while independent Touch Bars like this one exist, we need to design one for an era of AI agents, applets, shortcuts, and widgets. It really is about time.

The post Bring The Touch Bar Back… And Maybe Put An Intelligent Siri Or Gemini On It first appeared on Yanko Design.

Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It

Mark Zuckerberg changed his company’s name to Meta in October 2021 because he believed the future was virtual. Not just sort-of virtual, like Instagram filters or Zoom calls, but capital-V Virtual: immersive 3D worlds where you’d work, socialize, and live a parallel digital life through a VR headset. Four years and roughly $70 billion in cumulative Reality Labs losses later, Meta is quietly dismantling that vision. In January 2026, the company laid off around 1,500 people from its metaverse division, shut down multiple VR game studios, killed its VR meeting app Workrooms, and effectively admitted that the grand bet on virtual reality had failed. Investors barely blinked. The stock went up.

The official line now is that Meta is pivoting to AI and wearables. Zuckerberg spent much of 2025 building what he calls a “superintelligence” lab, hiring top-tier AI talent with eye-watering compensation packages that are now one of the largest drivers of Meta’s 2026 expense growth. The company released Llama models that benchmark decently against OpenAI and Google, embedded chatbots into WhatsApp and Instagram, and talks constantly about “AI agents” and “new media formats.” But from a product and profit perspective, Meta’s AI strategy looks suspiciously like its metaverse strategy: lots of spending, vague promises, and no breakout consumer experience that people actually love. Meanwhile, the thing that is quietly working, the thing people are buying and using in the real world, is a pair of $300 smart glasses that Meta barely talks about. If this sounds like a pattern, that’s because it is. Meta has now misread the future twice in a row, and both times the answer was hiding in plain sight.

The Metaverse Was a $70 Billion Fantasy

Reality Labs has been hemorrhaging money since late 2020. As of early 2026, cumulative operating losses sit somewhere between $70 and $80 billion, depending on how you slice the quarters. In the third quarter of 2025 alone, Reality Labs posted a $4.4 billion loss on $470 million in revenue. For 2025 as a whole, the division lost more than $19 billion. These are not rounding errors or R&D investments that will pay off next year. These are structural losses tied to a product category, VR headsets and metaverse platforms, that the market simply does not want at the scale Meta imagined.

The vision sounded compelling in a keynote. You would strap on a Quest headset, meet your coworkers in a virtual conference room with floating whiteboards, then hop over to Horizon Worlds to hang out with friends as legless avatars. The problem was that almost no one wanted to do any of that for more than a demo. VR remained a niche gaming platform with occasional fitness and entertainment use cases, not the next paradigm shift in human interaction. Zuckerberg kept insisting the breakthrough was just around the corner. He was wrong, and the January 2026 layoffs and studio closures were the formal acknowledgment that Reality Labs as originally conceived was dead.

The irony is that Meta actually had a potential killer app inside Reality Labs, and it murdered it. Supernatural, a VR fitness game that Meta acquired for $400 million in 2023, was one of the few pieces of Quest software that generated genuine user loyalty and recurring revenue. People who used Supernatural regularly described it as the most effective home workout they had ever done, combining rhythm-based gameplay with full-body movement in a way that treadmills and Peloton bikes could not replicate. It had a subscription model, a dedicated community, and real retention. In January 2026, Meta moved Supernatural into “maintenance mode,” which is corporate speak for “we fired almost everyone and it will get no new content.” If you are trying to prove that VR has mainstream utility beyond gaming, fitness is one of the most obvious wedges. Meta had that wedge, and it chose to kill it in the same round of cuts that shuttered studios working on Batman VR games and other prestige titles. The message was clear: Zuckerberg had lost interest in Quest, even the parts that worked.

The AI Bet That Looks Like the Metaverse Bust 2.0

After spending years insisting the future was virtual worlds, Meta pivoted hard to AI in 2023 and 2024. Zuckerberg now talks about AI the way he used to talk about the metaverse: with sweeping language about paradigm shifts and transformative platforms. The company stood up an AI division focused on building what it calls “superintelligence,” hired aggressively from OpenAI and Anthropic, and made technical talent compensation the second-largest contributor to Meta’s 2026 expense growth behind infrastructure. This is not a side project. Meta is spending billions on AI research, training, and deployment, and Zuckerberg expects losses to remain near 2025 levels in 2026 before they start to taper.

From a technical standpoint, Meta’s AI work is solid. The Llama family of models is legitimately competitive with GPT-4 class systems and has found real adoption among developers who want open-source alternatives to OpenAI and Google. Meta’s internal AI is also driving real business value in ad targeting, content ranking, and moderation. Those systems work, and they contribute directly to Meta’s core revenue. But from a consumer product perspective, Meta’s AI feels scattered and often unnecessary. The company has embedded “Meta AI” chatbots into WhatsApp, Instagram, Messenger, and Facebook, none of which feel like natural places for a chatbot. Instagram’s feed is increasingly stuffed with AI-generated images and engagement bait that users actively complain about. Meta has launched character-based AI bots tied to influencers and celebrities, and approximately no one uses them. The gap between “we have impressive models” and “we have a product people love” is enormous, and it is the exact same gap that sank the metaverse.

What Meta is missing, again, is product intuition. OpenAI built ChatGPT and made it feel like the future because the interface was simple, the use cases were obvious, and it delivered consistent value. Google integrated Gemini into Search and productivity tools where users were already working. Meta, by contrast, seems to be throwing AI at every surface it controls and hoping something sticks. Zuckerberg talks about “an explosion of new media formats” and “more interactive feeds,” which in practice means more algorithmic slop and fewer posts from people you actually know. Analysts are starting to notice. One Bernstein note from early 2026 argued that the “winner” criteria in AI is shifting from model quality to product usage, which is a polite way of saying that having a great model does not matter if your product is annoying. Meta has a great model. Its products are annoying.

The financial picture is also murkier than Meta would like to admit. Reality Labs is still losing close to $20 billion a year, and while AI is not a separate reporting segment, the talent and infrastructure costs are clearly rising. Meta’s overall revenue growth is strong, driven by advertising, but the company is not yet showing a clear path to AI profitability outside of ‘ad optimization’. That puts Meta in the awkward position of having pivoted from one unprofitable moonshot (metaverse) to another potentially unprofitable moonshot (consumer AI products) while the actual profitable parts of the business, social ads and engagement, keep the lights on. This is a pattern, and it is not a good one.

The Smart Glasses Lead That Meta Is Poised to Lose

Meta talks about the Ray-Ban smart glasses constantly. Zuckerberg calls them the “ultimate incarnation” of the company’s AI vision, and the pitch is relentless: sales more than tripled in 2025, the glasses represent the future of ambient computing, this is the post-smartphone platform. The problem is not that Meta is ignoring the glasses. The problem is that Meta is about to squander a massive early lead, and the competition is closing in fast. 2026 is shaping up to be a blockbuster year for smart glasses. Samsung confirmed its AR glasses are launching this year. Google is releasing its first pair of smart glasses since 2013, an audio-only pair similar to the Ray-Ban Meta glasses. Apple is reportedly pursuing its own smart glasses and shelved plans for a cheaper Vision Pro to prioritize the project. Meta dominated VR because it was early, cheap, and had no real competition. In smart glasses, that window is closing fast, and the field is getting crowded with all kinds of names, from smaller players like Looktech and Xgimi’s MemoMind to mid-sized brands like Xreal, to even larger ones like Google, TCL, and Xiaomi.

The Ray-Ban Meta glasses work because they are simple and focused. They take photos and videos, play music, make calls, and provide real-time answers through an AI assistant. Parents use them to record their kids hands-free. Travelers use them for translation. The form factor, actual Ray-Ban Wayfarers that cost around $300, means they do not scream “I am wearing a computer on my face.” This is the rare Meta hardware product that feels intuitive rather than forced, and it is selling because it solves boring, everyday problems without requiring users to change their behavior.

Then Meta made a critical mistake. To use the glasses, you have to route everything through the Meta AI app, which means you cannot simply use the hardware without engaging with Meta’s AI-slop ecosystem. Want to access your photos? Meta AI. Want to tweak settings? Meta AI. The app is the mandatory gateway, and it is stuffed with the same kind of algorithmic recommendations and AI-generated suggestions that clutter Instagram and Facebook. Instead of letting the glasses be a clean, utilitarian tool, Meta is using them as another vector to push its AI products. Google and Samsung are not going to make that mistake. Their glasses will integrate with Android XR and existing ecosystems without forcing users into a single AI app. Apple, if and when it launches, will almost certainly take a similar approach: clean hardware, seamless OS integration, optional AI features. Meta had a head start, Ray-Ban branding, and a product people actually liked. It is on track to waste all of that by prioritizing AI evangelism over product discipline, and the competition is going to eat its lunch.

What Happens When You Chase Narratives Instead of Products

The pattern across metaverse and AI is that Meta keeps betting on big, abstract visions rather than iterating on the things that work. Zuckerberg is a narrative-driven founder. He wants to define the future, not respond to it. That impulse gave us Facebook in 2004, when no one else saw the potential of real-identity social networks, but it has led Meta astray repeatedly in the 2020s. The metaverse was a narrative, not a product. The idea that billions of people would strap on headsets to work and socialize in 3D was always more science fiction than product roadmap, but Zuckerberg committed so hard to it that he renamed the company.

AI feels like the same mistake. The narrative is that foundation models and “agents” will transform every part of computing, and Meta wants to be seen as a leader in that transformation. The actual products, chatbots in WhatsApp and AI-generated feed content, do not meaningfully improve the user experience and in many cases make it worse. Meanwhile, the thing that is working, smart glasses, does not fit cleanly into the AI or metaverse narrative, so it gets less attention and investment than it deserves. Meta’s 2026 strategy, “shifting investment from metaverse to wearables,” is a tacit admission of this, but it is couched in language that still emphasizes AI rather than the hardware itself.

The other pattern is that Meta is willing to kill its own successes if they do not fit the broader narrative. Supernatural, the hit VR fitness app Meta acquired, was working. It had subscribers, retention, and cultural momentum within the VR fitness community. It was also a relatively small, specific product rather than a platform play, and that made it expendable when Meta decided to scale back Reality Labs. The same logic applies to Quest more broadly. The headset had carved out a niche in gaming and fitness, and with sustained investment in content and ecosystem development, it could have grown into a meaningful adjacent business. Instead, Meta is deprioritizing it because Zuckerberg has decided the future is AI and lightweight wearables. That might turn out to be correct, but the way Meta is executing the pivot, by shuttering studios and putting products in maintenance mode rather than spinning them out or finding partners, suggests a lack of product discipline.

Why Smart Glasses Might Actually Be the Next Facebook

If you step back and ask what Meta is actually good at, the answer is not virtual reality or language models. Meta is good at building social products with massive scale, capturing and distributing content, and monetizing attention through ads. The Ray-Ban Meta glasses fit all of those strengths. They make it easier to capture photos and video, which feeds into Instagram and Facebook. They use AI to provide contextual information, which ties into Meta’s model development. And they are a physical product that people wear in public, which is a form of distribution and branding that Meta has never had before.

The bigger story is that smart glasses as a category are exploding, and Meta happened to be early. It is not just Samsung, Google, and Apple entering the space. Meta itself is expanding the Ray-Ban line with Displays (which adds a heads-up display) and partnering with Oakley on HSTN, a sportier model aimed at action sports. Google is teaming up with Warby Parker for its glasses, which gives it instant credibility in eyewear design. And then there are the startups: Even Realities, Xiaomi, Looktech, MemoMind, and dozens more, all slated for 2026 releases. This feels exactly like the moment AirPods sparked the true wireless earbud movement. Apple defined the format, then everyone from Samsung to Sony to no-name brands flooded the market, and now you can buy HMD ANC earbuds for $28. Smart glasses are following the same trajectory, which means the form factor itself is validated, and Meta’s early lead matters less than whether it can keep iterating faster than everyone else.

The other underrated piece is that having an instant camera on your face is genuinely useful in ways that VR headsets never were. People are using Ray-Ban Meta glasses as GoPro alternatives while skateboarding, cycling, and doing action sports, because POV capture without holding a phone or mounting a camera is frictionless. Content creators are using them to shoot hands-free B-roll at events like CES. Parents are using them to record their kids playing without the weird “I am holding my phone up at the playground” vibe. Pet owners are capturing spontaneous moments with dogs and cats that would be impossible to get with a phone. These are not sci-fi use cases or metaverse fantasies. They are boring, real-world problems that the glasses solve immediately, and that is why they are selling. Meta has spent a decade chasing grand visions of the future, and it accidentally built a product that people want right now. The challenge is whether it can resist the urge to over-complicate it before Google, Samsung, and Apple catch up.

The Real Lesson Is About Focus

Meta has spent the last five years oscillating between grand visions, first the metaverse and then AI, while neglecting the products that actually work. The Ray-Ban Meta glasses are proof that when Meta focuses on solving real problems with tangible products, it can still build things people want. The metaverse failed because it was a solution in search of a problem, and the AI push is struggling because Meta is shipping features rather than products. Smart glasses, by contrast, are succeeding because they make everyday tasks easier without requiring users to change their behavior or buy into a futuristic narrative.

If Zuckerberg can internalize that lesson, Meta might actually have a shot at owning the next platform. But that requires a level of product discipline and restraint that Meta has not shown in years. It means resisting the urge to turn every product into a platform, admitting when a bet has failed rather than pouring another $10 billion into it, and focusing on iteration over narration. The irony is that Meta already has the right product. It just needs to stop looking past it.

The post Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It first appeared on Yanko Design.

Teenage Engineering-inspired Music Sampler Uses AI In The Nerdiest Way Possible

The T.M-4 looks like it escaped from Teenage Engineering’s design studio with a specific mission: teach beginners how to make music using AI without making them feel stupid or churning out slop. Junho Park’s graduation concept borrows all the right cues from TE’s playbook, the modular control layout, the single bold color, the mix of knobs and buttons that practically beg to be touched, but redirects them toward a gap in the market. Where Teenage Engineering designs for people who already understand synthesis and sampling, the T.M-4 targets people who have ideas but no vocabulary to express them. The device handles the technical translation automatically, separating audio into layers and letting you manipulate them through physical controls. It feels like someone took the OP-1’s attitude and wired it straight into an AI stem separator.

The homage succeeds because Park absorbed what makes Teenage Engineering products special beyond their appearance. TE hardware feels different because it removes friction between intention and result, making complex technology feel approachable through thoughtful interface design and immediate tactile feedback. The T.M-4 brings that same thinking to AI music generation. You’re manipulating machine learning model parameters when you adjust texture, energy, complexity, and brightness, but the physical controls make it feel like direct manipulation of sound rather than abstract technical adjustment. An SD card system lets you swap AI personalities like you would swap cartridges in a game console – something very hardware, very tactile, very TE. Instead of drowning in model settings, you collect cards that give the AI different characters, making experimentation feel natural rather than intimidating.

Designer: Junho Park

What makes this cool is how it attacks the exact point where most beginners give up. Think about the first time you tried to remix a track and realized you had no clean drums, no isolated vocals, nothing you could really move around without wrecking the whole thing. Here, you feed audio in through USB-C, a mic, AUX, or MIDI, and the system just splits it into drum, bass, melody, and FX layers for you. No plugins, no routing, no YouTube rabbit hole about spectral editing. Suddenly you are not wrestling with the file, you are deciding what you want the bass to do while the rest of the track keeps breathing.

The joystick and grid display combo helps simplify what would otherwise be a fairly daunting piece of gear. Instead of staring at a dense DAW timeline, you get a grid of dots that represent sections and layers, and you move through them like you are playing with a handheld console. That mental reframe matters. It turns editing into navigation, which is far less intimidating than “production.” Tie that to four core parameters, texture, energy, complexity, brightness, and you get a system that quietly teaches beginners how sound behaves without ever calling it a lesson. You hear the track get busier as you push complexity, you feel the mood shift when you drag energy down, and your brain starts building a map.

Picture it sitting next to a laptop and a cheap MIDI keyboard, acting as a hardware front end for whatever AI engine lives on the computer. You sample from your phone, your synth, a YouTube rip, whatever, then sculpt the layers on the T.M-4 before dumping them into a DAW. It becomes a sort of AI sketchpad, a place where ideas get roughed out physically before you fine-tune them digitally. That hybrid workflow is where a lot of music tech is quietly drifting anyway, and this concept leans straight into it.

Of course, as a student project, it dodges the questions about latency, model size, and whether this thing would melt without an external GPU. But as a piece of design thinking, it lands. It treats AI as an invisible assistant, not the star of the show, and gives the spotlight back to the interface and the person poking at it. If someone like Teenage Engineering, or honestly any brave mid-tier hardware company, picked up this idea and pushed it into production, you would suddenly have a very different kind of beginner tool on the market. Less “click here to generate a track,” more “here, touch this, hear what happens, keep going.”

The post Teenage Engineering-inspired Music Sampler Uses AI In The Nerdiest Way Possible first appeared on Yanko Design.