RayNeo Just Put Batman on $299 AR Glasses (And They’re Brilliant)

At some point between CES announcements and MWC reveals, someone at RayNeo had a genuinely inspired idea. They had built the world’s first AR glasses with HDR10 support, partnered with Bang & Olufsen on the audio, and engineered a display that could hold its own against high-end monitors. The product was technically impressive, competitively priced, and ready to ship. Then they added a Batman mask to it. Not a sticker, not a themed wallpaper, but an actual light-blocking cover that makes you look like you are about to patrol Gotham while watching movies on a 201-inch virtual screen.

This is the Air 4 Pro, unveiled at MWC 2026 in Barcelona, and it represents something rare in the wearables market: a product that takes itself seriously enough to deliver legitimate specs, but not so seriously that it forgets to be fun. The hardware alone would make this newsworthy. The fact that it comes with the option to cosplay as either Batman or the Joker while using it makes it irresistible.

Designer: RayNeo

Start with what matters most: the display. The Air 4 Pro is the world’s first AR glasses with HDR10 display support, which is a genuinely significant leap. Powered by RayNeo’s custom Vision 4000 chip, the display hits 1,200 nits of peak brightness, renders 10.7 billion colors with near-professional color accuracy (ΔE < 2), and runs at a smooth 120Hz refresh rate. We’re talking about a 201-inch virtual screen that sits in front of your eyes, with a 200,000:1 contrast ratio. That is the kind of color performance you would expect from a high-end monitor, not from something you’re wearing on your face.

The HDR10 support matters more than it might seem at first. It means that when you’re watching a movie or gaming with these on, the image is not being compressed into mediocrity. The Vision 4000 chip can also upgrade standard SDR content to HDR in real time, and there is an AI algorithm onboard that converts 2D content into 3D. These are not gimmick features. For anyone who has tried AR glasses before and felt vaguely disappointed by the visual output, this is the version that corrects the course.
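To make the real-time SDR-to-HDR conversion concrete: such converters are typically built on inverse tone mapping, which expands SDR’s limited luminance range toward the display’s wider HDR range. The sketch below is a deliberately simple illustration of that idea, not RayNeo’s actual (proprietary) algorithm; the function name and the gamma-decode-then-scale approach are assumptions for demonstration only.

```python
def sdr_to_hdr_nits(sdr_value, peak_nits=1200.0, gamma=2.4):
    """Toy inverse tone mapping: expand an 8-bit SDR code value
    (0-255) into an absolute HDR luminance in nits.

    Assumes a simple gamma decode followed by linear scaling to
    the display's peak brightness. Real converters use far more
    sophisticated, content-adaptive curves."""
    linear = (sdr_value / 255.0) ** gamma   # decode gamma to linear light
    return linear * peak_nits               # scale to the display's peak

# Full white lands on the Air 4 Pro's claimed 1,200-nit peak,
# while mid-gray maps to a small fraction of it.
print(round(sdr_to_hdr_nits(255), 1))  # 1200.0
print(round(sdr_to_hdr_nits(128), 1))
```

Even this toy version shows why the feature matters: a naive linear stretch would make mid-tones glaringly bright, whereas a gamma-aware expansion keeps them perceptually reasonable while letting highlights reach the panel’s full brightness.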

Audio-wise, RayNeo partnered with Bang & Olufsen on a self-developed sound tube design with a dual opposing acoustic chamber system. The result is reportedly an 80% reduction in sound loss compared to previous models. That is a partnership that immediately signals intent. Bang & Olufsen does not lend their name to anything half-hearted, and the presence of that collaboration here suggests that RayNeo is going after people who care about the full sensory experience, not just the display numbers.

The glasses weigh 76 grams, which is no small achievement given everything packed inside. They include interchangeable nose pads, TÜV SÜD certification for low blue light and flicker-free performance, and a 3,840Hz PWM hybrid dimming system for eye protection. It is the kind of spec sheet that feels increasingly grown-up.

And then there is the Batman Edition. RayNeo unveiled two limited versions at MWC 2026: the Limited Justice Edition, which is the Batman variant, and the Limited Chaos Edition, styled after the Joker. Both come with a light-shield cover that doubles as a cosplay accessory, blocking ambient light to sharpen your viewing experience while also making you look like you are about to interrogate someone in Gotham City. The packaging is loaded with DC-themed details, and buyers get to literally pick a side.

Is this a marketing stunt? Partially, yes. But it is a clever one, because the light-shield cover is functional, not just decorative. It actually solves a real problem AR glasses have always had in bright environments. The fact that it also looks incredible is a bonus that makes this feel less like a product and more like a collectible.

My honest take is that the Batman collaboration is what will get people through the door, but the hardware is what will make them stay. At $299, with an early bird price of $249 through March 28, the Air 4 Pro is not cheap, but it is positioned well against the competition. It works with iPhones, Android flagships, PS5, Nintendo Switch 2, and most modern devices, which removes a lot of the friction that has held wearables back.

RayNeo has clearly done its homework. The Air 4 Pro is not trying to replace your phone or your TV. It is offering a better version of the portable screen experience, and the Batman costume is just the perfect way to announce it.

The post RayNeo Just Put Batman on $299 AR Glasses (And They’re Brilliant) first appeared on Yanko Design.

Meta Wants to Put an AI Health Tracker on Your Wrist in 2026. What Could Go Wrong??

Meta is building a smartwatch, and it wants to know your heart rate, your sleep patterns, your activity levels, and whatever else it can pull from a sensor pressed against your skin all day. The device is codenamed Malibu 2, it’s targeting a 2026 launch, and by most accounts it sounds like a perfectly competent health wearable. The problem isn’t the hardware. The problem is the company attached to it.

This is the same Meta that just faced congressional scrutiny over social media addiction. The same Meta whose smart glasses are reportedly inching toward facial recognition. The same Meta that filed a patent for Project Lazarus, a system designed to generate posthumous content from deceased users, because apparently your data doesn’t stop being useful to them just because you do. Handing your most intimate biometric information to that company requires a degree of trust its track record has not earned.

Designer: Meta

To be fair, the product itself has a coherent logic behind it. Meta’s Ray-Ban Display glasses have been received surprisingly well by the press, and the neural wristband that ships with them, which uses electromyography to read muscle signals and translate them into gestures, only works with those glasses. That’s a real limitation. A smartwatch that absorbs that gesture-control functionality while adding health tracking and a persistent AI assistant would close a gap that currently makes the whole setup feel incomplete. From a pure product strategy standpoint, Malibu 2 makes sense.

The hardware ambitions have also matured since Meta’s first attempt at a smartwatch, which was scrapped in 2022 after accumulating plans for detachable cameras and metaverse tie-ins that never quite added up to a coherent device. Malibu 2 is reportedly focused on health tracking and Meta AI integration, which is a much cleaner pitch. The company already has a working partnership with Garmin, visible in the Oakley Vanguard sports glasses and a neural band demo at CES 2026 inside a Garmin-powered car concept. If there’s a natural manufacturing and platform partner for this watch, Garmin is the obvious candidate.

Meta is also reportedly developing the watch to sit alongside updated Ray-Ban Display glasses, internally called Hypernova 2, with both devices likely to be unveiled at Meta Connect in September. The Phoenix mixed reality glasses, meanwhile, have been pushed to 2027 partly because Meta’s executives were concerned about releasing too many devices at once and confusing consumers. That’s a reasonable concern. It’s also a little rich coming from a company whose current product lineup already includes smart glasses with a separate neural band that only controls one device.

The wearables market is genuinely ready for a credible third competitor alongside Apple and Samsung, and Meta has the AI infrastructure and the existing glasses ecosystem to make Malibu 2 compelling from launch. But compelling and trustworthy are different things, and Meta has spent twenty years demonstrating which one it prioritizes. Your Apple Watch data sits in Apple’s ecosystem, behind a company that has made privacy a marketing pillar and a legal battleground. Your Malibu 2 data sits with a company that patented a way to keep monetizing you after you die.

The post Meta Wants to Put an AI Health Tracker on Your Wrist in 2026. What Could Go Wrong?? first appeared on Yanko Design.

Smart Ring With A Built-in Screen Also Doubles As An AI-Assistant Pendant Wearable

Technology often evolves in dramatic spikes – brighter displays, sharper cameras, smarter assistants – but the real breakthroughs are usually quieter. As our devices become smaller and more personal, the focus shifts from adding features to removing friction. The most compelling wearables are the ones that disappear into your routine, responding instinctively without demanding attention. Dribble explores exactly that future, transforming subtle human expression into a seamless digital command system.

Dribble is a pill-shaped wearable built around silent speech recognition. Instead of relying on audible voice commands, the AI-powered gadget interprets lip movements and whispered articulations through integrated microphones and an under-display front camera sensor. It focuses on the physical mechanics of speech rather than the sound itself, allowing users to communicate with digital systems without speaking out loud or lifting a hand.

Designer: Kangmin Park

The vision is straightforward but ambitious: a smartphone-free lifestyle driven by subtle interaction. With gentle touches and silent articulation, users can reply to messages, take calls, or initiate pre-programmed email responses. Everything happens discreetly through the wearable, eliminating the awkwardness of wake words or public voice commands. In professional settings or crowded environments, this approach prioritizes privacy while maintaining efficiency.

Form plays a crucial role in making this concept believable. Dribble is designed to sit comfortably on the index finger, maintaining a compact and ergonomic presence that doesn’t compete with daily wear. Its minimal aesthetic reinforces the idea of technology that blends rather than dominates. A subtle integrated screen reduces visual dependency, encouraging users to stay engaged with their surroundings instead of constantly glancing at a phone.

Versatility is another defining element. Beyond its ring-like configuration, Dribble can shift into a necklace mode, taking on a gem-like appearance that doubles as a fashion accessory. It can also be worn on the wrist or attached to a backpack, adapting to personal style and functional needs. This flexibility positions it not just as a utility device, but as an extension of identity.

The wearable extends its capabilities beyond communication. Built-in sensors monitor vital health parameters, including heart rate, blood oxygen saturation, and stress levels. Pleasant vibration alerts notify users discreetly, reinforcing its role as both a lifestyle and wellness companion. The integration of health tracking adds depth to the concept, aligning it with the broader direction of modern wearable technology.

Dribble also carries meaningful implications for accessibility and safety. Hands-free, silent interaction could benefit individuals with limited mobility or those working in hands-busy environments, such as driving or technical operations. By removing the need for touchscreens or audible speech, it introduces a new layer of intuitive control.

Although still a concept, the project is presented with product-level detailing. Size options ranging from 40mm to 50mm suggest adaptability for different users, while a Plus model promises enhanced ergonomics and advanced features.

The post Smart Ring With A Built-in Screen Also Doubles As An AI-Assistant Pendant Wearable first appeared on Yanko Design.

Forget Step Counters: Dreame’s New Smart Rings Focus On ECG Reports, Sleep, And Real-Time Emotion Data

On any given game day, millions of us become amateur analysts, dissecting every play and scrutinizing every statistic that flashes across the screen. We track player performance with an almost scientific rigor, celebrating the numbers that signal a win and debating the metrics that lead to a loss. This deep dive into data has fundamentally changed how we watch sports, turning passive viewing into an interactive, analytical experience. Yet, for all the attention we pay to the athletes’ performance, our own physiological journey as spectators has remained completely invisible.

Dreame’s new AI Smart Ring proposes a fascinating shift in perspective, turning the sensor technology usually reserved for athletes inward on the audience. The ring’s most ambitious feature, an AI-powered emotion index, aims to quantify the rollercoaster of being a fan, tracking how your body reacts to every thrilling victory and agonizing fumble. It represents a new frontier for wearables, one less concerned with counting your steps and more interested in mapping your heart’s response to the passions that drive you. It is pro-level analytics for the rest of us.

Designer: Dreame

Instead of launching just one device, Dreame is splitting its ambition into a two-ring strategy, which is a seriously interesting market play. The company is effectively acknowledging that “health tracking” means different things to different people. For some, it is about hard, clinical data and safety nets. For others, it is about lifestyle, self-awareness, and emotional insight. So, rather than making one ring that tries to do everything, they have created two distinct products: the Dreame Health Ring, launching in early March, and the Dreame AI Smart Haptic Ring, which is slated for the second half of the year.

The Dreame Health Ring is the more advanced and serious of the two. This is the one aimed squarely at users who want professional-grade monitoring and peace of mind. Its headline feature is the ability to generate ECG reports on demand, moving it closer to a medical-grade device than a typical fitness tracker. It is built around a core of accurate health monitoring and safety alerts, using AI-driven analysis to flag potential issues. Think of this as the quiet, reassuring guardian, focused on delivering vital health data you can potentially share with a doctor, rather than tracking your mood during a movie.

Landing later this year, the Dreame AI Smart Haptic Ring is the lifestyle-focused sibling. You are looking at a 2.5 mm thin body that is about 7.5 mm wide and weighs a featherlight 5.2 grams. The outside is a microcrystalline zirconia nano-ceramic with a Mohs hardness of 8, while the inner band is a slick antibacterial alloy. This ring is all about AI-driven health and sleep tracking, but with a focus on interpretation and daily living. It is designed to be the wearable you forget you are even wearing.

Packed inside that tiny frame is the trifecta of modern health sensors: PPG for heart rate and SpO₂, a temperature sensor, and an accelerometer. This all feeds into the AI sleep algorithms that Dreame claims can nail your REM, deep, and light sleep stages with less than a 5 percent error rate. The AI ring tracks all your key vitals 24/7 and holds about a week of data offline, which is exactly how these trackers should work. But where the Health Ring focuses on ECGs, the AI ring uses this data to power its more experimental features.

This is where we get to the AI ring’s headline feature: the emotion sensing. It claims it can generate a real-time emotion index with 92 percent accuracy. Now, is it going to replace your therapist? Absolutely not. But that is not the point. The real value is in the biofeedback. It is a tool for spotting patterns, for seeing a data-driven trace of how your body reacted to a stressful day while your brain was telling you everything was fine. It is a fascinating, and potentially humbling, new layer of self-awareness that separates it from the more advanced Health Ring.

The design of the AI ring is meant to be invisible. It is a screenless, silent loop of ceramic. Instead of a screen, you get a tiny vibration motor inside for its AI Haptic Alerts: a subtle tap on your finger for a call or message, not a jarring buzz that makes everyone in the room look at you. Those haptics also support tap gestures for controlling music or snapping a photo. The battery life reflects this always-on philosophy, with about a week on the ring itself and a charging case that Dreame claims stretches total use to at least 100 days before you need a wall outlet.

So why are we seeing this two-ring strategy pop up around Championship Sunday? It is a smart move. It frames the brand not as just another gadget maker, but as a company thinking deeply about the future of personal health. We are obsessed with the analytics of pro athletes, tracking every metric to understand their performance. Dreame is betting that we are finally ready to apply that same level of nerdy obsession to ourselves, and by offering two distinct paths, they are letting us choose just how deep we want that data to go.

The post Forget Step Counters: Dreame’s New Smart Rings Focus On ECG Reports, Sleep, And Real-Time Emotion Data first appeared on Yanko Design.

PlayStation XR Glasses Concept Makes a Strong Case for Gaming-Focused AR Wearables

Meta talks about XR glasses as companions for your social life. Snap a photo, answer a call, ask an AI what you are looking at. The PlayStation XR Glasses concept spins that idea toward a different center of gravity. Here, the glasses are not about broadcasting your world. They are about pulling the PlayStation universe closer, shrinking the distance between you, your console, and the screen that usually sits across the room.

Here, XR is not a spectacle. It is a subtle layer that folds into your existing PlayStation life. Imagine a virtual screen hovering above your TV stand, system notifications floating at the edge of your vision, a familiar PS logo resting by your temple like the Start button you have pressed a thousand times. The fantasy is not about replacing your PS5, but about letting its world follow you from couch to desk to bed, quietly, through something that looks like ordinary eyewear.

Designer: Shirish Kumar

The frames carry the same visual language as the PS5 and DualSense controller, all smooth curves and deliberate angles that look cohesive sitting next to your console. That blue accent lighting running along the temples is pure PlayStation branding, the kind of detail that works because it feels earned rather than slapped on. The folding hinge reveals those iconic button symbols when you open the arms, which is a nice touch that reinforces you are holding a gaming device that happens to look like eyewear. Whether Sony’s actual industrial design team would ever build something this sleek is another question entirely, but as a design exercise, it holds together.

There is a front-facing camera tucked under the lenses for object tracking and AR overlays, auto-adjusting lenses that darken outdoors and clear indoors, embedded sensors for a heads-up display, gesture controls for navigation. The PS logo on the temple supposedly works like a button, tap for Start and hold for Home, mirroring your muscle memory from the controller. All of that sounds good on paper. The real question is what you actually do with these once they are on your face. Existing PlayStation games would almost certainly run as a virtual screen floating in your field of view, basically a private monitor you wear instead of stare at. True AR gameplay where Aloy from Horizon is dodging around your coffee table requires games built specifically for that, and Kumar does not show or describe any of those experiences.

What this concept does well is stake out a different philosophy for XR glasses. Where Meta wants social connectivity and Apple is aiming for spatial computing as a productivity play, this imagines gaming-first hardware that extends an existing ecosystem rather than trying to create a new one. Whether that is enough to justify another screen in your life is the question every XR device has to answer eventually. For now, it is a polished look at what Sony could build if they decided lightweight AR glasses were the next logical step after VR headsets and portable screens.

The post PlayStation XR Glasses Concept Makes a Strong Case for Gaming-Focused AR Wearables first appeared on Yanko Design.

Forget CarPlay: Sherpa’s AR Glasses Decode Road Signs and Dashboard Icons For Nervous New Drivers

The AR glasses market keeps promising us augmented productivity and enhanced experiences, then delivering expensive ways to check notifications without pulling out your phone. Sherpa takes a different approach by targeting a specific moment of genuine incompetence: those first few months behind the wheel when every intersection feels like a pop quiz you didn’t study for. The concept uses heads-up displays to overlay directional cues and translate dashboard indicators, theoretically keeping your eyes on the road instead of darting between the windshield and that mysterious warning light.

What makes this Hongik University project interesting isn’t the hardware, which looks like standard-issue smart glasses in white plastic. It’s the learning system built around it. After each drive, the companion app analyzes your performance and identifies patterns in your mistakes. Miss the same type of turn signal three times? The AI notices. Struggle with a particular intersection? It breaks down what went wrong. Most new drivers get feedback in the form of angry horns and passenger-seat panic. This proposes something more useful, assuming you’re willing to let an algorithm critique your lane changes.

Designers: Yeongjun Yun, Jaeyun Lee

The hardware itself follows the current playbook for consumer AR: rounded frames thick enough to house display optics, visible sensor cutouts on the nose bridge (likely cameras for environmental and eye tracking), and an adjustable temple mechanism that looks borrowed from premium eyewear design. They’ve skipped the usual temptation to make it look aggressively futuristic, which matters when your target audience already feels self-conscious about their driving abilities. The cylindrical charging case suggests they’re thinking about daily use patterns rather than occasional deployment, treating this like essential equipment you grab before every drive during those first nervous months.

Where this gets genuinely clever is the integration with what they’re calling SDV, or software-defined vehicles. Modern cars already collect absurd amounts of data through their sensor arrays. Sherpa appears designed to tap into that information stream and translate it into actionable guidance. The system knows when you’ve entered a complex intersection, can read your hesitation through eye tracking, and overlay exactly what you’re supposed to watch for at that moment. Then it remembers that you struggle with this specific scenario and adjusts future guidance accordingly.
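The pattern-spotting described above, noticing that you fumble the same maneuver or intersection type repeatedly, is at its simplest a matter of frequency counting over tagged driving events. The sketch below illustrates that core idea; the event names, data structure, and threshold are entirely hypothetical and not taken from the Sherpa concept itself.

```python
from collections import Counter

def recurring_mistakes(drive_log, threshold=3):
    """Flag mistake types that recur often enough to warrant coaching.

    `drive_log` is a list of (mistake_type, location) tuples -- a
    purely illustrative stand-in for whatever telemetry a system
    like Sherpa would actually record from the vehicle and glasses."""
    counts = Counter(kind for kind, _ in drive_log)
    return [kind for kind, n in counts.items() if n >= threshold]

# Three missed turn signals across different streets trips the
# threshold; one late-braking event does not.
log = [
    ("missed_turn_signal", "5th Ave"),
    ("missed_turn_signal", "Main St"),
    ("late_braking", "Main St"),
    ("missed_turn_signal", "Oak Rd"),
]
print(recurring_mistakes(log))  # ['missed_turn_signal']
```

The real system would presumably weight events by severity and context rather than raw counts, but the principle, aggregate tagged mistakes and surface the recurring ones, is the same.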

Unlike entertainment-focused AR wearables, this actually solves a real use case, which puts it ahead of most AR glasses the industry is trying to push down our throats. Driving schools teach you mechanics but abandon you at the precise moment when contextual learning would help most. If Sherpa can fill that gap between instruction and competence, it might be the first consumer AR application that people actually need rather than tolerate. Whether novice drivers will adopt glasses that broadcast their inexperience is a different question entirely, but at minimum someone’s finally asking AR to do actual work.

The post Forget CarPlay: Sherpa’s AR Glasses Decode Road Signs and Dashboard Icons For Nervous New Drivers first appeared on Yanko Design.

Samsung’s New Wearable Audio Concept Looks More Like Jewelry Than Tech

Wearable technology has spent too long looking like wearable technology. Slac breaks that mold with a refreshingly honest approach: if something lives on your body all day, it should look like it belongs there. The circular ear ring and accompanying wrist piece read more like contemporary jewelry than consumer electronics, which is exactly the point.

This concept taps into how Gen Z actually relates to their audio devices. These aren’t tools you begrudgingly carry. They’re expressions of taste, mood shapers, and now with Slac, genuinely attractive accessories. The open hoop design that hugs your ear offers a sculptural quality that traditional earbuds simply can’t match. When paired with the sleek wrist component, you get a cohesive audio system that understands fashion and function aren’t opposing forces. They’re partners in creating technology people actually want to wear.

Designers: Youngha Rho, Minchae Kim, Doa Kim, Si Heon Song, Seunghee Kim

Three components make up the full system: an open ear ring handling audio output, a wrist-worn ring tracking your listening data, and a home charging station. That circular form factor pulls double duty in ways most earbud designs completely miss. Wrapped around your ear, it creates this architectural presence without jamming anything into your ear canal. You stay aware of conversations, traffic, your entire sonic environment while your music layers on top. When you’re done listening, the ear ring snaps magnetically onto the wrist component, transforming the whole setup into what reads as a chunky watch band or bracelet. Nobody’s shoving these into a pocket case like loose change.

The AI running behind the scenes tracks your full 24-hour audio cycle and starts building preference profiles automatically. Machine learning analyzes sound intensity, pitch variations, and tonal characteristics from everything flowing through those ear rings. Cycling to work means you probably want traffic noise punched up alongside your playlist. Grinding through spreadsheets at a coffee shop means the background chatter gets filtered while your focus playlist stays crisp. The system generates these sound filtering categories in real time, and you can tweak individual layers through sliders in the app. Boost voices, drop mechanical hum, amplify nature sounds, whatever combination your brain needs in that specific moment.
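The per-layer sliders described here amount to a gain mixer applied over separated audio categories. The sketch below shows that mixing step in its simplest form; the function name and layer labels are hypothetical, and a real system would first have to separate these layers from the raw audio with a source-separation model before any gains could be applied.

```python
def mix_layers(layers, gains):
    """Mix labeled audio layers sample-by-sample, scaling each layer
    by its user-set gain (1.0 = unchanged, 0.0 = muted).

    `layers` maps a category name (e.g. "voice", "hum") to a list of
    audio samples of equal length. Layers without an explicit gain
    pass through unchanged."""
    length = len(next(iter(layers.values())))
    mixed = [0.0] * length
    for name, samples in layers.items():
        gain = gains.get(name, 1.0)  # default: pass through
        for i, sample in enumerate(samples):
            mixed[i] += gain * sample
    return mixed

# Boost voices, mute the mechanical hum, leave the music untouched.
out = mix_layers(
    {"voice": [0.2, 0.4], "hum": [0.1, 0.1], "music": [0.3, 0.3]},
    {"voice": 1.5, "hum": 0.0},
)
```

The interesting engineering is all upstream of this function, in separating the layers cleanly in real time, but the mixer itself is why per-slider control can feel so direct: each category is just a coefficient.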

They’ve included this gesture control called “Slate” that actually seems thought through. You rotate your hand in a circular motion while wearing both rings, mimicking that clapperboard snap before a film take. One rotation flips you between content-focused mode and environment-focused mode. Your podcast drops to background levels while street sounds come forward, or vice versa. No app diving, no button fumbling, just a quick physical gesture.

The aesthetic commits fully to the jewelry angle without hedging. Both black and metallic colorways show up in the renders, and that wrist component carries enough visual mass to register as intentional rather than apologetic. You could wear this setup to contexts where regular earbuds feel socially awkward. Dinner with your partner’s parents, a work presentation, anywhere those telltale white stems signal that you’re half-checked-out. This project emerged from a design team working within Samsung’s development programs, and you can feel years of wearable experience informing every choice. Slac points toward where personal audio needs to go: context awareness, all-day wearability, and designs that enhance your aesthetic rather than forcing compromises.

Will this exist any time soon? I honestly doubt it. A lot of these large-scale internship/incubation programs are aimed at imagining an alternate reality or future and working to build the technology in that direction, in the hopes that insights and innovations will trickle into existing products. The Slac, as we see it, probably won’t exist… but its overarching theme of technology as jewelry is already fairly popular. Smartwatches and AI Pins are a great example of this, and given how often we already wear TWS earbuds, the idea of an earbud that also masquerades as jewelry seems like a fairly clever route…

The post Samsung’s New Wearable Audio Concept Looks More Like Jewelry Than Tech first appeared on Yanko Design.

Meta Quest 3 feature turns any flat surface into functional Surface Keyboard with trackpad

Meta may have discontinued the Quest for Business program intended for its Quest 3, but Horizon OS v85 is set to introduce some niche features to the VR headset, starting with the Navigator UI replacing Horizon Feed as the default. Beyond that, if the new Horizon OS v85 Public Test Channel (PTC) is any indication, the headset will be able to turn any flat surface – a table or a desk – into a virtual keyboard you can type on like a physical one.

The PTC for Quest OS v85 has started rolling out, and initial YouTube hands-on reviews and forum discussions reveal the virtual keyboard is available as an experimental feature exclusively on the Quest 3. The Quest 3S appears to have been left out (the reason is not apparent at the time of writing). The keyboard appears like magic on a table, turning it into a typing surface complete with a trackpad.

Designer: Meta

The feature is called Surface Keyboard, and it overlays a keyboard on top of any surface you want. With a tap on the handheld controllers, you can switch seamlessly between the virtual keyboard and the controllers. If mixed reality and hand tracking have always excited you, v85 of the operating system is going to take that Quest 3 experience to a new level.

To truly live this fiction, where you place your hands on a table for a couple of seconds, a keyboard appears out of nowhere right where your hands were, and you can start typing with no buttons and no configuration, you will need to opt in to the Horizon OS PTC and receive the pre-release build to toy with.

If we remember correctly, Meta has been working on a virtual keyboard of this kind for the better part of a decade. In fact, it was in 2023 that Mark Zuckerberg demoed it and claimed he could reach 100 words per minute. Going by the videos and reviews floating around online, the keyboard will take some getting used to. That said, the setup is easy and straightforward.

Once you have opted in to the PTC, go to Movement Tracking and enable hand and body tracking, along with the double-tap-controllers gesture for hand tracking (so you can switch between controllers and keyboard). Next, enable Surface Keyboard under the Experimental section (it is flagged as unstable). Once that's done, go to Devices, click Keyboard, and run the setup, and you're set. Place your hands flat on a surface, and in seconds, a keyboard will appear where your hands are.


The post Meta Quest 3 feature turns any flat surface into functional Surface Keyboard with trackpad first appeared on Yanko Design.

Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It

Mark Zuckerberg changed his company’s name to Meta in October 2021 because he believed the future was virtual. Not just sort-of virtual, like Instagram filters or Zoom calls, but capital-V Virtual: immersive 3D worlds where you’d work, socialize, and live a parallel digital life through a VR headset. Four years and roughly $70 billion in cumulative Reality Labs losses later, Meta is quietly dismantling that vision. In January 2026, the company laid off around 1,500 people from its metaverse division, shut down multiple VR game studios, killed its VR meeting app Workrooms, and effectively admitted that the grand bet on virtual reality had failed. Investors barely blinked. The stock went up.

The official line now is that Meta is pivoting to AI and wearables. Zuckerberg spent much of 2025 building what he calls a “superintelligence” lab, hiring top-tier AI talent with eye-watering compensation packages that are now one of the largest drivers of Meta’s 2026 expense growth. The company released Llama models that benchmark decently against OpenAI and Google, embedded chatbots into WhatsApp and Instagram, and talks constantly about “AI agents” and “new media formats.” But from a product and profit perspective, Meta’s AI strategy looks suspiciously like its metaverse strategy: lots of spending, vague promises, and no breakout consumer experience that people actually love. Meanwhile, the thing that is quietly working, the thing people are buying and using in the real world, is a pair of $300 smart glasses that Meta barely talks about. If this sounds like a pattern, that’s because it is. Meta has now misread the future twice in a row, and both times the answer was hiding in plain sight.

The Metaverse Was a $70 Billion Fantasy

Reality Labs has been hemorrhaging money since late 2020. As of early 2026, cumulative operating losses sit somewhere between $70 and $80 billion, depending on how you slice the quarters. In the third quarter of 2025 alone, Reality Labs posted a $4.4 billion loss on $470 million in revenue. For 2025 as a whole, the division lost more than $19 billion. These are not rounding errors or R&D investments that will pay off next year. These are structural losses tied to a product category, VR headsets and metaverse platforms, that the market simply does not want at the scale Meta imagined.

The vision sounded compelling in a keynote. You would strap on a Quest headset, meet your coworkers in a virtual conference room with floating whiteboards, then hop over to Horizon Worlds to hang out with friends as legless avatars. The problem was that almost no one wanted to do any of that for more than a demo. VR remained a niche gaming platform with occasional fitness and entertainment use cases, not the next paradigm shift in human interaction. Zuckerberg kept insisting the breakthrough was just around the corner. He was wrong, and the January 2026 layoffs and studio closures were the formal acknowledgment that Reality Labs as originally conceived was dead.

The irony is that Meta actually had a potential killer app inside Reality Labs, and it murdered it. Supernatural, a VR fitness game that Meta acquired for $400 million in 2023, was one of the few pieces of Quest software that generated genuine user loyalty and recurring revenue. People who used Supernatural regularly described it as the most effective home workout they had ever done, combining rhythm-based gameplay with full-body movement in a way that treadmills and Peloton bikes could not replicate. It had a subscription model, a dedicated community, and real retention. In January 2026, Meta moved Supernatural into “maintenance mode,” which is corporate speak for “we fired almost everyone and it will get no new content.” If you are trying to prove that VR has mainstream utility beyond gaming, fitness is one of the most obvious wedges. Meta had that wedge, and it chose to kill it in the same round of cuts that shuttered studios working on Batman VR games and other prestige titles. The message was clear: Zuckerberg had lost interest in Quest, even the parts that worked.

The AI Bet That Looks Like Metaverse Bust 2.0

After spending years insisting the future was virtual worlds, Meta pivoted hard to AI in 2023 and 2024. Zuckerberg now talks about AI the way he used to talk about the metaverse: with sweeping language about paradigm shifts and transformative platforms. The company stood up an AI division focused on building what it calls “superintelligence,” hired aggressively from OpenAI and Anthropic, and made technical talent compensation the second-largest contributor to Meta’s 2026 expense growth behind infrastructure. This is not a side project. Meta is spending billions on AI research, training, and deployment, and Zuckerberg expects losses to remain near 2025 levels in 2026 before they start to taper.

From a technical standpoint, Meta’s AI work is solid. The Llama family of models is legitimately competitive with GPT-4 class systems and has found real adoption among developers who want open-source alternatives to OpenAI and Google. Meta’s internal AI is also driving real business value in ad targeting, content ranking, and moderation. Those systems work, and they contribute directly to Meta’s core revenue. But from a consumer product perspective, Meta’s AI feels scattered and often unnecessary. The company has embedded “Meta AI” chatbots into WhatsApp, Instagram, Messenger, and Facebook, none of which feel like natural places for a chatbot. Instagram’s feed is increasingly stuffed with AI-generated images and engagement bait that users actively complain about. Meta has launched character-based AI bots tied to influencers and celebrities, and approximately no one uses them. The gap between “we have impressive models” and “we have a product people love” is enormous, and it is the exact same gap that sank the metaverse.

What Meta is missing, again, is product intuition. OpenAI built ChatGPT and made it feel like the future because the interface was simple, the use cases were obvious, and it delivered consistent value. Google integrated Gemini into Search and productivity tools where users were already working. Meta, by contrast, seems to be throwing AI at every surface it controls and hoping something sticks. Zuckerberg talks about “an explosion of new media formats” and “more interactive feeds,” which in practice means more algorithmic slop and fewer posts from people you actually know. Analysts are starting to notice. One Bernstein note from early 2026 argued that the “winner” criteria in AI is shifting from model quality to product usage, which is a polite way of saying that having a great model does not matter if your product is annoying. Meta has a great model. Its products are annoying.

The financial picture is also murkier than Meta would like to admit. Reality Labs is still losing close to $20 billion a year, and while AI is not a separate reporting segment, the talent and infrastructure costs are clearly rising. Meta's overall revenue growth is strong, driven by advertising, but the company is not yet showing a clear path to AI profitability outside of ad optimization. That puts Meta in the awkward position of having pivoted from one unprofitable moonshot (metaverse) to another potentially unprofitable moonshot (consumer AI products) while the actual profitable parts of the business, social ads and engagement, keep the lights on. This is a pattern, and it is not a good one.

The Smart Glasses Lead That Meta Is Poised to Lose

Meta talks about the Ray-Ban smart glasses constantly. Zuckerberg calls them the “ultimate incarnation” of the company’s AI vision, and the pitch is relentless: sales more than tripled in 2025, the glasses represent the future of ambient computing, this is the post-smartphone platform. The problem is not that Meta is ignoring the glasses. The problem is that Meta is about to squander a massive early lead, and the competition is closing in fast. 2026 is shaping up to be a blockbuster year for smart glasses. Samsung confirmed its AR glasses are launching this year. Google is releasing its first pair of smart glasses since 2013, an audio-only pair similar to the Ray-Ban Meta glasses. Apple is reportedly pursuing its own smart glasses and shelved plans for a cheaper Vision Pro to prioritize the project. Meta dominated VR because it was early, cheap, and had no real competition. In smart glasses, that window is closing fast, and the field is getting crowded with all kinds of names, from smaller players like Looktech and Xgimi’s Memomind to mid-sized brands like Xreal, to even larger ones like Google, TCL, and Xiaomi.

The Ray-Ban Meta glasses work because they are simple and focused. They take photos and videos, play music, make calls, and provide real-time answers through an AI assistant. Parents use them to record their kids hands-free. Travelers use them for translation. The form factor, actual Ray-Ban Wayfarers that cost around $300, means they do not scream “I am wearing a computer on my face.” This is the rare Meta hardware product that feels intuitive rather than forced, and it is selling because it solves boring, everyday problems without requiring users to change their behavior.

Then Meta made a critical mistake. To use the glasses, you have to route everything through the Meta AI app, which means you cannot just power-use the hardware without engaging with Meta’s AI-slop ecosystem. Want to access your photos? Meta AI. Want to tweak settings? Meta AI. The app is the mandatory gateway, and it is stuffed with the same kind of algorithmic recommendations and AI-generated suggestions that clutter Instagram and Facebook. Instead of letting the glasses be a clean, utilitarian tool, Meta is using them as another vector to push its AI products. Google and Samsung are not going to make that mistake. Their glasses will integrate with Android XR and existing ecosystems without forcing users into a single AI app. Apple, if and when it launches, will almost certainly take a similar approach: clean hardware, seamless OS integration, optional AI features. Meta had a head start, Ray-Ban branding, and a product people actually liked. It is on track to waste all of that by prioritizing AI evangelism over product discipline, and the competition is going to eat its lunch.

What Happens When You Chase Narratives Instead of Products

The pattern across metaverse and AI is that Meta keeps betting on big, abstract visions rather than iterating on the things that work. Zuckerberg is a narrative-driven founder. He wants to define the future, not respond to it. That impulse gave us Facebook in 2004, when no one else saw the potential of real-identity social networks, but it has led Meta astray repeatedly in the 2020s. The metaverse was a narrative, not a product. The idea that billions of people would strap on headsets to work and socialize in 3D was always more science fiction than product roadmap, but Zuckerberg committed so hard to it that he renamed the company.

AI feels like the same mistake. The narrative is that foundation models and “agents” will transform every part of computing, and Meta wants to be seen as a leader in that transformation. The actual products, chatbots in WhatsApp and AI-generated feed content, do not meaningfully improve the user experience and in many cases make it worse. Meanwhile, the thing that is working, smart glasses, does not fit cleanly into the AI or metaverse narrative, so it gets less attention and investment than it deserves. Meta’s 2026 strategy, “shifting investment from metaverse to wearables,” is a tacit admission of this, but it is couched in language that still emphasizes AI rather than the hardware itself.

The other pattern is that Meta is willing to kill its own successes if they do not fit the broader narrative. The hit VR fitness game on Meta’s Horizon, Supernatural, was working. It had subscribers, retention, and cultural momentum within the VR fitness community. It was also a relatively small, specific product rather than a platform play, and that made it expendable when Meta decided to scale back Reality Labs. The same logic applies to Quest more broadly. The headset had carved out a niche in gaming and fitness, and with sustained investment in content and ecosystem development, it could have grown into a meaningful adjacent business. Instead, Meta is deprioritizing it because Zuckerberg has decided the future is AI and lightweight wearables. That might turn out to be correct, but the way Meta is executing the pivot, by shuttering studios and putting products in maintenance mode rather than spinning them out or finding partners, suggests a lack of product discipline.

Why Smart Glasses Might Actually Be the Next Facebook

If you step back and ask what Meta is actually good at, the answer is not virtual reality or language models. Meta is good at building social products with massive scale, capturing and distributing content, and monetizing attention through ads. The Ray-Ban Meta glasses fit all of those strengths. They make it easier to capture photos and video, which feeds into Instagram and Facebook. They use AI to provide contextual information, which ties into Meta’s model development. And they are a physical product that people wear in public, which is a form of distribution and branding that Meta has never had before.

The bigger story is that smart glasses as a category are exploding, and Meta happened to be early. It is not just Samsung, Google, and Apple entering the space. Meta itself is expanding the Ray-Ban line with the Display model (which adds a heads-up display) and partnering with Oakley on the HSTN, a sportier model aimed at action sports. Google is teaming up with Warby Parker for its glasses, which gives it instant credibility in eyewear design. And then there are the startups: Even Realities, Xiaomi, Looktech, MemoMind, and dozens more, all slated for 2026 releases. This feels exactly like the moment AirPods sparked the true wireless earbud movement. Apple defined the format, then everyone from Samsung to Sony to no-name brands flooded the market, and now you can buy HMD ANC earbuds for 28 dollars. Smart glasses are following the same trajectory, which means the form factor itself is validated, and Meta's early lead matters less than whether it can keep iterating faster than everyone else.

The other underrated piece is that having an instant camera on your face is genuinely useful in ways that VR headsets never were. People are using Ray-Ban Meta glasses as GoPro alternatives while skateboarding, cycling, and doing action sports, because POV capture without holding a phone or mounting a camera is frictionless. Content creators are using them to shoot hands-free B-roll at events like CES. Parents are using them to record their kids playing without the weird “I am holding my phone up at the playground” vibe. Pet owners are capturing spontaneous moments with dogs and cats that would be impossible to get with a phone. These are not sci-fi use cases or metaverse fantasies. They are boring, real-world problems that the glasses solve immediately, and that is why they are selling. Meta has spent a decade chasing grand visions of the future, and it accidentally built a product that people want right now. The challenge is whether it can resist the urge to over-complicate it before Google, Samsung, and Apple catch up.

The Real Lesson Is About Focus

Meta has spent the last five years oscillating between grand visions, metaverse and AI, and neglecting the products that actually work. The Ray-Ban Meta glasses are proof that when Meta focuses on solving real problems with tangible products, it can still build things people want. The metaverse failed because it was a solution in search of a problem, and the AI push is struggling because Meta is shipping features rather than products. Smart glasses, by contrast, are succeeding because they make everyday tasks easier without requiring users to change their behavior or buy into a futuristic narrative.

If Zuckerberg can internalize that lesson, Meta might actually have a shot at owning the next platform. But that requires a level of product discipline and restraint that Meta has not shown in years. It means resisting the urge to turn every product into a platform, admitting when a bet has failed rather than pouring another $10 billion into it, and focusing on iteration over narration. The irony is that Meta already has the right product. It just needs to stop looking past it.

The post Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It first appeared on Yanko Design.