Rokid’s Smart Glasses Let You Pick Your AI: Gemini or ChatGPT

Most wearable tech that puts an AI assistant in your ear assumes you want only theirs. The earpiece, the speaker, the entire software stack, all funneled through one model chosen for you before you even open the box. Rokid’s latest update to the AI Glasses Style takes a different position entirely, turning the glasses into what is effectively an open platform where you pick the brain behind the voice.

The update makes the Style the first smart glasses to natively support Google’s Gemini, sitting alongside OpenAI’s ChatGPT, DeepSeek, and Alibaba’s Qwen in a unified interface. Users can toggle between them freely: reach for Gemini for a quick Google Maps query, then switch to ChatGPT for something else entirely.

Designer: Rokid

The glasses themselves debuted at CES 2026 in January, and the hardware makes a reasonable case for the category. At 38.5 grams, with a TR90 frame and titanium alloy hinges, they sit closer to a regular pair of prescription glasses than anything resembling a prototype. The frame takes prescription lenses directly through a fitting service starting at $79, with photochromic options in over 200 colors that darken within 25 seconds.

Powering the AI and imaging workload is a dual-chip setup: an NXP RT600 handles always-on, low-power tasks, while a Qualcomm AR1 manages heavier processing. The same Qualcomm chip is in Meta’s Ray-Ban glasses, though the battery life here runs to 12 hours, noticeably longer than Meta’s. A 12MP Sony-sensor camera sits at the bridge, capturing 4K stills and 3K 30fps video with up to 10 minutes of continuous recording. A privacy indicator light signals to people nearby when the camera is active.

Audio comes through directional AAC speakers built into the temples, focused toward the ears with minimal bleed. The AI interaction itself works through a two-finger tap to summon any of the four models, head gestures for call management, and voice prompts in 12 supported languages. Real-time translation, navigation, photo recognition, and AI-generated meeting summaries are all part of the feature set, fed through whichever model the user has selected.

For anyone already oriented around a specific AI assistant, the practical appeal is straightforward. Someone in Google’s ecosystem gets Gemini in their glasses without compromise; someone who prefers ChatGPT for writing picks that instead. At $299 to start, with a lens fitting service folding in prescription and photochromic options, the Style has cleared 15,000 units sold ahead of its formal global rollout, which is a reasonable early signal for a category still working out what it wants to be.

The post Rokid’s Smart Glasses Let You Pick Your AI: Gemini or ChatGPT first appeared on Yanko Design.

Meta better be worried. Qwen’s affordable AI Smart Glasses have cameras, speakers, and even a built-in display

It was one of the more audacious moves at MWC 2026. Right across the aisle from Meta’s smart glasses booth at Fira Gran Via, Alibaba’s Qwen pavilion was anchored by a pair of glasses so oversized they were practically architecture, a giant sculptural prop that functioned as a very literal invitation to come over and look. People did. And once they got close enough to see the actual products, the conversation shifted fairly quickly from “interesting marketing stunt” to “wait, what exactly is this?”

What they found were two frame styles that could sit in any optician’s window without raising an eyebrow. A rectangular wayfarer in matte black, clean and understated. A rounded frame in warm tortoiseshell with a two-tone contrast that leans vintage without being self-conscious about it. Both carry the “Qwen” wordmark on the temple, small and unobtrusive. Both have cameras tucked discreetly at the hinge corners rather than mounted on the bridge. And inside the lenses, visible only when you look closely, is the faint shimmer of a waveguide display.

Designer: Qwen

That last detail is where the competitive context gets genuinely interesting. The smart glasses market in 2026 has essentially sorted itself into two camps. On one side, you have camera-and-speakers devices like the mainstream Ray-Ban Metas, starting around $299, which have been wildly successful because they figured out that looking normal matters more than most features. On the other, you have display-first devices like the Even Realities G1 and G2, which sit at $599 and offer binocular waveguide displays, but sacrifice the camera entirely and strip out the speakers to keep weight down to a remarkable 36 grams. Meta entered the premium display tier late last year with the $799 Ray-Ban Display, a full-color waveguide in one eye, a 12MP camera, and open-ear audio. It’s a compelling package, but $799 is a significant ask for a first-generation product in a category most consumers are still on the fence about.

The Qwen glasses, if they land close to the pricing of Alibaba’s previous Quark AI Glasses at around $277, would be threading an entirely different needle. Camera, display, on-device AI, and a frame design that competes aesthetically with anything in this space, all at a price that undercuts the Even G2 by more than half and the Meta Display by almost two-thirds. On paper, that’s a serious value proposition. The technology powering it is a lightened version of Qwen 3.5, running directly on the device rather than offloading everything to the cloud, which matters both for latency and for use cases where connectivity is limited.

The honest caveat is the brand itself, and it’s worth sitting with. Qwen is well regarded within AI research circles, particularly since Alibaba open-sourced much of the model family and developers worldwide have built on it. But Qwen as a consumer product, as something you’d buy at a store or recommend to a friend in Europe or North America, carries essentially zero name recognition. The app ecosystem that Alibaba plans to migrate onto the glasses, things like food delivery and ride-hailing integrations, is deeply rooted in China’s domestic services infrastructure and doesn’t translate directly to international markets without significant rework. Meta spent years building the Ray-Ban brand before it put a chip inside the frame. Alibaba is trying to build hardware credibility and software trust simultaneously, in markets where it starts from a cold position.

None of that makes the product less interesting. The Qwen glasses are arguably the first device in this category to arrive with a camera, a waveguide display, on-device AI, and a design that doesn’t require the wearer to make aesthetic compromises, all at a price that could realistically attract mainstream buyers rather than just enthusiasts. With North America and Western Europe commanding the vast majority of global smart glasses demand, Alibaba is clearly going after the big markets, and the product is credible enough to deserve a proper hearing there. The harder work, convincing people in those markets to trust a brand they have never heard of with a face-worn AI device that has cameras and a display, is the challenge that no amount of giant sculpture at a trade show can solve on its own.

What MWC established is that the hardware is real, the ambition is real, and the timing is deliberate. Alibaba confirmed that AI earbuds and a smart ring are coming later this year under the same Qwen brand, building out a wearable ecosystem that mirrors the strategy Meta has been executing for several years. The glasses are the opening argument. Whether the rest of the world ends up listening is the part that plays out over the next twelve months.


PlayStation XR Glasses Concept Makes a Strong Case for Gaming-Focused AR Wearables

Meta talks about XR glasses as companions for your social life. Snap a photo, answer a call, ask an AI what you are looking at. The PlayStation XR Glasses concept spins that idea toward a different center of gravity. Here, the glasses are not about broadcasting your world. They are about pulling the PlayStation universe closer, shrinking the distance between you, your console, and the screen that usually sits across the room.

Here, XR is not a spectacle. It is a subtle layer that folds into your existing PlayStation life. Imagine a virtual screen hovering above your TV stand, system notifications floating at the edge of your vision, a familiar PS logo resting by your temple like the Start button you have pressed a thousand times. The fantasy is not about replacing your PS5, but about letting its world follow you from couch to desk to bed, quietly, through something that looks like ordinary eyewear.

Designer: Shirish Kumar

The frames carry the same visual language as the PS5 and DualSense controller, all smooth curves and deliberate angles that look cohesive sitting next to your console. That blue accent lighting running along the temples is pure PlayStation branding, the kind of detail that works because it feels earned rather than slapped on. The folding hinge reveals those iconic button symbols when you open the arms, a nice touch that reinforces that you are holding a gaming device that happens to look like eyewear. Whether Sony’s actual industrial design team would ever build something this sleek is another question entirely, but as a design exercise, it holds together.

The spec sheet lists a front-facing camera tucked under the lenses for object tracking and AR overlays, auto-adjusting lenses that darken outdoors and clear indoors, embedded sensors for a heads-up display, and gesture controls for navigation. The PS logo on the temple supposedly works like a button, tap for Start and hold for Home, mirroring your muscle memory from the controller. All of that sounds good on paper. The real question is what you actually do with these once they are on your face. Existing PlayStation games would almost certainly run as a virtual screen floating in your field of view, basically a private monitor you wear instead of stare at. True AR gameplay, where Aloy from Horizon dodges around your coffee table, requires games built specifically for that, and Kumar does not show or describe any of those experiences.

What this concept does well is stake out a different philosophy for XR glasses. Where Meta wants social connectivity and Apple is aiming for spatial computing as a productivity play, this imagines gaming-first hardware that extends an existing ecosystem rather than trying to create a new one. Whether that is enough to justify another screen in your life is the question every XR device has to answer eventually. For now, it is a polished look at what Sony could build if they decided lightweight AR glasses were the next logical step after VR headsets and portable screens.


Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It

Mark Zuckerberg changed his company’s name to Meta in October 2021 because he believed the future was virtual. Not just sort-of virtual, like Instagram filters or Zoom calls, but capital-V Virtual: immersive 3D worlds where you’d work, socialize, and live a parallel digital life through a VR headset. Four years and roughly $70 billion in cumulative Reality Labs losses later, Meta is quietly dismantling that vision. In January 2026, the company laid off around 1,500 people from its metaverse division, shut down multiple VR game studios, killed its VR meeting app Workrooms, and effectively admitted that the grand bet on virtual reality had failed. Investors barely blinked. The stock went up.

The official line now is that Meta is pivoting to AI and wearables. Zuckerberg spent much of 2025 building what he calls a “superintelligence” lab, hiring top-tier AI talent with eye-watering compensation packages that are now one of the largest drivers of Meta’s 2026 expense growth. The company released Llama models that benchmark decently against OpenAI and Google, embedded chatbots into WhatsApp and Instagram, and talks constantly about “AI agents” and “new media formats.” But from a product and profit perspective, Meta’s AI strategy looks suspiciously like its metaverse strategy: lots of spending, vague promises, and no breakout consumer experience that people actually love. Meanwhile, the thing that is quietly working, the thing people are buying and using in the real world, is a pair of $300 smart glasses that Meta barely talks about. If this sounds like a pattern, that’s because it is. Meta has now misread the future twice in a row, and both times the answer was hiding in plain sight.

The Metaverse Was a $70 Billion Fantasy

Reality Labs has been hemorrhaging money since late 2020. As of early 2026, cumulative operating losses sit somewhere between $70 and $80 billion, depending on how you slice the quarters. In the third quarter of 2025 alone, Reality Labs posted a $4.4 billion loss on $470 million in revenue. For 2025 as a whole, the division lost more than $19 billion. These are not rounding errors or R&D investments that will pay off next year. These are structural losses tied to a product category, VR headsets and metaverse platforms, that the market simply does not want at the scale Meta imagined.

The vision sounded compelling in a keynote. You would strap on a Quest headset, meet your coworkers in a virtual conference room with floating whiteboards, then hop over to Horizon Worlds to hang out with friends as legless avatars. The problem was that almost no one wanted to do any of that for more than a demo. VR remained a niche gaming platform with occasional fitness and entertainment use cases, not the next paradigm shift in human interaction. Zuckerberg kept insisting the breakthrough was just around the corner. He was wrong, and the January 2026 layoffs and studio closures were the formal acknowledgment that Reality Labs as originally conceived was dead.

The irony is that Meta actually had a potential killer app inside Reality Labs, and it murdered it. Supernatural, the VR fitness game whose developer, Within, Meta acquired in a deal valued around $400 million that closed in 2023, was one of the few pieces of Quest software that generated genuine user loyalty and recurring revenue. People who used Supernatural regularly described it as the most effective home workout they had ever done, combining rhythm-based gameplay with full-body movement in a way that treadmills and Peloton bikes could not replicate. It had a subscription model, a dedicated community, and real retention. In January 2026, Meta moved Supernatural into “maintenance mode,” which is corporate speak for “we fired almost everyone and it will get no new content.” If you are trying to prove that VR has mainstream utility beyond gaming, fitness is one of the most obvious wedges. Meta had that wedge, and it chose to kill it in the same round of cuts that shuttered studios working on Batman VR games and other prestige titles. The message was clear: Zuckerberg had lost interest in Quest, even the parts that worked.

The AI Bet That Looks Like the ‘Metaverse Bust’ 2.0

After spending years insisting the future was virtual worlds, Meta pivoted hard to AI in 2023 and 2024. Zuckerberg now talks about AI the way he used to talk about the metaverse: with sweeping language about paradigm shifts and transformative platforms. The company stood up an AI division focused on building what it calls “superintelligence,” hired aggressively from OpenAI and Anthropic, and made technical talent compensation the second-largest contributor to Meta’s 2026 expense growth behind infrastructure. This is not a side project. Meta is spending billions on AI research, training, and deployment, and Zuckerberg expects losses to remain near 2025 levels in 2026 before they start to taper.

From a technical standpoint, Meta’s AI work is solid. The Llama family of models is legitimately competitive with GPT-4 class systems and has found real adoption among developers who want open-source alternatives to OpenAI and Google. Meta’s internal AI is also driving real business value in ad targeting, content ranking, and moderation. Those systems work, and they contribute directly to Meta’s core revenue. But from a consumer product perspective, Meta’s AI feels scattered and often unnecessary. The company has embedded “Meta AI” chatbots into WhatsApp, Instagram, Messenger, and Facebook, none of which feel like natural places for a chatbot. Instagram’s feed is increasingly stuffed with AI-generated images and engagement bait that users actively complain about. Meta has launched character-based AI bots tied to influencers and celebrities, and approximately no one uses them. The gap between “we have impressive models” and “we have a product people love” is enormous, and it is the exact same gap that sank the metaverse.

What Meta is missing, again, is product intuition. OpenAI built ChatGPT and made it feel like the future because the interface was simple, the use cases were obvious, and it delivered consistent value. Google integrated Gemini into Search and productivity tools where users were already working. Meta, by contrast, seems to be throwing AI at every surface it controls and hoping something sticks. Zuckerberg talks about “an explosion of new media formats” and “more interactive feeds,” which in practice means more algorithmic slop and fewer posts from people you actually know. Analysts are starting to notice. One Bernstein note from early 2026 argued that the “winner” criteria in AI is shifting from model quality to product usage, which is a polite way of saying that having a great model does not matter if your product is annoying. Meta has a great model. Its products are annoying.

The financial picture is also murkier than Meta would like to admit. Reality Labs is still losing close to $20 billion a year, and while AI is not a separate reporting segment, the talent and infrastructure costs are clearly rising. Meta’s overall revenue growth is strong, driven by advertising, but the company is not yet showing a clear path to AI profitability outside of ad optimization. That puts Meta in the awkward position of having pivoted from one unprofitable moonshot (metaverse) to another potentially unprofitable moonshot (consumer AI products) while the actual profitable parts of the business, social ads and engagement, keep the lights on. This is a pattern, and it is not a good one.

The Smart Glasses Lead That Meta Is Poised to Lose

Meta talks about the Ray-Ban smart glasses constantly. Zuckerberg calls them the “ultimate incarnation” of the company’s AI vision, and the pitch is relentless: sales more than tripled in 2025, the glasses represent the future of ambient computing, this is the post-smartphone platform. The problem is not that Meta is ignoring the glasses. The problem is that Meta is about to squander a massive early lead, and the competition is closing in fast. 2026 is shaping up to be a blockbuster year for smart glasses. Samsung confirmed its AR glasses are launching this year. Google is releasing its first pair of smart glasses since 2013, an audio-only pair similar to the Ray-Ban Meta glasses. Apple is reportedly pursuing its own smart glasses and shelved plans for a cheaper Vision Pro to prioritize the project. Meta dominated VR because it was early, cheap, and had no real competition. In smart glasses, that window is closing fast, and the field is getting crowded with all kinds of names, from smaller players like Looktech and Xgimi’s MemoMind to mid-sized brands like Xreal, to even larger ones like Google, TCL, and Xiaomi.

The Ray-Ban Meta glasses work because they are simple and focused. They take photos and videos, play music, make calls, and provide real-time answers through an AI assistant. Parents use them to record their kids hands-free. Travelers use them for translation. The form factor, actual Ray-Ban Wayfarers that cost around $300, means they do not scream “I am wearing a computer on my face.” This is the rare Meta hardware product that feels intuitive rather than forced, and it is selling because it solves boring, everyday problems without requiring users to change their behavior.

Then Meta made a critical mistake. To use the glasses, you have to route everything through the Meta AI app, which means you cannot get the most out of the hardware without engaging with Meta’s AI-slop ecosystem. Want to access your photos? Meta AI. Want to tweak settings? Meta AI. The app is the mandatory gateway, and it is stuffed with the same kind of algorithmic recommendations and AI-generated suggestions that clutter Instagram and Facebook. Instead of letting the glasses be a clean, utilitarian tool, Meta is using them as another vector to push its AI products. Google and Samsung are not going to make that mistake. Their glasses will integrate with Android XR and existing ecosystems without forcing users into a single AI app. Apple, if and when it launches, will almost certainly take a similar approach: clean hardware, seamless OS integration, optional AI features. Meta had a head start, Ray-Ban branding, and a product people actually liked. It is on track to waste all of that by prioritizing AI evangelism over product discipline, and the competition is going to eat its lunch.

What Happens When You Chase Narratives Instead of Products

The pattern across metaverse and AI is that Meta keeps betting on big, abstract visions rather than iterating on the things that work. Zuckerberg is a narrative-driven founder. He wants to define the future, not respond to it. That impulse gave us Facebook in 2004, when no one else saw the potential of real-identity social networks, but it has led Meta astray repeatedly in the 2020s. The metaverse was a narrative, not a product. The idea that billions of people would strap on headsets to work and socialize in 3D was always more science fiction than product roadmap, but Zuckerberg committed so hard to it that he renamed the company.

AI feels like the same mistake. The narrative is that foundation models and “agents” will transform every part of computing, and Meta wants to be seen as a leader in that transformation. The actual products, chatbots in WhatsApp and AI-generated feed content, do not meaningfully improve the user experience and in many cases make it worse. Meanwhile, the thing that is working, smart glasses, does not fit cleanly into the AI or metaverse narrative, so it gets less attention and investment than it deserves. Meta’s 2026 strategy, “shifting investment from metaverse to wearables,” is a tacit admission of this, but it is couched in language that still emphasizes AI rather than the hardware itself.

The other pattern is that Meta is willing to kill its own successes if they do not fit the broader narrative. Supernatural, Meta’s hit VR fitness game, was working. It had subscribers, retention, and cultural momentum within the VR fitness community. It was also a relatively small, specific product rather than a platform play, and that made it expendable when Meta decided to scale back Reality Labs. The same logic applies to Quest more broadly. The headset had carved out a niche in gaming and fitness, and with sustained investment in content and ecosystem development, it could have grown into a meaningful adjacent business. Instead, Meta is deprioritizing it because Zuckerberg has decided the future is AI and lightweight wearables. That might turn out to be correct, but the way Meta is executing the pivot, by shuttering studios and putting products in maintenance mode rather than spinning them out or finding partners, suggests a lack of product discipline.

Why Smart Glasses Might Actually Be the Next Facebook

If you step back and ask what Meta is actually good at, the answer is not virtual reality or language models. Meta is good at building social products with massive scale, capturing and distributing content, and monetizing attention through ads. The Ray-Ban Meta glasses fit all of those strengths. They make it easier to capture photos and video, which feeds into Instagram and Facebook. They use AI to provide contextual information, which ties into Meta’s model development. And they are a physical product that people wear in public, which is a form of distribution and branding that Meta has never had before.

The bigger story is that smart glasses as a category are exploding, and Meta happened to be early. It is not just Samsung, Google, and Apple entering the space. Meta itself is expanding the Ray-Ban line with the Ray-Ban Display (which adds a heads-up display) and partnering with Oakley on the HSTN, a sportier model aimed at action sports. Google is teaming up with Warby Parker for its glasses, which gives it instant credibility in eyewear design. And then there are the startups: Even Realities, Xiaomi, Looktech, MemoMind, and dozens more, all slated for 2026 releases. This feels exactly like the moment AirPods sparked the true wireless earbud movement. Apple defined the format, then everyone from Samsung to Sony to no-name brands flooded the market, and now you can buy HMD ANC earbuds for $28. Smart glasses are following the same trajectory, which means the form factor itself is validated, and Meta’s early lead matters less than whether it can keep iterating faster than everyone else.

The other underrated piece is that having an instant camera on your face is genuinely useful in ways that VR headsets never were. People are using Ray-Ban Meta glasses as GoPro alternatives while skateboarding, cycling, and doing action sports, because POV capture without holding a phone or mounting a camera is frictionless. Content creators are using them to shoot hands-free B-roll at events like CES. Parents are using them to record their kids playing without the weird “I am holding my phone up at the playground” vibe. Pet owners are capturing spontaneous moments with dogs and cats that would be impossible to get with a phone. These are not sci-fi use cases or metaverse fantasies. They are boring, real-world problems that the glasses solve immediately, and that is why they are selling. Meta has spent a decade chasing grand visions of the future, and it accidentally built a product that people want right now. The challenge is whether it can resist the urge to over-complicate it before Google, Samsung, and Apple catch up.

The Real Lesson Is About Focus

Meta has spent the last five years oscillating between grand visions, metaverse and AI, and neglecting the products that actually work. The Ray-Ban Meta glasses are proof that when Meta focuses on solving real problems with tangible products, it can still build things people want. The metaverse failed because it was a solution in search of a problem, and the AI push is struggling because Meta is shipping features rather than products. Smart glasses, by contrast, are succeeding because they make everyday tasks easier without requiring users to change their behavior or buy into a futuristic narrative.

If Zuckerberg can internalize that lesson, Meta might actually have a shot at owning the next platform. But that requires a level of product discipline and restraint that Meta has not shown in years. It means resisting the urge to turn every product into a platform, admitting when a bet has failed rather than pouring another $10 billion into it, and focusing on iteration over narration. The irony is that Meta already has the right product. It just needs to stop looking past it.


Leion Hey2 Brings First AR Glasses Built for Translation to CES 2026

Cross-language conversations create a familiar kind of friction. You hold a phone over menus, miss half a sentence while an app catches up, or watch a partner speak fast in a meeting while your translation lags behind. Even people who travel or work globally still juggle apps, hand-held translators, and guesswork just to keep up with what is being said in the room, which pulls attention away from the actual conversation.

Leion Hey2 is translation that lives where your eyes already are, in a pair of glasses that quietly turns speech into subtitles without asking you to look down or pass a device back and forth. The glasses were built for translation first, not as an afterthought on top of entertainment or social features, and they are meant to last through full days of meetings or classes instead of dying halfway through, when you need them most.

Designer: LLVision


Glasses That Care About Conversation, Not Spectacle

Leion Hey2 is a pair of professional AR translation glasses from LLVision, a company that has spent more than a decade deploying AR and AI in industrial and public-sector settings. Hey2 is not trying to be an all-in-one headset; it is engineered from the ground up for real-time translation and captioning, supporting more than 100 languages and dialects with bidirectional translation and latency under 500 ms in typical conditions, plus 6–8 hours of continuous translation on a single charge.

Hey2 is designed to wear like everyday eyewear rather than a gadget. The classic browline frame, 49g weight, magnesium-lithium alloy structure, and adjustable titanium nose pads are all chosen to make it feel like a normal pair of glasses you forget you are wearing. A stepless spring hinge adapts to different faces, and the camera-free, microphone-only design, which follows GDPR-aligned privacy principles and is supported by a secure cloud infrastructure built on Microsoft Azure, helps keep both wearers and bystanders more comfortable in sensitive environments.

Subtitles in Your Line of Sight

Hey2 uses waveguide optics and a micro-LED engine to project crisp, green subtitles into both eyes, with a 25-degree field of view and more than 90% passthrough so the real world stays bright. The optical engine is tuned to reduce rainbow artifacts by up to 98%, keeping text stable and readable in different lighting conditions, while three levels of subtitle size and position let you decide how prominently captions sit in your forward field of view.

The audio side relies on a four-microphone array that performs 360-degree spatial detection to identify who is speaking, while face-to-face directional pickup prioritizes the person within roughly a 60-degree cone in front of you. A neural noise-reduction algorithm uses beamforming and multi-channel processing to isolate the main voice, which helps in noisy restaurants, busy trade-show floors, or classrooms where questions come from different directions, without forcing you to constantly adjust settings.

Modes That Support Work, Learning, and Accessibility

In translation and Free Talk modes, foreign speech is converted into your language as subtitles in your line of sight, so you can mix languages freely and still follow long-form speech without constantly checking a screen. In Free Talk, Hey2 provides subtitles for what you hear and spoken translation for what you say, turning a two-language conversation into something that feels more like a normal chat than a tech demo, with the charging case extending total use to 96 hours across 12 recharges.

Teleprompter mode scrolls your script in your line of sight and advances it automatically as you speak, useful for lectures, pitches, or keynotes where you want to keep eye contact without glancing at notes. AI Q&A, triggered by a temple tap, taps into ChatGPT-powered answers for discreet look-ups, while Captions mode turns fast speech into clean text, helping students, professionals, and Deaf or hard-of-hearing users stay on top of what is being said, even in noisy environments where handheld devices struggle.

A Different Kind of AR Story

When Leion Hey2 steps onto the CES 2026 stage, it represents a quieter kind of AR story. Instead of chasing spectacle, it narrows the brief to something very human, helping people speak, listen, and be understood across languages and hearing abilities. For a show that often celebrates what technology can do, Hey2 is a reminder that sometimes the most interesting innovation is the one that simply lets you keep your head up and stay in the conversation.


The post Leion Hey2 Brings First AR Glasses Built for Translation to CES 2026 first appeared on Yanko Design.

Even Realities G2 Just Solved the Biggest Problem With Smart Glasses… Using A Ring

Even Realities launched their first smart glasses last year with a pitch that felt almost countercultural: what if your eyewear didn’t record everything around you, didn’t pipe audio into your ears, and didn’t make everyone nearby wonder if you were filming them? Instead of packing their frames with cameras and speakers, they focused on a single function: a clean, effective heads-up display. The G1 glasses were a minimalist take on wearables, offering monochrome green text in your line of sight for notifications and AI assistance, all without the privacy concerns of outward-facing cameras. This focused approach found its niche, landing the G1 in 350 luxury eyewear shops globally and proving there’s a real appetite for smart glasses that prioritize subtlety and practical assistance.

The G2 glasses themselves improve on last year’s G1 in predictable but welcome ways. Bigger display, better optics, lighter frame, longer battery life. They still avoid cameras and speakers entirely, sticking with Even’s “Quiet Tech” philosophy of providing information without creating privacy concerns. But pair them with the new R1 ring and you get something more interesting than incremental hardware improvements. The ring lets you control the glasses with thumb gestures against your index finger, turning navigation into something closer to using a trackpad than fumbling with voice commands or head taps. Whether that’s actually more natural in practice than the alternatives depends partly on how well the gesture recognition works and partly on whether you’re the kind of person who wants to wear a ring in the first place.

Designer: Even Realities

The display improvements are significant enough to matter in daily use. Even calls their new system HAO 2.0, which stands for Holistic Adaptive Optics, and the practical result is that information appears in layers rather than as flat text plastered across your vision. Quick notifications and AI prompts sit closer in your field of view, while longer content like navigation directions or notes recedes slightly into the background. It’s still monochrome green, the same matrix-style aesthetic from the G1, but sharper and easier to read in motion or bright light. The frame itself weighs just 36 grams and carries an IP67 rating for water and dust resistance, so you can wear them in the rain without worrying about killing a $599 investment. Battery life stretches past two days now, and the prescription range goes from -12 to +12, covering most people who need corrective lenses.

What made the G1 frustrating for some users was the interaction model. You could talk to the glasses, but that meant either looking weird in public or finding a quiet spot. You could tap the touch-sensitive nubs on the temples, but they were finicky and required you to constantly reach up to your face. While the G2 improves the reliability of those touchpads significantly, Even Realities’ R1 smart ring practically revolutionizes how you interact with the smart display. Worn on your index finger, the ring lets you swipe up and down with your thumb or tap to select options, essentially turning your hand into a trackpad for your face. The ring is made from zirconia ceramic and stainless steel, costs $249 separately, and connects to the glasses through what Even calls their TriSync ecosystem, linking the glasses, ring, and phone into one synchronized unit.

The gesture controls take some getting used to, based on early reviews. Accidental swipes are common at first, and the learning curve means you might fumble through menus for the first few days. But when it works smoothly, navigating with the ring is more subtle than any of the alternatives. You can check a notification, dismiss it, and move on without anyone noticing you’ve interacted with your glasses at all. That subtlety matters more than it sounds like it would, especially if you’re using features like the built-in teleprompter for presentations or the real-time translation during conversations. The glasses still support the old interaction methods too, so you’re not locked into one way of controlling them.

The AI side of things has been upgraded as well, with Even introducing what they call the Conversate assistant. It handles the usual smart glasses tasks like showing notifications, reading messages, and providing contextual information, but it’s designed to be less intrusive about it. You talk to it and get text responses on the display rather than audio, which keeps conversations private and avoids the awkwardness of having your glasses talk back to you in a quiet room. The system pulls from your phone’s connectivity, so there’s no separate data plan or complex setup required. The AI integration feels thoughtful rather than forced, providing information when you need it without constantly demanding attention.

One detail worth noting: the R1 ring is not compatible with the original G1 glasses. If you bought the first generation and want the ring’s functionality, you’ll need to upgrade to the G2 entirely. Even is offering a launch promotion where buying the G2 gets you the ring and other accessories at 50 percent off, which brings the combined price to $724 instead of $848. For context, Meta’s Ray-Ban smart glasses with their Neural Band controller and full-color display cost $799, though those come with cameras and all the privacy considerations that entails. The G2 and R1 combo sits in an interesting middle ground, offering more focused functionality at a similar price point.

The combination of display-only glasses and a gesture-controlled ring represents a particular vision of what smart eyewear could be. It’s not trying to replace your phone or capture every moment of your life. Instead, it extends your phone’s functionality into your field of view while giving you a discreet way to interact with that information. For people who give frequent presentations, the teleprompter feature alone could justify the cost. For travelers, having real-time translation floating in your vision during conversations is genuinely useful. And for anyone tired of constantly pulling out their phone to check notifications, the G2 offers a less disruptive alternative. Even Realities is refining an approach that feels increasingly relevant as smart glasses move from novelty to practical tool, and the G2 with R1 suggests they’re learning the right lessons from their first attempt.

The post Even Realities G2 Just Solved the Biggest Problem With Smart Glasses… Using A Ring first appeared on Yanko Design.

Apple is allegedly working on an Affordable, Consumer-grade Spatial Headset

Apple showed us what a mixed reality headset could be capable of with the debut of the Vision Pro at WWDC in 2023. It had all the bells and whistles expected of an AR and VR headset from Apple, but didn’t find many takers, perhaps because of its steep price tag, or maybe because no one was ready to be ushered into spatial computing just yet.

For me, it was the price, the bulkiness, and the small market for a standalone device in the smart glasses category. Apple seems to have realized as much after significant shortfalls against projected sales, which is why rumors of a more affordable, non-Pro mixed reality headset started doing the rounds.

Designer: Apple

Such a device would be made possible by trimming down the features and functionality of the Vision Pro, but the Cupertino company has thought otherwise, at least for now. According to new reports from Bloomberg’s Mark Gurman, Apple is instead planning a pair of smart glasses targeted at the masses, like Meta’s Ray-Bans, that would fit better into the Apple ecosystem than the Vision Pro or its stripped-down sibling.

The latest information suggests that the budget-friendly Vision model has been postponed until after 2027, while an internal study codenamed “Atlas” is gauging where the company’s own employees stand on the topic of smart glasses. Based on that internal feedback, Apple is considering smart glasses aimed at roughly the same consumer segment that Meta’s Orion augmented reality glasses intend to reach.

The Orion glasses are, for now, a prototype themselves, so comparing two devices that don’t yet exist is of limited use. But the basic idea is that Apple could offer a pair of smart glasses that look like regular glasses, combining a slick design with useful features while letting a connected iPhone handle most of the computing.

At the time of writing, it is not known whether Apple has started building such a product, though feasibility studies are reportedly underway to deliver eyewear that addresses convenience, weight, and battery life. Whichever direction Apple takes with smart glasses, it will likely be a few years before anything reaches the market. If you’re in a hurry, get your hands on the Meta options!

The post Apple is allegedly working on an Affordable, Consumer-grade Spatial Headset first appeared on Yanko Design.

Should you upgrade to the new Ray-Ban Meta Wayfarer Limited-Edition Transparent Model?

Ray-Ban’s Meta Wayfarer glasses have quickly become the intersection of fashion and technology, combining classic style with advanced smart features. Recently, Ray-Ban and Meta unveiled the new Shiny Transparent Wayfarer, featuring exposed internal components and Clear to Sapphire Transitions lenses. While this new model pushes the boundaries of what smart glasses can look like, the big question is: should you upgrade, especially if you already own a pair? Let’s break it down.

Designer: Ray-Ban + Meta

If Money Is No Object, Then Yes—Go for It

If price isn’t a barrier, the decision to upgrade is straightforward. At $429 USD, the Shiny Transparent Wayfarer offers a visually striking design that showcases the internal technology, creating a futuristic look that stands apart from the Matte Black version. The Clear to Sapphire Transitions lenses add another layer of sophistication, adapting to light conditions and giving the glasses a sleek sapphire tint when outdoors. This is an easy yes for those who enjoy staying at the forefront of wearable tech.

If You Want the New Lens Transition, It’s Worth Considering

If your current Ray-Ban Meta Wayfarer comes with standard clear lenses or basic non-adaptive sunglasses, upgrading to the new Transitions lenses could make a big difference in how you use the glasses day-to-day. The Clear to Sapphire Transitions lenses offer a smooth transition between indoor and outdoor settings, making it easier to adapt to different lighting conditions without needing to switch eyewear. When you’re indoors, the lenses remain clear, providing a natural and unobstructed view. However, once you step outside, they automatically darken to a sleek sapphire tint, adding a touch of style and protecting your eyes from harsh sunlight. For anyone who finds themselves frequently moving between environments, this flexibility could be a major convenience.

On the other hand, if you already own a pair with Clear to Green Transitions lenses, the upgrade may not offer enough of a difference to justify the change. Both lenses provide the same adaptive functionality, adjusting to light to enhance your vision while adding a color tint. The real difference lies in the aesthetic—whether you prefer the cooler sapphire tint or the more classic green hue. If you’re satisfied with the current performance and look of your lenses, there may be little reason to make the leap unless the sapphire color truly appeals to you.

If You Want a New Design with Exposed Tech, Then Yes

The most noticeable difference in the new model is the Shiny Transparent frame. This design exposes the inner workings of the glasses, giving them a high-tech look that contrasts with the more traditional Matte Black frame. The transparent frame brings an aesthetic shift, showcasing the cutting-edge technology that powers the glasses in a more visually pronounced way. It’s an intriguing design choice for those who appreciate a bold, futuristic look.

If you’re drawn to a more tech-forward, modern aesthetic, this new design is worth considering. The transparent frame is eye-catching and adds a fresh dimension to the Ray-Ban Meta Wayfarer collection. For those who want their eyewear to make a visual statement, the exposed components are a step forward in wearable tech design. However, if you prefer a more classic and understated look of the Matte Black Wayfarer, you might find that the new frame doesn’t offer enough reason to make the switch.

For Me, It’s a Hard No

For anyone who already owns the Matte Black Wayfarer with Clear to Green Transitions lenses, upgrading to the new Shiny Transparent model may not be necessary. Your current pair offers the same core features—AI-powered assistance, a 12MP camera, open-ear speakers, and a touchpad for easy control. The Clear to Green Transitions lenses provide excellent functionality, and if you’re happy with the design and tech you already have, there’s no pressing need to make the switch.

The Introduction of AI-Powered Features

With the recent updates, Ray-Ban and Meta have significantly improved the AI capabilities of the glasses. Now, you can use voice commands by simply saying “Hey Meta” and follow up with additional commands without repeating the wake word. The glasses can also remember important details like where you parked your car or set reminders for when you land after a flight. The ability to send voice messages via WhatsApp or Messenger while your hands are occupied adds an extra layer of convenience for staying connected on the go.

One of the more impressive AI features is real-time video assistance. Whether you’re exploring a new city or browsing the aisles of a grocery store, Meta AI can offer real-time help by identifying landmarks or suggesting meals based on the ingredients you’re looking at. Additionally, real-time language translation for Spanish, French, and Italian can remove language barriers, and future updates will likely support more languages.

Expanding Partnerships with Major Platforms

The glasses already supported integrations with platforms like Spotify and Amazon Music, and Ray-Ban has expanded these offerings to include Audible and iHeart as well. Now, you can use voice commands to search and play music or audiobooks without touching your phone. This makes the listening experience even more seamless, allowing you to ask questions like “What album is this from?” while on the move. These expanded partnerships deepen the glasses’ role in day-to-day media consumption.

The collaboration with Be My Eyes is another significant step in making the glasses more accessible. This app, designed for individuals who are blind or have low vision, pairs users with sighted volunteers who provide real-time assistance. The glasses’ camera allows the volunteer to see what the wearer sees, enabling them to help with tasks like reading mail or navigating new environments.

Are You Going for It?

Ultimately, the decision to upgrade comes down to personal preference and how much you value the new design and lens options. If money isn’t an issue or you’re drawn to the transparent frame and sapphire lenses, the upgrade makes sense. However, if you’re content with your current Matte Black Wayfarer with Clear to Green Transitions lenses, there’s no pressing reason to switch. The new features and design are exciting, but your existing pair still holds up as a stylish, highly functional piece of wearable tech.

The post Should you upgrade to the new Ray-Ban Meta Wayfarer Limited-Edition Transparent Model? first appeared on Yanko Design.