RayNeo Just Put Batman on $299 AR Glasses (And They’re Brilliant)

At some point between CES announcements and MWC reveals, someone at RayNeo had a genuinely inspired idea. They had built the world’s first AR glasses with HDR10 support, partnered with Bang & Olufsen on the audio, and engineered a display that could hold its own against high-end monitors. The product was technically impressive, competitively priced, and ready to ship. Then they added a Batman mask to it. Not a sticker, not a themed wallpaper, but an actual light-blocking cover that makes you look like you are about to patrol Gotham while watching movies on a 201-inch virtual screen.

This is the Air 4 Pro, unveiled at MWC 2026 in Barcelona, and it represents something rare in the wearables market: a product that takes itself seriously enough to deliver legitimate specs, but not so seriously that it forgets to be fun. The hardware alone would make this newsworthy. The fact that it comes with the option to cosplay as either Batman or the Joker while using it makes it irresistible.

Designer: RayNeo

Start with what matters most: the display. The Air 4 Pro is the world’s first pair of AR glasses with HDR10 display support, which is a genuinely significant leap. Powered by RayNeo’s custom Vision 4000 chip, the display hits 1,200 nits of peak brightness, renders 10.7 billion colors with near-professional color accuracy (ΔE < 2), and runs at a smooth 120Hz refresh rate. We’re talking about a 201-inch virtual screen with a 200,000:1 contrast ratio that sits right in front of your eyes. That is the kind of performance you would expect from a high-end monitor, not from something you’re wearing on your face.

The HDR10 support matters more than it might seem at first. It means that when you’re watching a movie or gaming with these on, the image is not being compressed into mediocrity. The Vision 4000 chip can also upgrade standard SDR content to HDR in real time, and there is an AI algorithm onboard that converts 2D content into 3D. These are not gimmick features. For anyone who has tried AR glasses before and felt vaguely disappointed by the visual output, this is the version that corrects the course.

Audio-wise, RayNeo partnered with Bang & Olufsen on a self-developed sound tube design with a dual opposing acoustic chamber system. The result is reportedly an 80% reduction in sound loss compared to previous models. That is a partnership that immediately signals intent. Bang & Olufsen does not lend its name to anything half-hearted, and the presence of that collaboration here suggests that RayNeo is going after people who care about the full sensory experience, not just the display numbers.

The glasses weigh 76 grams, which is no small achievement given everything packed inside. They include interchangeable nose pads, TÜV SÜD certification for low blue light and flicker-free performance, and a 3,840Hz PWM hybrid dimming system for eye protection. It is the kind of spec sheet that makes the whole category feel increasingly grown-up.

And then there is the Batman Edition. RayNeo unveiled two limited versions at MWC 2026: the Limited Justice Edition, which is the Batman variant, and the Limited Chaos Edition, styled after the Joker. Both come with a light-shield cover that doubles as a cosplay accessory, blocking ambient light to sharpen your viewing experience while also making you look like you are about to interrogate someone in Gotham City. The packaging is loaded with DC-themed details, and buyers get to literally pick a side.

Is this a marketing stunt? Partially, yes. But it is a clever one, because the light-shield cover is functional, not just decorative. It actually solves a real problem AR glasses have always had in bright environments. The fact that it also looks incredible is a bonus that makes this feel less like a product and more like a collectible.

My honest take is that the Batman collaboration is what will get people through the door, but the hardware is what will make them stay. At $299, with an early bird price of $249 through March 28, the Air 4 Pro is not cheap, but it is positioned well against the competition. It works with iPhones, Android flagships, PS5, Nintendo Switch 2, and most modern devices, which removes a lot of the friction that has held wearables back.

RayNeo has clearly done its homework. The Air 4 Pro is not trying to replace your phone or your TV. It is offering a better version of the portable screen experience, and the Batman costume is just the perfect way to announce it.

The post RayNeo Just Put Batman on $299 AR Glasses (And They’re Brilliant) first appeared on Yanko Design.

PlayStation XR Glasses Concept Makes a Strong Case for Gaming-Focused AR Wearables

Meta talks about XR glasses as companions for your social life. Snap a photo, answer a call, ask an AI what you are looking at. The PlayStation XR Glasses concept spins that idea toward a different center of gravity. Here, the glasses are not about broadcasting your world. They are about pulling the PlayStation universe closer, shrinking the distance between you, your console, and the screen that usually sits across the room.

Here, XR is not a spectacle. It is a subtle layer that folds into your existing PlayStation life. Imagine a virtual screen hovering above your TV stand, system notifications floating at the edge of your vision, a familiar PS logo resting by your temple like the Start button you have pressed a thousand times. The fantasy is not about replacing your PS5, but about letting its world follow you from couch to desk to bed, quietly, through something that looks like ordinary eyewear.

Designer: Shirish Kumar

The frames carry the same visual language as the PS5 and DualSense controller, all smooth curves and deliberate angles that look cohesive sitting next to your console. That blue accent lighting running along the temples is pure PlayStation branding, the kind of detail that works because it feels earned rather than slapped on. The folding hinge reveals those iconic button symbols when you open the arms, a nice touch that reinforces the sense that you are holding a gaming device that happens to look like eyewear. Whether Sony’s actual industrial design team would ever build something this sleek is another question entirely, but as a design exercise, it holds together.

There is a front-facing camera tucked under the lenses for object tracking and AR overlays, auto-adjusting lenses that darken outdoors and clear indoors, embedded sensors for a heads-up display, gesture controls for navigation. The PS logo on the temple supposedly works like a button, tap for Start and hold for Home, mirroring your muscle memory from the controller. All of that sounds good on paper. The real question is what you actually do with these once they are on your face. Existing PlayStation games would almost certainly run as a virtual screen floating in your field of view, basically a private monitor you wear instead of stare at. True AR gameplay where Aloy from Horizon is dodging around your coffee table requires games built specifically for that, and Kumar does not show or describe any of those experiences.

What this concept does well is stake out a different philosophy for XR glasses. Where Meta wants social connectivity and Apple is aiming for spatial computing as a productivity play, this imagines gaming-first hardware that extends an existing ecosystem rather than trying to create a new one. Whether that is enough to justify another screen in your life is the question every XR device has to answer eventually. For now, it is a polished look at what Sony could build if they decided lightweight AR glasses were the next logical step after VR headsets and portable screens.

The post PlayStation XR Glasses Concept Makes a Strong Case for Gaming-Focused AR Wearables first appeared on Yanko Design.

Forget CarPlay: Sherpa’s AR Glasses Decode Road Signs and Dashboard Icons For Nervous New Drivers

The AR glasses market keeps promising us augmented productivity and enhanced experiences, then delivering expensive ways to check notifications without pulling out your phone. Sherpa takes a different approach by targeting a specific moment of genuine incompetence: those first few months behind the wheel when every intersection feels like a pop quiz you didn’t study for. The concept uses heads-up displays to overlay directional cues and translate dashboard indicators, theoretically keeping your eyes on the road instead of darting between the windshield and that mysterious warning light.

What makes this Hongik University project interesting isn’t the hardware, which looks like standard-issue smart glasses in white plastic. It’s the learning system built around it. After each drive, the companion app analyzes your performance and identifies patterns in your mistakes. Miss the same type of turn signal three times? The AI notices. Struggle with a particular intersection? It breaks down what went wrong. Most new drivers get feedback in the form of angry horns and passenger-seat panic. This proposes something more useful, assuming you’re willing to let an algorithm critique your lane changes.

Designers: Yeongjun Yun, Jaeyun Lee

The hardware itself follows the current playbook for consumer AR: rounded frames thick enough to house display optics, visible sensor cutouts on the nose bridge (likely cameras for environmental and eye tracking), and an adjustable temple mechanism that looks borrowed from premium eyewear design. They’ve skipped the usual temptation to make it look aggressively futuristic, which matters when your target audience already feels self-conscious about their driving abilities. The cylindrical charging case suggests they’re thinking about daily use patterns rather than occasional deployment, treating this like essential equipment you grab before every drive during those first nervous months.

Where this gets genuinely clever is the integration with what they’re calling SDV, or software-defined vehicles. Modern cars already collect absurd amounts of data through their sensor arrays. Sherpa appears designed to tap into that information stream and translate it into actionable guidance. The system knows when you’ve entered a complex intersection, can read your hesitation through eye tracking, and overlay exactly what you’re supposed to watch for at that moment. Then it remembers that you struggle with this specific scenario and adjusts future guidance accordingly.

Unlike entertainment-focused AR wearables, this actually addresses a real use case, which puts it ahead of most AR glasses the industry keeps trying to shove down our throats. Driving schools teach you mechanics but abandon you at the precise moment when contextual learning would help most. If Sherpa can fill that gap between instruction and competence, it might be the first consumer AR application that people actually need rather than tolerate. Whether novice drivers will adopt glasses that broadcast their inexperience is a different question entirely, but at minimum someone’s finally asking AR to do actual work.

The post Forget CarPlay: Sherpa’s AR Glasses Decode Road Signs and Dashboard Icons For Nervous New Drivers first appeared on Yanko Design.

Meta Quest 3 feature turns any flat surface into functional Surface Keyboard with trackpad

Meta may have discontinued the Quest for Business program intended for the Quest 3, but Horizon OS v85 is set to introduce some niche features to the VR headset, starting with the Navigator UI replacing Horizon Feed as the default. Beyond that, if the new Horizon OS v85 Public Test Channel (PTC) is any indication, the headset will be able to turn any flat surface – a table or a desk – into a virtual keyboard you can type on much like a physical one.

The PTC for Horizon OS v85 has started rolling out, and initial YouTube hands-on videos and forum discussions reveal the virtual keyboard is available as an experimental feature exclusively on the Quest 3. The Quest 3S appears to have been left out (the reason is not apparent at the time of writing). On supported hardware, the keyboard seems to appear on a table as if by magic, complete with a trackpad.

Designer: Meta

The feature is called Surface Keyboard, and it overlays a keyboard on top of any surface you want. With a tap on the handheld controllers, you can switch between the virtual keyboard and the controllers seamlessly. If mixed reality and hand tracking have always excited you, v85 of the operating system is going to take that experience on the Quest 3 to a new level.

To truly live this fiction – where you place your hands on a table for a couple of seconds, a keyboard appears out of nowhere right under them, and you can start typing, no buttons, no configuration, just your hands and a virtual keyboard – you will need to opt in to the Horizon OS PTC and receive the pre-release build to tinker with.

If we remember correctly, Meta has been working on a virtual keyboard of this kind for the better part of a decade. In fact, it was in 2023 that Mark Zuckerberg demoed one and claimed he could reach 100 words per minute. Going by the videos and reviews floating around online, the keyboard will take some getting used to. That said, the setup is easy and straightforward.

Once you have opted in to the PTC, go to Movement Tracking and enable hand and body tracking, along with the double-tap-controllers gesture (so you can switch between controllers and the keyboard). Next, enable Surface Keyboard under the Experimental (and admittedly unstable) section. Finally, go to Devices, select Keyboard, and run the setup. Place your hands flat on a surface, and in seconds, a keyboard will appear where your hands are.


The post Meta Quest 3 feature turns any flat surface into functional Surface Keyboard with trackpad first appeared on Yanko Design.

Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It

Mark Zuckerberg changed his company’s name to Meta in October 2021 because he believed the future was virtual. Not just sort-of virtual, like Instagram filters or Zoom calls, but capital-V Virtual: immersive 3D worlds where you’d work, socialize, and live a parallel digital life through a VR headset. Four years and roughly $70 billion in cumulative Reality Labs losses later, Meta is quietly dismantling that vision. In January 2026, the company laid off around 1,500 people from its metaverse division, shut down multiple VR game studios, killed its VR meeting app Workrooms, and effectively admitted that the grand bet on virtual reality had failed. Investors barely blinked. The stock went up.

The official line now is that Meta is pivoting to AI and wearables. Zuckerberg spent much of 2025 building what he calls a “superintelligence” lab, hiring top-tier AI talent with eye-watering compensation packages that are now one of the largest drivers of Meta’s 2026 expense growth. The company released Llama models that benchmark decently against OpenAI and Google, embedded chatbots into WhatsApp and Instagram, and talks constantly about “AI agents” and “new media formats.” But from a product and profit perspective, Meta’s AI strategy looks suspiciously like its metaverse strategy: lots of spending, vague promises, and no breakout consumer experience that people actually love. Meanwhile, the thing that is quietly working, the thing people are buying and using in the real world, is a pair of $300 smart glasses that Meta barely talks about. If this sounds like a pattern, that’s because it is. Meta has now misread the future twice in a row, and both times the answer was hiding in plain sight.

The Metaverse Was a $70 Billion Fantasy

Reality Labs has been hemorrhaging money since late 2020. As of early 2026, cumulative operating losses sit somewhere between $70 and $80 billion, depending on how you slice the quarters. In the third quarter of 2025 alone, Reality Labs posted a $4.4 billion loss on $470 million in revenue. For 2025 as a whole, the division lost more than $19 billion. These are not rounding errors or R&D investments that will pay off next year. These are structural losses tied to a product category, VR headsets and metaverse platforms, that the market simply does not want at the scale Meta imagined.

The vision sounded compelling in a keynote. You would strap on a Quest headset, meet your coworkers in a virtual conference room with floating whiteboards, then hop over to Horizon Worlds to hang out with friends as legless avatars. The problem was that almost no one wanted to do any of that for more than a demo. VR remained a niche gaming platform with occasional fitness and entertainment use cases, not the next paradigm shift in human interaction. Zuckerberg kept insisting the breakthrough was just around the corner. He was wrong, and the January 2026 layoffs and studio closures were the formal acknowledgment that Reality Labs as originally conceived was dead.

The irony is that Meta actually had a potential killer app inside Reality Labs, and it murdered it. Supernatural, a VR fitness game that Meta acquired for $400 million in 2023, was one of the few pieces of Quest software that generated genuine user loyalty and recurring revenue. People who used Supernatural regularly described it as the most effective home workout they had ever done, combining rhythm-based gameplay with full-body movement in a way that treadmills and Peloton bikes could not replicate. It had a subscription model, a dedicated community, and real retention. In January 2026, Meta moved Supernatural into “maintenance mode,” which is corporate speak for “we fired almost everyone and it will get no new content.” If you are trying to prove that VR has mainstream utility beyond gaming, fitness is one of the most obvious wedges. Meta had that wedge, and it chose to kill it in the same round of cuts that shuttered studios working on Batman VR games and other prestige titles. The message was clear: Zuckerberg had lost interest in Quest, even the parts that worked.

The AI Bet That Looks Like the ‘Metaverse Bust’ 2.0

After spending years insisting the future was virtual worlds, Meta pivoted hard to AI in 2023 and 2024. Zuckerberg now talks about AI the way he used to talk about the metaverse: with sweeping language about paradigm shifts and transformative platforms. The company stood up an AI division focused on building what it calls “superintelligence,” hired aggressively from OpenAI and Anthropic, and made technical talent compensation the second-largest contributor to Meta’s 2026 expense growth behind infrastructure. This is not a side project. Meta is spending billions on AI research, training, and deployment, and Zuckerberg expects losses to remain near 2025 levels in 2026 before they start to taper.

From a technical standpoint, Meta’s AI work is solid. The Llama family of models is legitimately competitive with GPT-4 class systems and has found real adoption among developers who want open-source alternatives to OpenAI and Google. Meta’s internal AI is also driving real business value in ad targeting, content ranking, and moderation. Those systems work, and they contribute directly to Meta’s core revenue. But from a consumer product perspective, Meta’s AI feels scattered and often unnecessary. The company has embedded “Meta AI” chatbots into WhatsApp, Instagram, Messenger, and Facebook, none of which feel like natural places for a chatbot. Instagram’s feed is increasingly stuffed with AI-generated images and engagement bait that users actively complain about. Meta has launched character-based AI bots tied to influencers and celebrities, and approximately no one uses them. The gap between “we have impressive models” and “we have a product people love” is enormous, and it is the exact same gap that sank the metaverse.

What Meta is missing, again, is product intuition. OpenAI built ChatGPT and made it feel like the future because the interface was simple, the use cases were obvious, and it delivered consistent value. Google integrated Gemini into Search and productivity tools where users were already working. Meta, by contrast, seems to be throwing AI at every surface it controls and hoping something sticks. Zuckerberg talks about “an explosion of new media formats” and “more interactive feeds,” which in practice means more algorithmic slop and fewer posts from people you actually know. Analysts are starting to notice. One Bernstein note from early 2026 argued that the “winner” criteria in AI is shifting from model quality to product usage, which is a polite way of saying that having a great model does not matter if your product is annoying. Meta has a great model. Its products are annoying.

The financial picture is also murkier than Meta would like to admit. Reality Labs is still losing close to $20 billion a year, and while AI is not a separate reporting segment, the talent and infrastructure costs are clearly rising. Meta’s overall revenue growth is strong, driven by advertising, but the company is not yet showing a clear path to AI profitability outside of ‘ad optimization’. That puts Meta in the awkward position of having pivoted from one unprofitable moonshot (metaverse) to another potentially unprofitable moonshot (consumer AI products) while the actual profitable parts of the business, social ads and engagement, keep the lights on. This is a pattern, and it is not a good one.

The Smart Glasses Lead That Meta Is Poised to Lose

Meta talks about the Ray-Ban smart glasses constantly. Zuckerberg calls them the “ultimate incarnation” of the company’s AI vision, and the pitch is relentless: sales more than tripled in 2025, the glasses represent the future of ambient computing, this is the post-smartphone platform. The problem is not that Meta is ignoring the glasses. The problem is that Meta is about to squander a massive early lead, and the competition is closing in fast. 2026 is shaping up to be a blockbuster year for smart glasses. Samsung confirmed its AR glasses are launching this year. Google is releasing its first pair of smart glasses since 2013, an audio-only pair similar to the Ray-Ban Meta glasses. Apple is reportedly pursuing its own smart glasses and shelved plans for a cheaper Vision Pro to prioritize the project. Meta dominated VR because it was early, cheap, and had no real competition. In smart glasses, that window is closing fast, and the field is getting crowded with all kinds of names, from smaller players like Looktech and Xgimi’s Memomind to mid-sized brands like Xreal, to even larger ones like Google, TCL, and Xiaomi.

The Ray-Ban Meta glasses work because they are simple and focused. They take photos and videos, play music, make calls, and provide real-time answers through an AI assistant. Parents use them to record their kids hands-free. Travelers use them for translation. The form factor, actual Ray-Ban Wayfarers that cost around $300, means they do not scream “I am wearing a computer on my face.” This is the rare Meta hardware product that feels intuitive rather than forced, and it is selling because it solves boring, everyday problems without requiring users to change their behavior.

Then Meta made a critical mistake. To use the glasses, you have to route everything through the Meta AI app, which means you cannot just power-use the hardware without engaging with Meta’s AI-slop ecosystem. Want to access your photos? Meta AI. Want to tweak settings? Meta AI. The app is the mandatory gateway, and it is stuffed with the same kind of algorithmic recommendations and AI-generated suggestions that clutter Instagram and Facebook. Instead of letting the glasses be a clean, utilitarian tool, Meta is using them as another vector to push its AI products. Google and Samsung are not going to make that mistake. Their glasses will integrate with Android XR and existing ecosystems without forcing users into a single AI app. Apple, if and when it launches, will almost certainly take a similar approach: clean hardware, seamless OS integration, optional AI features. Meta had a head start, Ray-Ban branding, and a product people actually liked. It is on track to waste all of that by prioritizing AI evangelism over product discipline, and the competition is going to eat its lunch.

What Happens When You Chase Narratives Instead of Products

The pattern across metaverse and AI is that Meta keeps betting on big, abstract visions rather than iterating on the things that work. Zuckerberg is a narrative-driven founder. He wants to define the future, not respond to it. That impulse gave us Facebook in 2004, when no one else saw the potential of real-identity social networks, but it has led Meta astray repeatedly in the 2020s. The metaverse was a narrative, not a product. The idea that billions of people would strap on headsets to work and socialize in 3D was always more science fiction than product roadmap, but Zuckerberg committed so hard to it that he renamed the company.

AI feels like the same mistake. The narrative is that foundation models and “agents” will transform every part of computing, and Meta wants to be seen as a leader in that transformation. The actual products, chatbots in WhatsApp and AI-generated feed content, do not meaningfully improve the user experience and in many cases make it worse. Meanwhile, the thing that is working, smart glasses, does not fit cleanly into the AI or metaverse narrative, so it gets less attention and investment than it deserves. Meta’s 2026 strategy, “shifting investment from metaverse to wearables,” is a tacit admission of this, but it is couched in language that still emphasizes AI rather than the hardware itself.

The other pattern is that Meta is willing to kill its own successes if they do not fit the broader narrative. The hit VR fitness game on Meta’s Horizon, Supernatural, was working. It had subscribers, retention, and cultural momentum within the VR fitness community. It was also a relatively small, specific product rather than a platform play, and that made it expendable when Meta decided to scale back Reality Labs. The same logic applies to Quest more broadly. The headset had carved out a niche in gaming and fitness, and with sustained investment in content and ecosystem development, it could have grown into a meaningful adjacent business. Instead, Meta is deprioritizing it because Zuckerberg has decided the future is AI and lightweight wearables. That might turn out to be correct, but the way Meta is executing the pivot, by shuttering studios and putting products in maintenance mode rather than spinning them out or finding partners, suggests a lack of product discipline.

Why Smart Glasses Might Actually Be the Next Facebook

If you step back and ask what Meta is actually good at, the answer is not virtual reality or language models. Meta is good at building social products with massive scale, capturing and distributing content, and monetizing attention through ads. The Ray-Ban Meta glasses fit all of those strengths. They make it easier to capture photos and video, which feeds into Instagram and Facebook. They use AI to provide contextual information, which ties into Meta’s model development. And they are a physical product that people wear in public, which is a form of distribution and branding that Meta has never had before.

The bigger story is that smart glasses as a category are exploding, and Meta happened to be early. It is not just Samsung, Google, and Apple entering the space. Meta itself is expanding the Ray-Ban line with Displays (which adds a heads-up display) and partnering with Oakley on HSTN, a sportier model aimed at action sports. Google is teaming up with Warby Parker for its glasses, which gives it instant credibility in eyewear design. And then there are the startups: Even Realities, Xiaomi, Looktech, MemoMind, and dozens more, all slated for 2026 releases. This feels exactly like the moment AirPods sparked the true wireless earbud movement. Apple defined the format, then everyone from Samsung to Sony to no-name brands flooded the market, and now you can buy HMD ANC earbuds for 28 dollars. Smart glasses are following the same trajectory, which means the form factor itself is validated, and Meta’s early lead matters less than whether it can keep iterating faster than everyone else.

The other underrated piece is that having an instant camera on your face is genuinely useful in ways that VR headsets never were. People are using Ray-Ban Meta glasses as GoPro alternatives while skateboarding, cycling, and doing action sports, because POV capture without holding a phone or mounting a camera is frictionless. Content creators are using them to shoot hands-free B-roll at events like CES. Parents are using them to record their kids playing without the weird “I am holding my phone up at the playground” vibe. Pet owners are capturing spontaneous moments with dogs and cats that would be impossible to get with a phone. These are not sci-fi use cases or metaverse fantasies. They are boring, real-world problems that the glasses solve immediately, and that is why they are selling. Meta has spent a decade chasing grand visions of the future, and it accidentally built a product that people want right now. The challenge is whether it can resist the urge to over-complicate it before Google, Samsung, and Apple catch up.

The Real Lesson Is About Focus

Meta has spent the last five years oscillating between grand visions, metaverse and AI, and neglecting the products that actually work. The Ray-Ban Meta glasses are proof that when Meta focuses on solving real problems with tangible products, it can still build things people want. The metaverse failed because it was a solution in search of a problem, and the AI push is struggling because Meta is shipping features rather than products. Smart glasses, by contrast, are succeeding because they make everyday tasks easier without requiring users to change their behavior or buy into a futuristic narrative.

If Zuckerberg can internalize that lesson, Meta might actually have a shot at owning the next platform. But that requires a level of product discipline and restraint that Meta has not shown in years. It means resisting the urge to turn every product into a platform, admitting when a bet has failed rather than pouring another $10 billion into it, and focusing on iteration over narration. The irony is that Meta already has the right product. It just needs to stop looking past it.

The post Meta Misread the Future Twice. Now They’re Sitting on a Golden Egg, But Don’t Know It first appeared on Yanko Design.

These 95g AR Glasses Replace VR Headsets with a 300-Inch Screen

Portable entertainment has split into two unsatisfying extremes. AR glasses feel like oversized phone screens floating in front of your face, and VR headsets are immersive but too heavy, bulky, and isolating for everyday use. There is a desire for something that feels like a real cinema experience but can be used on a couch, in bed, on a plane, or in a café without suiting up or strapping a helmet to your face.

Xynavo is a pair of lightweight AR glasses built around comfortable immersion, private audio, and expandable functionality. It offers a 70-degree field of view and dual 4K micro-OLED displays, creating a virtual screen equivalent to more than 300 inches, yet weighs only 95g. The goal is to turn whatever you already own into a cinema-scale display you can wear, without the weight and noise of a full headset.

Designer: Xynavo

Click Here to Buy Now: $299 $499 ($200 off). Hurry, only a few units left! Raised over $199,200.

Xynavo fits into evenings at home, where couples can use a multi-device adapter to connect two pairs and share the same screen, playing on a Nintendo Switch or Steam Deck together or watching films and series side by side. Parents and children can share animated movies and family comedies, or connect a game console for interactive play, with private audio and a huge virtual screen.

On late nights or quiet weekends alone, you can put on Xynavo and relax on the couch or in bed, watching NBA, NFL, or UEFA Champions League games or diving into action movies and sci-fi series. The dual 4K clarity and private audio turn it into a theater experience made just for you, without needing to dedicate a room or disturb anyone else in the house.

On planes, high-speed trains, or in hotel rooms, you connect a laptop via USB-C or the included HDMI adapter, pair a wireless keyboard, and handle email or browsing. Then you switch seamlessly to movies or games, all while the glasses stay light enough to wear for full episodes or matches without headband fatigue. The 95g weight makes hours-long sessions feel manageable instead of exhausting.

Most AR glasses offer a narrow field of view that feels like a big phone, while Xynavo’s 70-degree FOV and dual 4K panels fill your vision with a cinema-scale scene. The high pixel density keeps text crisp and motion smooth, avoiding screen-door effects. A +2D to -6D diopter adjustment range lets many users dial in crystal-clear focus without wearing prescription glasses underneath, making the fit more comfortable.
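The diopter range is just inverse focal length: a lens of power D diopters focuses at 1/D meters, which is why the quoted +2D to -6D span covers both mild farsightedness and fairly strong nearsightedness. A quick sketch of the conversion (the range endpoints come from the spec above; everything else is illustration):

```python
def diopter_to_focal_length_m(power_d: float) -> float:
    """A lens of power D diopters has focal length 1/D meters.
    Negative powers (myopia correction) give diverging lenses."""
    if power_d == 0:
        raise ValueError("0 D means no correction (infinite focal length)")
    return 1.0 / power_d

# The quoted +2D to -6D range spans focal lengths from +0.5 m to about -0.167 m.
f_plus = diopter_to_focal_length_m(2.0)    # +0.5 m (converging)
f_minus = diopter_to_focal_length_m(-6.0)  # ~-0.167 m (diverging)
```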

Open-ear AR audio often leaks sound and struggles in noisy or very quiet spaces. Xynavo uses magnetic in-ear modules designed for noise isolation and zero sound leakage, keeping audio clear on trains and planes and private next to someone sleeping. That makes shared spaces and late-night use realistic, without headphones or disturbing people nearby.

Two built-in 3D split-screen modes, 3840×1080 and 1920×1080, let you watch a wider range of 3D content. A long press switches formats, while the dual 4K panels maintain depth and clarity across both modes. This flexibility means more 3D videos, apps, and playback sources work without workarounds or format hunting.
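The two resolutions correspond to full and half side-by-side 3D: each incoming frame carries the left-eye image in its left half and the right-eye image in its right half, and the glasses route each half to the matching display. A minimal sketch of the split (NumPy here is purely illustrative; the hardware does this in its display pipeline):

```python
import numpy as np

def split_sbs_frame(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a side-by-side (SBS) 3D frame into left/right eye views.

    A 3840x1080 full-SBS frame yields two 1920x1080 views; a 1920x1080
    half-SBS frame yields two 960x1080 views that get stretched back
    to full width before display.
    """
    half = frame.shape[1] // 2
    return frame[:, :half], frame[:, half:]

# Full-SBS input: 3840x1080 -> two 1920x1080 eye views.
frame = np.zeros((1080, 3840, 3), dtype=np.uint8)
left, right = split_sbs_frame(frame)
```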

Xynavo connects to smartphones, handheld consoles, tablets, laptops, gaming systems, and PCs via its Type-C cable and included HDMI adapter, working as a plug-and-play external display without special apps or pairing. It is designed as an expandable Type-C vision platform, with support planned for external modules like cameras, night vision, and thermal imaging. That hints at a future where the same lightweight frame can grow with whatever you want to see next.

Click Here to Buy Now: $299 $499 ($200 off). Hurry, only a few units left! Raised over $199,200.

The post These 95g AR Glasses Replace VR Headsets with a 300-Inch Screen first appeared on Yanko Design.

This AR Ski Helmet Finally Lets Rescuers Control Tech By Eye

Imagine being a ski patrol responder racing toward an injured skier on a freezing mountain. Your hands are gripping poles, your attention is split between the terrain and the emergency ahead, and your radio crackles with critical information. Now imagine if you could access maps, communicate with your team, and log vital data without ever touching a device. That’s exactly what the Argus AR Helmet promises to deliver.

Designed by Hyeokwoo Kwon and Junho Park, Argus is a concept that reimagines what rescue technology can look like when you strip away everything unnecessary and focus on the moment that matters most. This isn’t just another gadget trying to cram features into a helmet. It’s a thoughtful response to a real problem: how do first responders stay connected and informed when their hands are literally full and seconds count?

Designers: Hyeokwoo Kwon and Junho Park

The helmet’s standout feature is its eye-tracking interface. Instead of fumbling with buttons or voice commands that get lost in howling wind, users control the AR display simply by looking at what they need. Want to view a map overlay of the ski area? Glance at the navigation icon. Need to send a message to base? Your eyes do the work. The system is built around the idea that in high-stress, time-critical situations, the fewer steps between thought and action, the better.
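Gaze-driven interfaces like this typically rely on dwell-time selection: a control fires only after the eyes have rested on it for a set fraction of a second, so stray glances across the display don't trigger anything. A toy sketch of that logic (the timings and structure are assumptions for illustration, not Argus specifications):

```python
class DwellSelector:
    """Fire a selection once gaze has rested on a target for `dwell_s` seconds."""

    def __init__(self, dwell_s: float = 0.6):
        self.dwell_s = dwell_s
        self._elapsed = 0.0

    def update(self, on_target: bool, dt: float) -> bool:
        # Accumulate dwell time while gaze holds; any glance away resets it.
        self._elapsed = self._elapsed + dt if on_target else 0.0
        return self._elapsed >= self.dwell_s

# 60 Hz gaze samples: roughly half a second of steady gaze triggers the control.
sel = DwellSelector(dwell_s=0.5)
fired = [sel.update(on_target=True, dt=1 / 60) for _ in range(31)]
```

Tuning the dwell threshold is the core design tradeoff: too short and the interface misfires on every glance, too long and it feels sluggish in exactly the time-critical moments the helmet is built for.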

What makes this particularly clever is how it handles communication in one of the noisiest work environments imaginable. Mountains are loud. Wind, equipment, helicopters, and panicked voices create a constant wall of sound that makes radio communication frustrating at best and dangerous at worst. Argus addresses this with real-time conversation-to-text conversion. Spoken words are automatically transcribed and displayed on the visor, ensuring that critical information doesn’t get lost or misunderstood. In an emergency where “stop the area” versus “stop near the area” could mean completely different courses of action, that clarity is potentially lifesaving.

The design itself strikes a balance between futuristic and functional. The white shell with bold red accents and Swiss cross branding gives it a clean, authoritative look that fits naturally into the visual language of emergency services. The transparent visor integrates the AR display without creating the bulky, intrusive appearance that often plagues wearable tech. There’s a modularity to the system too, with a detachable power pack that ensures the helmet remains comfortable for long shifts while providing enough battery life to last through demanding rescue operations.

From a practical standpoint, Argus is designed to support ski patrol operations across experience levels. A rookie responder gets the same information overlay and guidance as a veteran, creating a more consistent standard of care. Route optimization, hazard warnings, victim location data, and communication logs all live within the user’s field of vision, accessible without breaking focus from the task at hand.

But beyond the specific use case of ski patrol, Argus represents something larger about where wearable technology is headed. We’re moving past the era of tech that demands our attention and toward interfaces that disappear into the background until we need them. Eye-tracking isn’t new, but applying it to life-or-death situations where gloves, weather, and adrenaline make traditional controls impractical shows how design thinking can solve problems that raw computing power can’t.

There’s also something refreshing about seeing concept design tackle unglamorous but essential work. We’re used to seeing AR prototypes aimed at gaming, shopping, or entertainment. Those have their place, but projects like Argus remind us that the most meaningful applications of emerging technology often happen in fields where people are doing difficult, dangerous work that most of us never see.

Will we see Argus helmets on mountains anytime soon? As a concept, it still needs to navigate the long road from design portfolio to production reality, including challenges around durability, battery life in extreme cold, and integration with existing rescue protocols. But as a vision of what’s possible when designers deeply understand the context they’re designing for, it’s compelling. It shows that the future of wearable tech might not be about adding more features, but about making the right information available at exactly the right moment, controlled by something as simple and intuitive as where you look.

The post This AR Ski Helmet Finally Lets Rescuers Control Tech By Eye first appeared on Yanko Design.

Leion Hey2 Brings First AR Glasses Built for Translation to CES 2026

Cross-language conversations create a familiar kind of friction. You hold a phone over menus, miss half a sentence while an app catches up, or watch a partner speak fast in a meeting while your translation lags behind. Even people who travel or work globally still juggle apps, hand-held translators, and guesswork just to keep up with what is being said in the room, which pulls attention away from the actual conversation.

Leion Hey2 is translation that lives where your eyes already are, in a pair of glasses that quietly turns speech into subtitles without asking you to look down or pass a device back and forth. The glasses were built for translation first, not as an afterthought on top of entertainment or social features, and they are meant to last through full days of meetings or classes instead of dying halfway through, when you need them most.

Designer: LLVision

Click here to know more.

Glasses That Care About Conversation, Not Spectacle

Leion Hey2 is a pair of professional AR translation glasses from LLVision, a company that has spent more than a decade deploying AR and AI in industrial and public-sector settings. Hey2 is not trying to be an all-in-one headset; it is engineered from the ground up for real-time translation and captioning, supporting more than 100 languages and dialects with bidirectional translation and latency under 500 ms in typical conditions, plus 6–8 hours of continuous translation on a single charge.

Hey2 is designed to wear like everyday eyewear rather than a gadget. The classic browline frame, 49g weight, magnesium-lithium alloy structure, and adjustable titanium nose pads are all chosen to make it feel like a normal pair of glasses you forget you are wearing. A stepless spring hinge adapts to different faces, and the camera-free, microphone-only design follows GDPR-aligned privacy principles, supported by a secure cloud infrastructure built on Microsoft Azure, helping keep both wearers and bystanders comfortable in sensitive environments.

Subtitles in Your Line of Sight

Hey2 uses waveguide optics and a micro-LED engine to project crisp, green subtitles into both eyes, with a 25-degree field of view and more than 90% passthrough so the real world stays bright. The optical engine is tuned to reduce rainbow artifacts by up to 98%, keeping text stable and readable in different lighting conditions, while three levels of subtitle size and position let you decide how prominently captions sit in your forward field of view.

The audio side relies on a four-microphone array that performs 360-degree spatial detection to identify who is speaking, while face-to-face directional pickup prioritizes the person within roughly a 60-degree cone in front of you. A neural noise-reduction algorithm uses beamforming and multi-channel processing to isolate the main voice, which helps in noisy restaurants, busy trade-show floors, or classrooms where questions come from different directions, without forcing you to constantly adjust settings.
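Directional pickup of this kind usually starts from delay-and-sum beamforming: each microphone's signal is shifted by the travel-time difference from the target direction, then the aligned signals are averaged, so the target voice adds coherently while off-axis noise partially cancels. A minimal sketch with integer sample delays (real arrays interpolate fractional delays and add the neural processing the article describes; none of this is LLVision's actual implementation):

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Align each mic signal by its integer sample delay, then average.

    signals: one 1-D array per microphone
    delays:  samples by which each mic heard the source later than the earliest mic
    """
    n = min(len(s) - d for s, d in zip(signals, delays))
    aligned = np.stack(
        [np.asarray(s, dtype=float)[d : d + n] for s, d in zip(signals, delays)]
    )
    return aligned.mean(axis=0)

# A voice reaches mic B three samples after mic A; aligning recovers the source.
src = np.sin(np.linspace(0, 4 * np.pi, 64))
mic_a = src
mic_b = np.concatenate([np.zeros(3), src])
out = delay_and_sum([mic_a, mic_b], delays=[0, 3])
```

Pointing the beam means choosing the delays: scanning delay sets across directions is how an array can both localize a speaker in 360 degrees and then lock onto the roughly 60-degree cone in front of the wearer.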

Modes That Support Work, Learning, and Accessibility

In translation and Free Talk modes, foreign speech is converted into your language as subtitles in your line of sight, so you can mix languages freely and still follow long-form speech without constantly checking a screen. In Free Talk, Hey2 provides subtitles for what you hear and spoken translation for what you say, turning a two-language conversation into something that feels more like a normal chat than a tech demo. The charging case extends total use to 96 hours across 12 recharges.

Teleprompter mode scrolls your script in your line of sight and advances it automatically as you speak, useful for lectures, pitches, or keynotes where you want to keep eye contact without glancing at notes. AI Q&A, triggered by a tap on the temple, draws on ChatGPT-powered answers for discreet look-ups, while Captions mode turns fast speech into clean text, helping students, professionals, and Deaf or hard-of-hearing users stay on top of what is being said, even in noisy environments where handheld devices struggle.

A Different Kind of AR Story

When Leion Hey2 steps onto the CES 2026 stage, it represents a quieter kind of AR story. Instead of chasing spectacle, it narrows the brief to something very human, helping people speak, listen, and be understood across languages and hearing abilities. For a show that often celebrates what technology can do, Hey2 is a reminder that sometimes the most interesting innovation is the one that simply lets you keep your head up and stay in the conversation.

Click here to know more.

The post Leion Hey2 Brings First AR Glasses Built for Translation to CES 2026 first appeared on Yanko Design.

Razer’s Project AVA Brings Holographic AI Companions to Your Desk

Remember watching sci-fi movies as a kid and dreaming about the day you’d have your own holographic assistant? Well, that future just arrived, and it’s cuter than we ever imagined. Razer unveiled Project AVA at CES 2026, and honestly, it’s giving us all the futuristic vibes we didn’t know we needed.

Picture this: a sleek cylindrical device sitting on your desk, projecting a 5.5-inch animated 3D hologram that actually talks to you, learns your habits, and becomes your daily companion. It sounds like something straight out of a Black Mirror episode, but in the best possible way.

Designer: Razer

What makes Project AVA so fascinating isn’t just the holographic technology itself (though let’s be real, that’s pretty spectacular). It’s how Razer has reimagined what AI companionship could look like in our physical spaces. Unlike Siri hiding in your phone or Alexa trapped in a speaker, AVA exists as a visible presence on your desk. She has facial expressions, tracks eye movement, and her lips actually sync when she talks. It’s the kind of detail that transforms a gadget into something that feels surprisingly alive.

The personality customization is where things get really interesting. You can choose from different avatars, each with their own distinct personality. There’s Kira, an anime-style character perfect for gaming enthusiasts. There’s Zane for those wanting a more professional vibe. And then, in what might be the most genius collaboration ever, there’s an avatar modeled after League of Legends legend Lee “Faker” Sang-hyeok, plus characters from Sword Art Online. Razer clearly understands its audience, and they’re leaning hard into gaming and anime culture in the best way possible.

But here’s what really sets AVA apart: she’s powered by xAI’s Grok engine, which gives her some seriously sophisticated AI capabilities. This isn’t just a voice assistant that sets timers and plays music. AVA learns from your interactions and evolves her personality based on how you communicate with her. She can help organize your schedule, brainstorm creative projects, analyze data, and even provide real-time gaming coaching by actually watching your screen and offering strategic advice.

The gaming features deserve special attention because they’re genuinely innovative. Through what Razer calls “PC Vision Mode,” AVA can analyze your gameplay in real-time and offer coaching tips. Before you worry, Razer has been clear that AVA is designed as a coach and trainer, not an automated playing tool, so she won’t get you banned from competitive games. She’s more like having a knowledgeable friend watching over your shoulder, offering helpful suggestions.

From a design perspective, the cylindrical unit houses impressive tech: dual far-field microphones, an HD camera with ambient light sensors, and of course, Razer’s signature Chroma RGB lighting because aesthetics matter. The device connects to your Windows PC via USB-C, ensuring the high-bandwidth data transfer needed for those real-time features to work smoothly.

What’s particularly clever about Project AVA is how it addresses something we’ve all experienced with traditional AI assistants: the disconnect. When you’re talking to a voice in a speaker, it feels transactional. But when there’s a holographic character making eye contact and responding with facial expressions, the interaction becomes more engaging and, dare I say, more human.

Razer is calling AVA a “Friend for Life,” which might sound like marketing hyperbole, but it hints at something bigger happening in tech culture. We’re moving beyond thinking about AI as tools and starting to explore how they might serve as companions in our daily lives. It’s a fascinating cultural shift that raises interesting questions about how we’ll interact with technology in the coming years.

For anyone interested in being part of this next wave of AI innovation, reservations are open now for a $20 deposit, with the device expected to launch in late 2026. Whether you’re a tech enthusiast, a collector of innovative gadgets, or just someone who’s always wanted their own holographic companion, Project AVA represents something genuinely new in the consumer tech space.

The post Razer’s Project AVA Brings Holographic AI Companions to Your Desk first appeared on Yanko Design.

Best Tech Gadgets of 2025: 10 Innovations You Need to See

Technology moves fast, but 2025 feels like a distinct era. This year brought gadgets that challenged convention rather than followed it. From keyboards that fold into phone cases to power banks that communicate through light, these innovations prove that great design starts with questioning what we’ve accepted as normal. The products ahead represent a shift in thinking about portability, interaction, and what our devices should actually do for us.

What makes these ten gadgets stand out isn’t just their novelty. Each one addresses a real frustration with current tech, offering solutions that feel both refreshingly simple and genuinely innovative. Whether you’re tired of touchscreen typing, craving better smartwatch docks, or looking for portable computing power, these designs rethink familiar categories from the ground up. They remind us that the future of technology lies in thoughtful problem-solving, rather than merely adding more features.

1. Plumage: The Keyboard-Case Hybrid That Actually Makes Sense

Typing on touchscreens has never felt right, and bolt-on keyboard solutions create phones that resemble small tablets. The Concept Plumage solves both problems by integrating a physical keyboard directly into a phone case without extending the device’s footprint. Originally designed by Jet Weng in 2013, this concept flips open like peeling a banana to reveal a Blackberry-style layout with a screen on top and tactile keys below. The phone stays compact when closed and transforms for serious typing when open.

What makes this design brilliant is its acknowledgment that screens don’t need to cover every inch of our phones. The half-screen approach feels counterintuitive until you realize most typing happens in apps where the keyboard covers half the display anyway. Flip it open for confident typing during emails or messaging, navigate with the touch-sensitive upper screen, then flip it shut for pocket-friendly portability. This concept deserves resurrection because it prioritizes how people actually use their phones over chasing edge-to-edge displays.

What we like

  • The keyboard integrates without adding bulk to the phone’s footprint
  • Physical keys enable fast, accurate typing without sacrificing screen real estate when closed

What we dislike

  • The half-screen design requires adjusting expectations about display size
  • The flip mechanism could introduce durability concerns with repeated daily use

2. MSI Gaming PC Watch: When Wearables Go Full Desktop

Smartwatches pretend to be tiny phones strapped to your wrist, but the MSI Gaming PC Watch takes a radically different approach. This concept treats your wrist as a platform for an actual computer, complete with visible fans, graphics components, cooling systems, and motherboard elements right through the watch face. The design features subtle analog watch hand annotations and four side pushers for navigation. The metal alloy case proudly displays the MSI logo at 3 o’clock, where a traditional crown would sit.

This wearable computer represents a philosophical departure from smartphone-on-your-wrist thinking. By embracing computer periphery ideology rather than mimicking phone interfaces, the Gaming PC Watch suggests an alternative path for wearable innovation. The transparent components aren’t just aesthetic flourishes; they telegraph the device’s identity as genuine computing hardware miniaturized for portability. Whether checking system performance, monitoring temperatures, or simply appreciating the engineering, this watch makes technology itself the main attraction rather than hiding it behind glossy screens.

What we like

  • The transparent design showcases actual computing components with visual appeal
  • It reimagines the smartwatch’s purpose beyond smartphone replication

What we dislike

  • The gaming aesthetic may not suit professional or formal settings
  • Visible internal components could raise questions about durability and water resistance

3. Nothing Power 1: The Battery Bank That Speaks Through Light

Power banks typically hide their technology behind opaque shells, but the Nothing Power 1 concept revives the glyph interface that made the Nothing Phone famous. This 20,000 mAh battery bank features transparent layers with bold light paths that transform illumination into precise information. Every light on the back panel serves a purpose, indicating battery levels, charging status, and even smartphone notifications when connected. The design language echoes the circuit pathways and physical logic of Nothing’s original phone, maintaining the brand’s commitment to meaningful transparency.

Fast charging at 65W means reaching 50% capacity in under 20 minutes, while the substantial battery capacity delivers at least three phone charges before needing a refill. The glyph interface goes beyond simple battery indication by connecting with your smartphone to display alerts and charging progress through purposeful light patterns. This approach makes waiting for your phone to charge more informative and visually engaging. The design proves that power banks don’t need to be boring rectangular slabs; they can communicate status elegantly while celebrating the technology inside.
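Mapping charge onto a strip of lights is simple quantization: divide the percentage across the available segments and round so the display never understates a nonzero charge. A sketch of how a glyph-style indicator might work (segment count and rounding policy are assumptions for illustration, not Nothing's spec):

```python
import math

def glyph_segments(percent: float, segments: int = 10) -> str:
    """Render battery level as lit ('#') and dark ('.') segments.

    Rounds up so any nonzero charge lights at least one segment.
    """
    percent = max(0.0, min(100.0, percent))
    lit = math.ceil(percent / 100 * segments)
    return "#" * lit + "." * (segments - lit)
```

The round-up choice matters at the edges: a bank showing zero lit segments while it still holds a sliver of charge would read as dead, so the last segment stays lit until the battery truly empties.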

What we like

  • The glyph interface turns light into precise, purposeful information
  • The 20,000 mAh capacity with 65W fast charging delivers both power and speed

What we dislike

  • The transparent design may show dirt and fingerprints more readily
  • The unique aesthetic might not appeal to users who prefer minimal, discreet accessories

4. Oakley Aether: The AR Glasses Google Should Have Built

Google once led the smart headset space before abandoning it for one-off experiments, but the Oakley Aether concept imagines an alternate timeline where Google remained committed. Modeled after ski goggles, these performance-driven glasses enclose your eyes in a protective bubble with 100% visibility enhanced by Android AR and Gemini AI integration. The design suggests what happens when you combine Oakley’s athletic expertise with Google’s software prowess, creating headsets that reimagine movement, insight, and precision through immersive technology.

The goggle format provides advantages traditional glasses can’t match: full environmental protection, expanded display real estate, and room for cameras, LiDAR, and other sensors essential for convincing AR. Pop them on and view the world through a heads-up display showing contextual information, notifications, and activity recordings for later analysis. Gemini AI integration enables natural conversation with your headset, creating interactions reminiscent of talking to JARVIS in Iron Man. This concept proves that AR glasses don’t need to look like traditional eyewear; embracing the goggle format opens new possibilities for capability and comfort.

What we like

  • The goggle format allows superior sensor integration and display real estate
  • Gemini AI enables natural voice interaction for hands-free control

What we dislike

  • The ski goggle aesthetic may feel too sporty for everyday urban use
  • The enclosed design could cause comfort issues during extended wear

5. TWS ChatGPT Earbuds: AI That Sees What You See

Most wireless earbuds focus exclusively on audio, but this concept adds cameras to each stem, positioned near your natural sight line. Paired with ChatGPT, those lenses become a constant visual feed for an AI assistant living in your ears. The system can read menus, interpret signs, describe scenes, and guide you through unfamiliar cities without requiring you to hold up your phone. The form factor stays familiar while the capabilities feel genuinely new, making AI feel less like a demo and more like a daily habit.

The industrial design resembles a sci-fi inhaler in the best possible way. Each lens sits at the stem’s end like a tiny action camera, surrounded by a ring that doubles as a visual accent. The colored shells and translucent tips keep the aesthetic playful enough to read as audio gear first, camera second. This positioning matters because cameras in your ears feel less invasive than cameras on your face. You maintain eye contact during conversations, avoid the social stigma of face-mounted recording devices, and gain AI vision capabilities that activate only when needed.

What we like

  • The ear-mounted cameras feel less socially awkward than face-mounted alternatives
  • ChatGPT integration provides practical AI assistance for navigation and information

What we dislike

  • Privacy concerns may arise from cameras pointed at people during conversations
  • Battery life could suffer from powering both audio and visual processing

6. Gboard Dial: When Keyboard Design Gets Delightfully Absurd

Google Japan’s annual keyboard concepts embrace playful absurdity, and the Gboard Dial Version spins this tradition in a new direction. Released on October 1st to honor the classic 101-key layout, this 14th entry features a wonderfully over-engineered dial mechanism where users insert fingers into positioned keyholes and rotate to select characters. The three-layer dial structure supposedly delivers three times faster input with parallel operation capability. The nostalgic grinding sound becomes a feature rather than a bug, promoting what the team calls a calmer thinking and input experience.

This satirical concept follows memorable predecessors like the Gboard Teacup, Stick, Hat, and Double-Sided keyboards. While obviously impractical for actual productivity, the Dial Version raises interesting questions about input methods and the assumptions we make about efficiency. The deliberate slowness forces more thoughtful composition, and the physical interaction provides tactile satisfaction missing from touchscreens and flat keyboards. Sometimes the best tech concepts aren’t meant for production; they’re meant to make us reconsider what we’ve accepted as optimal.

What we like

  • The playful design challenges assumptions about keyboard efficiency and input methods
  • The tactile interaction provides satisfying physical feedback

What we dislike

  • The intentionally slow input method makes it impractical for actual work
  • The three-layer dial mechanism would likely be fragile with regular use

7. NightWatch: The Apple Watch Dock That Does Everything Right

Charging docks for smartwatches typically amount to simple stands with integrated power, but the NightWatch transforms your Apple Watch into a proper bedside alarm clock through clever design. This solid lucite orb magnifies your watch screen, making the time clearly legible from several feet away. Strategic channels under the speaker units amplify sound naturally, similar to cupping your hands around your mouth, ensuring your alarm actually wakes you. The entire transparent sphere is touch-sensitive, allowing a simple tap to wake the watch display.

The brilliance lies in its simplicity. There are no hidden components, no electronic trickery, just thoughtful application of physics and material properties. The lucite magnification works optically, the sound amplification happens through shaped channels, and the touch sensitivity uses the material’s properties. Your Apple Watch docks inside, charges overnight, and becomes infinitely more useful as a bedside timepiece. The transparent design lets you appreciate the watch itself, while the orb form creates an appealing sculptural presence on your nightstand.
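There is real optics behind the orb: a transparent sphere acts as a ball lens with an effective focal length of f = nD / (4(n − 1)), measured from its center. A quick sketch with an assumed refractive index and diameter (neither figure comes from the NightWatch's published details; acrylic's n ≈ 1.49 and an 80 mm orb are illustrative):

```python
def ball_lens_focal_length_mm(diameter_mm: float, n: float = 1.49) -> float:
    """Effective focal length of a ball lens: f = n*D / (4*(n - 1)).

    n = 1.49 is a typical refractive index for acrylic/lucite.
    """
    return n * diameter_mm / (4 * (n - 1))

# An assumed 80 mm lucite orb focuses at roughly 61 mm from its center,
# so a watch face docked inside sits near the focal region and appears
# magnified to a viewer across the room.
f = ball_lens_focal_length_mm(80)
```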

What we like

  • The optical magnification makes the time readable from across the room
  • Natural sound amplification ensures alarms are actually audible

What we dislike

  • The large orb form takes up significant nightstand space
  • The design works exclusively with the Apple Watch, limiting its audience

8. Pironman 5-MAX: Turning Raspberry Pi Into a Desktop Powerhouse

The naked Raspberry Pi 5 board looks humble, but the Pironman 5-MAX case transforms it into a legitimate desktop computer packed with serious capabilities. This miniature rig features dual NVMe SSD slots for lightning-fast storage, support for AI accelerators like the Hailo-8L for machine learning workloads, and clever design features that maximize the Pi’s potential. The compact desktop form factor punches well above its weight, proving that mini machines can handle tasks once reserved for full-sized computers.

What makes this case special is how it treats the Raspberry Pi with the seriousness of proper desktop hardware. The dual NVMe support brings storage speeds and capacity that enable media servers, project development, and even AI experimentation within this tiny chassis. Adding AI acceleration capabilities means your Pi 5 can tackle machine learning tasks, opening possibilities that seemed absurd for single-board computers just years ago. This case doesn’t just protect your Pi; it unlocks its full potential as a capable, expandable desktop machine.

What we like

  • Dual NVMe SSD slots deliver professional-grade storage speed and capacity
  • Support for AI accelerators enables machine learning on a compact platform

What we dislike

  • The added hardware increases the overall cost beyond the base Pi 5 investment
  • The compact form factor may limit cooling efficiency under sustained heavy loads

9. Vetra Orbit One: Minimalism Meets Tactile Smart Technology

The Vetra Orbit One concept smartwatch steps away from attention-grabbing screens toward satisfying physical interaction blended with forward-thinking features. Imagine a rotating bezel providing nuanced control, textured surfaces offering rich sensory feedback, and design elements evoking classic timepiece pleasure. This approach integrates the satisfying feel of traditional watchmaking into modern smart technology without simply replicating the past. The minimalist aesthetics reject overwhelming visual noise in favor of clean lines, subtle details, and essential information presentation.

This philosophy prioritizes clarity and elegance, ensuring the watch functions as a sophisticated accessory rather than a distracting wrist billboard. The tactile nostalgia isn’t about rejecting progress; it’s about preserving what made traditional watches satisfying to wear and use. The concept combines physical interaction satisfaction with smart capabilities, creating a device that feels good to touch and operate. When every smartwatch chases more screen space and brighter displays, the Orbit One suggests that sometimes less really is more.

What we like

  • The tactile interface provides satisfying physical interaction missing from touchscreen-only devices
  • Minimalist aesthetics create an elegant, unobtrusive accessory

What we dislike

  • Limited screen space may restrict app functionality compared to larger smartwatches
  • The focus on physical controls could slow certain interactions requiring screen input

10. OrigamiSwift: The Folding Mouse That Fits Anywhere

Most portable mice compromise on either size or comfort, but OrigamiSwift solves this dilemma through an origami-inspired folding design. This Bluetooth mouse delivers full-sized comfort and precision when deployed, then folds completely flat to slip into any bag or pocket. The transformation happens in under 0.5 seconds with a simple flip, instantly activating the device for use. At just 40 grams with an ultra-thin profile, it’s barely noticeable until you need it, making it ideal for digital nomads, frequent travelers, and anyone who works from multiple locations.

The triangular origami structure provides surprising durability despite its folding nature, maintaining shape through repeated daily use. Soft-click buttons and smooth gliding work across various surfaces for responsive, discreet operation. The USB-C rechargeable battery lasts up to three months per charge, eliminating disposable battery waste. Designed by Horace Lam, OrigamiSwift reflects the harmony between artistry and practicality, where intricate folds echo timeless elegance while sleek lines embody modern minimalism. This mouse becomes more than a tool; it’s a statement about refined portable tech.

Click Here to Buy Now: $79.00

What we like

  • The folding design offers full-sized comfort that collapses to pocket-portable dimensions
  • Three-month battery life provides long-term reliability between charges

What we dislike

  • The folding mechanism introduces potential durability concerns with intensive daily use
  • The origami-inspired form may not suit users who prefer traditional mouse shapes

The Future Feels Different This Year

These ten innovations share a common thread beyond their 2025 release timing. Each one questions assumptions we’ve made about how technology should look, feel, and function. They prove that innovation doesn’t always mean adding more features or making screens larger. Sometimes the most exciting advances come from designers willing to completely rethink categories we thought were settled.

What excites me most about these gadgets is their willingness to be different. They embrace tactile feedback when everyone else chases touchscreens, add cameras to earbuds while others focus solely on audio, and turn power banks into communication devices through light. These products suggest that the next decade of technology will be defined less by raw specifications and more by thoughtful design that genuinely improves daily experience. That’s a future worth getting excited about.

The post Best Tech Gadgets of 2025: 10 Innovations You Need to See first appeared on Yanko Design.