Mixed reality headset can teach kids about safety training

I don’t remember any safety education classes from my elementary or even my high school days. If there were any, we probably learned through videos or posters, and maybe that’s why I forgot we even had them. Kids these days are lucky, as there are different kinds of technology to help them learn all kinds of safety lessons through virtual reality and mixed reality.

Designers: Minjeong Kim, Yunseo Jong, Ju Hwan Lee, Mingyeong Choi, Yujin Jeong, Minji Sung, and Chaeeun Lee


These designers came up with a concept for a mixed reality device to help students learn about disaster safety through a virtual and hands-on approach. Edi is an MR device that looks like your typical VR headset but has a softer look so kids will be comfortable using it when learning. It has lighting and speakers at the bottom to give the user a complete experience, and there is even a vent to dissipate the heat the device generates. There is a light at the top to check the battery level, and the strap uses flexible fabric so the wearer will feel comfortable, especially if it’s used for a longer period of time. There is also a dial to adjust the fit of the MR headset.

There is also a display that can show the user’s eyes if the teacher needs to communicate with the students directly or if they need to see other users as well. The UI is pretty simple since this is primarily for kids to use (although kids these days are much more tech-savvy than most grown-ups). They can choose their profile characters and enter their name and age for personalization. There are different safety training manuals, and they include missions, quizzes, and the actual practical course. There also seems to be a gamification function, which kids should enjoy more than just taking straightforward lessons and tests.


Up to five people can participate in the training session since it’s better to learn in a group. The Edi MR headset comes in three colorways: white, blue, and green. This seems to be a pretty interesting concept, although it needs more features to distinguish it from the usual VR or MR headsets.


The post Mixed reality headset can teach kids about safety training first appeared on Yanko Design.

5 Ways Spatial Computing Will Succeed and 5 Ways It Will Flop

As if we haven’t had our fill of buzz-worthy terms like “eXtended Reality” or the “Metaverse,” Apple came out with a new product that pushed yet another old concept into the spotlight. Although the theory behind spatial computing has been around for almost two decades now, it’s one of those technologies that needed a more solid implementation from a well-known brand to actually hit mainstream consciousness. While VR and AR have the likes of Meta pushing the technologies forward, Apple is banking more heavily on mixed reality, particularly spatial computing, as the next wave of computing. We’ve already explained what Spatial Computing is and even took a stab at comparing the Meta Quest Pro with the new kid on the block, the Apple Vision Pro. And while it does seem that Spatial Computing has a lot of potential in finally moving the needle forward in terms of personal computing, there are still some not-so-minor details that need to be ironed out before Apple can claim complete victory.

Designer: Apple

Spatial Computing is the Future

As a special application of mixed reality, Spatial Computing blurs the boundaries between the physical world and the applications that we use for work and play. But rather than just having virtual windows floating in mid-air the way VR and AR experiences do it, Apple’s special blend of spatial computing lets the real world directly affect the way these programs behave. It definitely sounds futuristic enough, but it’s a future that is more than just fantasy and is actually well-grounded in reality. Here are five reasons Spatial Computing, especially Apple’s visionOS, is set to become the next big thing in computing.

Best of Both Realities

Spatial computing combines the best of VR and AR into a seamless experience that will make you feel as if the world is truly your computer. It doesn’t have the limitations of VR and lets you still see the world around you through your own eyes rather than through a camera. At the same time, it still allows you to experience a more encapsulated view of the virtual world by effectively dimming and darkening everything except your active application. It’s almost like having self-tinting glasses, except it only affects specific areas rather than your whole view.

More importantly, spatial computing doesn’t just hang around your vision the way AR stickers would. Ambient lighting affects the accents on windows, while physical objects can change the way audio sounds to your ears. Virtual objects cast shadows as if they’re physically there, even though you’re the only one that can see them. Given this interaction between physical and virtual realms, it’s possible to have more nuanced controls and devices in the future that will further blur the boundaries and make using these spatial apps feel more natural.

Clear Focus

The term “metaverse” has been thrown around a lot in recent years, in no small part thanks to the former Facebook company’s marketing, but few people can actually give a solid definition of the term, at least one that most people will be able to understand. To some extent, this metaverse is the highest point of virtual reality technologies, a digital world where physical objects can have some influence and effect. Unfortunately, the metaverse is also too wild and too amorphous, and everyone has their own idea or interpretation of what it can or should be.

In contrast, spatial computing has a narrower and more focused scope, one that adds a literal third dimension to computing. Apple’s implementation, in particular, is more interested in taking personal computing to the next level by freeing digital experiences from the confines of flat screens. Unlike the metaverse, which almost feels like the Wild West of eXtended reality (XR) these days, spatial computing is more content on doing one thing: turning the world into your desktop.

Relatable Uses

As a consequence of its clearer focus, spatial computing has more well-defined use cases for these futuristic-sounding features. Apple’s demo may have some people remembering scenes from the Minority Report film, but the applications used are more mundane and more familiar. There are no mysterious and expensive NFTs or fantastic walks around Mars, though the latter is definitely possible. Instead, you’re greeted by familiar software and experiences from macOS and iOS, along with the photos, files, and data that you hold dear every day.

It’s easy enough to take this kind of familiarity for granted, but it’s a factor that sells better over a longer period of time. When the novelty of VR and the metaverse wears off, people are left wondering what place these technologies will have in their lives. Sure, there will always be room for games and virtual experiences that would be impossible in the physical world, but we don’t live in those virtual worlds most of the time. Spatial computing, on the other hand, will almost always have a use for you, whether it’s entertainment or productivity, because it brings all-too-familiar personal computing into the three-dimensional physical world.

Situational Awareness

One of the biggest problems with virtual reality headsets is that they can’t really be used except in enclosed or safe spaces, often in private or at least with a group of trusted people. Even with newer “passthrough” technologies, the default mode of devices like the Meta Quest Pro is to put you inside a 360-degree virtual world. On the one hand, that allows for digital experiences that would be impossible to integrate into the real world without looking like mere AR stickers. On the other hand, it also means you’re shutting out other people and even the whole world once you put on the headset.

The Apple Vision Pro has a few tricks that ironically make it more social even without the company mentioning a single social network during its presentation. You can see your environment, which means you’ll be able to see not only people but even the keyboard and mouse that you need to type an email or a novel. More importantly, however, other people will also be able to see your “eyes” or at least a digital twin of them. Some might consider it gimmicky, but it shows how much care Apple gives to those subtle nuances that make human communication feel more natural.

Simpler Interactions

The holy grail of VR and AR is to be able to manipulate digital artifacts with nothing but your hands. Unfortunately, current implementations have been stuck in the world of game controllers, using variants of joysticks to move things in the virtual world. They’re just a step away from using keyboards and mice, which creates a jarring disjunct between the virtual objects that almost look real in front of our eyes and the artificial way we interact with them.

Apple’s spatial computing device simply uses hand gestures and eye tracking to do the same, practically taking the place of a touchscreen and a pointer. Although we don’t actually swipe to pan or pinch to zoom real-world objects, some of these gestures have become almost second nature thanks to the popularity of smartphones and tablets. It might take a bit of getting used to, but we are more familiar with the direct movements of our hands than with memorizing buttons and triggers on a controller. It simplifies the vocabulary considerably, which places less burden on our minds and helps reduce anxiety when using something shiny and new.

Spatial Computing is Too Far into the Future

Apple definitely turned heads during its Vision Pro presentation and has caused many people to check their bank accounts and reconsider their planned expenses for the years ahead. As expected of the iPhone maker, it presented its spatial computing platform as the next best thing since the invention of the wheel. But while it may indeed finally usher in the next age of personal computing, it might still be just the beginning of a very long journey. As they say, the devil is in the details, and these five are those details that could see spatial computing and the Apple Vision Pro take a back seat for at least a few more years.

Missing Haptics

We have five (physical) senses, but most of our technologies are centered primarily around visual experiences, with audio coming in a distant second. The sense of touch is often taken for granted, as if we were disembodied eyes and ears that use telekinesis to control these devices. Futuristic designs that rely on “air gestures” make almost that same assumption, disregarding the human need to touch and feel, even if it’s just a physical controller. Even touch screens, which have very little tactile feedback, are something physical that our fingers can touch, providing that necessary connection that our brains need between what we see and what we’re trying to control.

Our human brains could probably evolve to make the need for haptic feedback less important, but that’s not going to happen in time to make the Apple Vision Pro a household item. It took years for us to even get used to the absence of physical keys on our phones, so it might take even longer for us to stop looking for that physical connection with our computing devices.

Limited Tools

The Apple Vision Pro makes use of simple hand gestures to control apps and windows, and one can also use typical keyboards and mice with no problem at all. Beyond these, however, this kind of spatial computing lags behind the different tools that are already available and in wide use on desktop computers and laptops. Tools that take personal computing beyond the typical office work of preparing slides, typing documents, or even editing photos. Tools that empower creators who design both physical products and the digital experiences that will fill this spatial computing world.

A stylus, for example, is a common tool for artists and designers, but unless you’re used to non-display drawing tablets, a spatial computing device will only get in the way of your work. While having a 3D model that floats in front of you might be easier to look at compared to a flat monitor, your fingers will be less accurate in manipulating points and edges compared to specialized tools. Rather than deal breakers, these are admittedly things that can be improved over time. But at the launch of the Apple Vision Pro, spatial computing applications might be limited to those more common use cases, which makes it feel like a luxurious experiment.

Physical Strain

Just as our minds are not used to it, our bodies are even less accustomed to the idea of wearing headsets for long periods of time. Apple has made the Vision Pro as light and as comfortable as it can, but unless it’s the size and weight of slightly large eyeglasses, it will never really be that comfortable. Companies have been trying to design such eyewear with little success, and we can’t really expect them to make a sudden leap in just a year’s time.

Other parts of our bodies might also feel the strain over time. Our hands might get sore from all the hand-waving, and our eyes could feel even more tired with the high-resolution display so close to our retinas. These health problems might not be so different from what we have today with monitors and keyboards, but the ease of use of something like the Vision Pro could encourage longer periods of exposure and unhealthy lifestyles.

Accessibility

As great as spatial computing might sound for most of us, it is clearly made for the majority of able-bodied and clear-seeing people. Over the years, personal computing has become more inclusive, with features that enable people with different disabilities to still have an acceptable experience, despite some limitations. Although spatial computing devices like the Vision Pro do make it easier to use other input devices such as accessibility controllers, the very design of headsets makes them less accessible by nature.

Affordability

The biggest drawback of the first commercial spatial computing implementation is that very few people will be able to afford it. The prohibitive price of the Apple Vision Pro marks it as a luxury item, and its high-quality design definitely helps cement that image even further. This is nothing new for Apple, of course, but it does severely limit how spatial computing will grow. Compared to more affordable platforms like the Meta Quest, it might be seen as something that benefits only the elite, despite the technology having even more implications for the masses. That, in turn, is going to make people question whether the Vision Pro would be such a wise investment, or whether they should just wait it out until prices become more approachable.

The post 5 Ways Spatial Computing Will Succeed and 5 Ways It Will Flop first appeared on Yanko Design.

Apple Vision Pro vs. Meta Quest Pro: The Design Perspective

Apple finally took off the veils from its much-anticipated entry into the mixed reality race, and the Internet was unsurprisingly abuzz with comments on both sides. Naturally, comparisons were made between this shiny newcomer and the long-time market leader, which is now Meta, whether you like it or not. Given their already tense relationship, the launch of the Apple Vision Pro only served to increase the rivalry between these frenemies. It’s definitely not hard to paint some drama between the two tech giants vying for the same mixed reality or spatial computing market, whichever buzzword you prefer to use. But is there really direct competition between these two products, or do they have very different visions with almost nothing in common except for having to put a screen over our eyes? We take a deeper look into the Apple Vision Pro and the Meta Quest Pro to see where they differ not only in their design but also in their vision.

Designer: Apple, Meta

What is the Meta Quest Pro

Let’s start with the older of the two, one that dates back to the time when Facebook was also the name of the company. Originally created by Oculus, the Quest line of VR headsets soon bore the Meta name, though not much else has changed in its core focus and the way it works. In a nutshell, the Meta Quest Pro, along with its siblings and predecessors, falls under the category of virtual reality systems, which means it gives you a fully enclosed experience confined within virtual walls. It practically blocks off the rest of the real world while you’re wearing it, though the Quest Pro now has a “passthrough” feature that lets you see the world around you through the headset’s cameras; the quality, however, is definitely lower than what your eyes could naturally see.

In terms of product design, the Quest Pro doesn’t stray too far from the typical formula of consumer electronics, which is to say that there’s plenty of plastic material all around. To be fair, Meta aimed to make the Quest hardware more accessible to more people to help spread its adoption, so it naturally had to cut a few corners along the way. The choice of materials was also made to lighten the gear that might be sitting on your head for hours, but it also doesn’t remove the less-than-premium feel, nor does it completely alleviate that heft.

To its credit, the design of the Quest Pro does help make the headset feel a little less burdensome by balancing the weight between the front and back parts. While the front has most of the hardware and optics that make the Quest Pro work, the back has the battery that powers the device. Having that battery present still adds to the overall weight of the machine, but Meta opted to prioritize mobility and convenience over lightening the load.

What is the Apple Vision Pro

The Apple Vision Pro, in comparison, takes an almost completely opposite approach from the Meta Quest Pro or all other headsets in general. In typical Apple fashion, the company paid special attention to design details that make the hardware both elegant and comfortable. The Vision Pro makes use of premium materials like laminated glass and woven fabrics, as well as heavier ones like aluminum alloy. It’s a device that looks elegant and fashionable, an undeniable part of Apple’s hardware family.

Apple’s answer to the battery problem is both simple and divisive. The Vision Pro simply doesn’t have a battery, at least not on the headset itself. You’d have to connect an external power source via a cable, though that battery can be shoved inside your pocket to get it out of the way. This doesn’t completely hinder mobility, and it even opens the doors for third-party designers to come up with other ideas on how to solve this puzzle.

The biggest difference between Apple’s and Meta’s headsets, however, is in their use and purpose. The Vision Pro is closer to being an augmented reality headset compared to the Quest Pro, blending both virtual and real worlds in a single, seamless view. The Vision Pro also has the ability to block out or at least dim everything aside from the virtual window you’re using, but that’s only a side feature rather than a core function.

VR/AR vs. Spatial Computing

At its most basic, the Meta Quest Pro is really a virtual reality headset while the Apple Vision Pro is designed for a form of mixed reality now marketed as “spatial computing.” To most people, the two are almost interchangeable, but those sometimes subtle differences set these two worlds apart, especially in how they are used. It’s certainly possible to mix and match some features and use cases, but unless they’re specifically designed to support those, the experience will be subpar.

The Meta Quest Pro, for example, is the first in its line that can truly be considered to have AR functionality thanks to its higher-fidelity “passthrough” feature, allowing you to see virtual objects overlaid on top of the real world. That said, its core focus is still on virtual reality, which, by nature, closes off the rest of the world from your sight. Looking at the world through cameras is really only a stopgap measure and can be a little bit disorienting. That’s not even considering how most of the Quest ecosystem’s experiences happen in virtual reality, including the use of “normal” computer software, particularly ones that require a keyboard and a mouse.

On the other hand, the Apple Vision Pro was made specifically for mixed reality, particularly spatial computing, where the real and the digital are blended seamlessly. In particular, it puts applications, including familiar ones from macOS and iOS, in floating windows in front of you. visionOS’s special trick is to actually have the real world affect those virtual objects, from having them cast shadows to tweaking the audio to sound as if it’s bouncing off the furniture in the room. The Vision Pro can emulate the enclosed view of a VR headset by darkening everything except the virtual window you’re using, but it’s unavoidable that you’ll still see some of the real world “bleeding” through, especially in bright ambient light.

The Vision Pro’s and visionOS’s capability to blend the real and the virtual is no small feat. Not only does it enable you to use normal applications with normal computer peripherals, it also makes better use of real-world space. It lets you, for example, assign specific applications and experiences to parts of the house. Apple’s technologies also create more natural-looking interactions with people, even if your actual body parts are invisible or even absent. All these don’t come without costs, though, and it remains to be seen if people will be willing to pay that much for such a young technology.

Controls and Interaction

The Meta Quest Pro hails from a long line of VR and AR headsets, and nowhere is this more obvious than in the way you interact with virtual objects. The headset is paired with two controllers, one for each hand, which are pretty much like joysticks with buttons and motion sensors. Make no mistake, the technology has come a long way and you no longer need to have external beacons stationed elsewhere in the room just to make the system aware of your location or that of your hands. Still, holding two pieces of plastic all the time is a very far cry from how we usually manipulate things in the real world or even from the way we use computers or phones.

Apple may have achieved the holy grail of virtual computing with its more natural input method of using hand gestures without controllers or even gloves. There’s still a limited vocabulary of gestures available, but we’re almost used to that given how we have been using touch screens for the past decade or so. At the same time, however, the Vision Pro doesn’t exclude the use of more precise input instruments, including those controllers, if necessary. The fact that you can actually see real objects makes it even easier to use any tool, which expands the Vision Pro’s uses considerably.

Philosophy and Vision

Although it’s easy to paint the Apple Vision Pro and Meta Quest Pro as two sides of the same eXtended Reality (XR) coin, the philosophies that drive their design are almost as opposed to each other as the companies themselves are. Meta CEO Mark Zuckerberg pretty much said as much while downplaying the Vision Pro’s innovations. In a nutshell, he doesn’t share Apple’s vision of the future of computing.

It shouldn’t come as a surprise that Zuckerberg’s vision revolves around social experiences, something that might indeed be better served by a fully virtual reality. Not only does it make out-of-this-world experiences like the Metaverse possible, it can also make inaccessible real-world places more accessible to groups of people. Meta’s marketing for the Quest Pro mostly revolves around fun and engaging experiences, content consumption, and a bit of creativity on the side.

The Apple Vision Pro, on the other hand, seems to be about empowering the individual by breaking computing free from the confines of flat and limited screens. There are, of course, features related to connecting with other people, but most of the examples have been limited to FaceTime chats more than huddling around a virtual campfire. It has already been noted repeatedly how Apple’s presentation was bereft of any mention of social media, which some have taken as a knock against Facebook. Of course, social media is now an unavoidable part of life, but it exists only as just another app in visionOS rather than as a core focus.

Ironically, the Vision Pro is perhaps even more social than the Quest Pro, at least as far as more natural connections are concerned. Instead of fun yet comical avatars, people will get to see a life-like semblance of your bust during meetings, complete with eye movements and facial expressions. And when someone needs your attention in the meatspace, the Vision Pro will project your eyes through the glass, making sure that the other person knows and feels that you’re actually paying attention to them.

Pricing

It’s hard to deny how impressive all the technologies inside the Vision Pro are, and it’s easy to understand why Apple took this long to finally let the cat out of the bag. As mentioned, however, these innovations don’t come without a cost, and in this case, it is a very literal one. Right off the bat, Apple’s inaugural spatial computing gear is priced at $3,499, making it cost twice as much as the average MacBook Pro. It might be destined to replace all your Apple devices in the long run, but it’s still a very steep price for an unproven piece of technology.

The Meta Quest Pro is, of course, just a third of that, starting at $1,000. Yes, it uses less expensive materials, but its technologies are also more common and have stood the test of time. The Quest platform has also gone through a few iterations of polish, with developers creating unique applications that play to the hardware’s strengths. That said, although the Quest Pro sounds more dependable, insider insights at Meta have painted a somewhat uncertain future for the company’s Metaverse ambitions. Apple’s announcement might then serve to light a fire under Meta’s seat and push it to pick up the pace and prove that its vision is the right one.

Final Thoughts

As expected of the Cupertino-based company, Apple turned heads when it announced the Vision Pro. It exceeded expectations not just because of the quality of its design but also because of the ambitious vision that Apple revealed for the next wave of computing. Right now, it may all sound novel and gimmicky, and it will take some time before the technology truly takes root and bears fruit. Spatial computing has the potential to truly revolutionize computing, but only if it also becomes more accessible to the masses.

The Vision Pro isn’t a death knell for the Meta Quest but more of a wake-up call. There will definitely be a need for an alternative to Apple’s technologies, especially for those who refuse to live in that walled garden. Meta definitely has a lot of work to do to reach the bar that Apple just raised. Whether those alternatives come from Meta or from other vendors, there’s no doubt that the extended reality market just burst to life with a single “One More Thing” from Apple.

The post Apple Vision Pro vs. Meta Quest Pro: The Design Perspective first appeared on Yanko Design.

What is Spatial Computing: Design in Apple’s Vision of the Future

Apple made waves earlier this month when it finally revealed its long-awaited foray into the world of mixed or extended reality. That the company has had its eyes on this market is hardly any secret. In fact, the delayed (at least by market standards) announcement has had some wondering if it was all just wishful thinking. At WWDC 2023, Apple definitely showed the world that it means serious business, perhaps too serious even. The Apple Vision Pro headset itself is already a technological marvel, but in typical Apple fashion, it didn’t dwell too much on the specs that would make many pundits drool. Instead, Apple homed in on how the sleek headset almost literally opens up a whole new world and breaks down the barriers that limited virtual and augmented reality. More than just the expensive hardware, Apple is selling an even more costly new computing experience, one that revolves around the concept of “Spatial Computing.” But what is Spatial Computing, and does it have any significance beyond viewing photos, browsing the Web, and walking around in a virtual environment? As it turns out, it could be a world-changing experience, virtually and really.

Designer: Apple

Making Space: What is Spatial Computing?

Anyone who has been keeping tabs on trends in the modern world will probably have already heard about virtual reality, augmented reality, or even extended reality. Although they sound new to our ears, their origins actually go far, far back, long before Hollywood had even gotten a whiff of them. At the same time, however, we’ve been hearing about these technologies so much, especially from certain social media companies, that you can’t help but roll your eyes at “yet another one” coming our way. Given all the hype, it’s certainly understandable to be wary of all the promises that Apple has been making, but that would be underselling the concept of what makes Spatial Computing really feel like THE next wave in computing.

It’s impossible to discuss Spatial Computing without touching base with VR and AR, the granddaddies of what is now collectively called “eXtended Reality” or XR. Virtual Reality (VR) is pretty much the better-known of the two, especially because it is easier to implement. Remember that cardboard box with a smartphone inside that you strap to your head? That’s pretty much the most basic example of VR, which practically traps you inside a world full of pixels and intangible objects. Augmented Reality (AR) frees you from that made-up world and instead overlays digital artifacts on real-world objects, much like those Instagram filters everyone seems to love or love to hate. The catch is that these are still intangible virtual objects, and nothing you do in the real world really changes them. Mixed Reality (MR) fixes that and bridges the two so that a physical knob can actually change some virtual configuration or a virtual switch can toggle a light somewhere in the room.

In that sense, Spatial Computing is the culmination of all these technologies but with a very specific focus, which you can discern from its name. In a nutshell, it turns the whole world into your computer, making any available space into an invisible wall on which you can hang your apps’ windows. Yes, there will still be windows (with a small “w”) because of how our software is currently designed, but you can hang up as many as you want in the available space you have. Or you can just have one super gigantic video player taking up your vision. The idea also makes use of our brain’s innate ability to associate things with spaces (which is the theory behind the “Memory Palace”) to have us organize our room-sized computer desktop. In a sense, it makes the computer practically invisible, allowing you to directly interact with applications as if they existed physically in front of you, because they practically do.

Apple Reality

Of course, you could say that even Microsoft’s HoloLens already did all that. What makes Spatial Computing and Apple’s implementation different is how the virtual and the real affect each other, much like in mixed reality. There is, for example, the direct way we can control the floating applications using nothing but our own bodies, whether it’s with hand gestures or even just the movement of our eyes. This is the fulfillment of all those Minority Report fantasies, except you don’t even need to wear gloves. Even your facial expressions can have an effect on your FaceTime doppelganger, a very useful trick since you won’t have a FaceTime camera available while wearing the Apple Vision Pro.

Apple’s visionOS Spatial Computing, however, is also indirectly affected by your physical environment, and this is where it gets a little magical and literally spatial. According to Apple’s marketing, your virtual windows will cast shadows on floors and walls, and they’ll be affected by ambient light as well. Of course, you’ll be the only one who sees those effects, but they make the windows and other virtual objects feel more real to you. The Vision Pro will also dim its display to mimic the effect of dimming your lights when you want to watch a movie in the dark. It can even analyze surrounding objects and their textures to mix the audio so that it sounds like it’s really coming from all directions and bouncing off those objects.

The number of technologies to make this seamless experience possible is quite staggering; that’s why Apple didn’t focus too much on the optics, which is often the key selling point of XR headsets. From the sensors to the processors to the AI that interprets all that data, it’s no longer surprising that it took Apple this long to announce the Vision Pro and its Spatial Computing. It is, however, also its biggest gamble, and it could very well ruin the company if it crashes and burns.

Real Design

Spatial Computing is going to be a game-changer, but it’s not a change that will happen overnight, no matter how much Apple wants it to. This is where computing is heading, whether we like it or not, but it’s going to take a lot of time as well. And while it may have “computing” in its name, its ramifications will impact almost all industries, not just entertainment and, well, computing. When Spatial Computing does take off, it will even change the way we design and create things.

Many designers are already using advanced computing tools like 3D modeling software, 3D printers, and even AI to assist their creative process. Spatial Computing will take it up a notch by letting designers have a more hands-on approach to crafting. Along with “digital twins” and other existing tools, it will allow designers and creators to iterate over designs much faster, letting them measure a hundred times and print only once, saving time, resources, and money in the long run.

Spatial Computing also has the potential to change the very design of products themselves, but not in the outlandish way that the Metaverse has been trying to do. In fact, Spatial Computing flips the narrative and gives more importance to physical reality rather than having an expensive, one-of-a-kind NFT sneaker you can’t wear in real life. Spatial Computing highlights the direct interaction between physical and virtual objects, and this could open up a new world of physical products designed to interact with apps or, at the very least, influence them by their presence and composition. It might be limited to what we would consider “computing,” but in the future, computing will pretty much be the way everyone will interact with the world around them, just like how smartphones are today.

Human Nature

As grand as Apple’s Vision might be, it will be facing plenty of challenges before its Spatial Computing can be considered a success, the least of which is the price of the Vision Pro headset itself. We’ve highlighted those Five Reasons Why the Apple Vision Pro Might Fail, and the biggest reason will be the human factor.

Humans are creatures of habit as well as tactile creatures. It took years, maybe even decades, for people to get used to keyboards and mice, and some people struggle with touch screens even today. While Apple’s Spatial Computing promises the familiar controls of existing applications, the way we will interact with them will be completely gesture-based and, therefore, completely new. Add the fact that even touch screens give our fingers something to feel, and you can already imagine how alien those mid-air hand gestures might be for the first few years.

Apple surely did its due diligence in ergonomic and health studies and designs, but it’s not hard to see why this won’t become the most common way people do computing anytime soon, even if the Vision Pro were dirt cheap. Granted, today’s computers and mobile devices are hardly ergonomic by design, but plenty of solutions have been developed for them by now. Spatial Computing is still uncharted territory, even after VR and AR have long blazed a trail. It will take time for our bodies to adjust before Spatial Computing becomes second nature, and Apple will have to stay the course until then.

Final Thoughts

As expected, Apple wasn’t content to announce just another AR headset to join an uncertain market. The biggest surprise was its version of Spatial Computing, formally marketed as visionOS. Much of what we’ve seen is largely marketing and promises, but this is Apple we’re talking about. It might as well be reality, even if it takes a while to fully happen.

Unlike the entertainment-centric VR or the almost ridiculous Metaverse, Spatial Computing definitely feels like the next evolution of computing that will be coming sooner rather than later. It’s definitely still at an early stage, even if the seeds were planted nearly two decades ago, but it clearly shows potential to become more widely accepted because of its more common and general applications. It also has the potential to change our lives in less direct ways, like changing the way we learn or even design products. It’s not yet clear how long it will take, but it’s not hard to see how Apple’s Vision of the future could very well be our own.

The post What is Spatial Computing: Design in Apple’s Vision of the Future first appeared on Yanko Design.

Nothing Voyage (1) is an outdoor mixed-reality headset concept with the Phone (1) inspired Glyph Interface

Maybe AR/VR isn’t meant for homes… this conceptual pair of Nothing MR goggles transform the outdoors, immersing you in new worlds while keeping you aware of your current one.

Dubbed the Voyage (1), this ski-goggle-shaped headset enriches outdoor experiences, bringing you into a new world. Most MR devices find themselves being used in highly technical fields like medicine or engineering – the Voyage (1) doesn’t take that approach. Instead, it finds the ‘killer app’ of the MR world, just like health monitoring became the ‘killer app’ of the Apple Watch. Quite like how Pokemon GO used AR to push people outside their homes, the Voyage (1) enables people to experience a new reality layered over their own existing reality. It transforms mundane streets into foreign destinations, a boring highway into a mountainous drive, and a bland sky into an aurora-filled one in the Arctic Circle.

Designer: Junha Kam

The Voyage (1) sits over your eyes, with a sleek design that doesn’t weigh you down or look awkward on your face. A built-in Glyph Interface keeps you aware of your surroundings while also making the people around you aware of your movements, and depending on your use, a pair of handheld controllers lets you navigate your MR experience.

The glasses are unusually sleek, in a way that keeps in line with Nothing’s catalog of products. The only thing that stands apart is the lack of a transparent housing anywhere on the device.

The Voyage (1) is designed to be worn while moving. The mixed-reality ability gives you pass-through features that let you see the world around you so you’re fully aware of your surroundings, and the Glyph Interface ends up being an indicator of sorts, letting others know where you’re looking or turning as you cycle, skateboard, hoverboard, or jog with the MR headset on.

Although primed for outdoor use, the headset’s made to be worn indoors too, with a pair of controllers that help you use the Voyage (1) like a traditional VR headset for browsing the web, playing games, or engaging in indoor-based VR experiences.

Ultimately, the Voyage (1) tries to do what every metaverse company’s been trying to do too – figure out the killer app for AR/VR/MR experiences. Mark Zuckerberg and Tim Cook have been pushing the metaverse pretty hard for the past 5 years (Meta’s focus has been on VR, Apple’s on AR), but even though these devices have existed for quite a while now, they still feel like a novelty. Most people who buy an Oculus Quest end up letting it sit on a shelf and gather dust after 2-3 months of intense use. Maybe with a focus on reinventing the outdoors, the Voyage (1) will be able to help boost mass adoption for the metaverse. Sadly though, this device is entirely conceptual – but if Carl Pei is reading this…


This mixed reality headset gamifies your fitness regime, triggers healthy habits in a fun way

Mixed reality is transforming the way we perceive and experience the world around us. We can virtually step into an immersive environment that feels almost like the real thing. One useful application of this technology is health gamification. For those who are unaware, gamification is the process of incorporating game elements such as points, rewards, and achievements, into non-game contexts.

The Portalverse VR headset concept is a thoughtful take on how virtual reality can be used to promote health and wellness. It’s a sleek and lightweight VR headset designed for comfort and equipped with advanced sensors that track head and eye movements, letting users interact with the virtual environment naturally.

Designer: Marko Filipic and Mati Papalini

One key feature of Portalverse VR is its ability to gamify health and wellness in one’s daily routine with an avatar that behaves as a real person would. By gamifying these activities, the headset and its accompanying interface (smart mirror) make for an engaging and motivating regime for health-conscious people. The designers envision this headset to be used at home with the Portal smart mirror or outdoors using a smartwatch.

You can use the mixed reality wearable to participate in a virtual exercise class, wherein real-time data is used to provide feedback on form and technique. As the user (via the avatar) progresses through the exercise routine, they earn points and unlock achievements for reaching certain milestones, such as completing a set number of repetitions. The accompanying app customizes the coaching levels and gives important feedback based on the user’s performance.
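The points-and-milestones logic described here is simple enough to sketch in code. The following Python is purely illustrative — the point values, badge names, and class structure are all invented, since the concept doesn’t specify Portalverse’s actual mechanics:

```python
# Hypothetical sketch of milestone-based fitness gamification.
# Point values and badge names are invented for illustration only.

MILESTONES = {10: "Warm-Up", 50: "Half Century", 100: "Centurion"}

class WorkoutSession:
    def __init__(self, points_per_rep=5):
        self.points_per_rep = points_per_rep
        self.reps = 0
        self.points = 0
        self.achievements = []

    def log_rep(self):
        """Record one completed repetition and award points."""
        self.reps += 1
        self.points += self.points_per_rep
        # Unlock an achievement when a rep-count milestone is reached.
        badge = MILESTONES.get(self.reps)
        if badge:
            self.achievements.append(badge)

session = WorkoutSession()
for _ in range(50):
    session.log_rep()
print(session.points)        # 250
print(session.achievements)  # ['Warm-Up', 'Half Century']
```

The real appeal of gamification is exactly this kind of immediate, visible feedback loop: every rep changes a number the user can see, and milestones punctuate the grind with small rewards.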

Another example of how Portalverse VR can be used for health gamification is meditation. The headset transports the user to a peaceful virtual environment, such as a beach or forest, while the app guides them through a meditation routine, teaching proper breathing and relaxation techniques. As the user progresses through the routine, they could earn points and rewards for achieving deeper levels of relaxation and mindfulness.

By using advanced VR technology to gamify health and wellness activities, headsets like the Portalverse VR could make it more engaging and motivating for users to adopt healthy lifestyles. VR startups better get some inspiration from this concept mixed reality headset!


ZTE nubia NeoVision Glass AR eyewear hides in plain sight as oversized sunglasses

Although it does have the word “mobile” in it, MWC has long ceased to just be about smartphones and tablets. These days, anything you can pick up and use on the go is labeled as mobile, sometimes including laptops. When it comes to portability, however, wearables have become quite the fad, and this category isn’t just limited to smartwatches or “hearables” like earbuds and hearing aids. One strong presence at MWC 2023 this year seems to be headsets and eyewear, particularly those designed for augmented and virtual reality applications. Not to be left behind, ZTE’s nubia is showcasing its first-ever AR eyewear, and it seems to be trying to be a bit more fashionable at the expense of a bit of freedom of movement.

Designer: ZTE

As far as mixed reality headgear and eyewear are concerned, the trend seems to be going in the direction of cramming all the necessary hardware inside the device, unlike the first-gen Oculus Quest and HTC Vive headsets that needed to be connected to a powerful PC with a cable. A standalone headset does have tradeoffs, though, especially when you consider the weight of the hardware and the built-in battery. That’s why some devices still try to aim for a completely lightweight and comfortable design, even if it means offloading the brunt of the work to external devices.

The new ZTE nubia NeoVision Glass is one such type of device. It’s incredibly lightweight at 79g, but it’s not lacking when it comes to display quality. It boasts Micro-OLED screens with 3500 PPI and a binocular resolution of 1080p, giving the wearer the equivalent of a 120-inch screen floating before their eyes. It doesn’t skimp on the audio either, with two omnidirectional speakers and a cyclonic sound tank. All in all, it promises a full range of multimedia experiences for both your ears and your eyes.
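ZTE doesn’t state the viewing distance behind that “120-inch screen” figure, but the geometry behind such claims is easy to work out. Assuming — purely for illustration, since the actual field of view isn’t published here — a 45° horizontal FOV and a 16:9 virtual screen:

```python
import math

# Illustrative sketch only: ZTE does not publish the viewing distance or
# FOV behind its "120-inch screen" claim. The 45-degree horizontal field
# of view used below is an assumption, not a spec.

def virtual_screen_distance(diagonal_in, aspect=(16, 9), h_fov_deg=45):
    """Distance at which a flat screen with the given diagonal would
    subtend the given horizontal field of view."""
    w_ratio, h_ratio = aspect
    diag_m = diagonal_in * 0.0254  # inches to meters
    width_m = diag_m * w_ratio / math.hypot(w_ratio, h_ratio)
    # Half the width subtends half the FOV: w/2 = d * tan(fov/2)
    return width_m / (2 * math.tan(math.radians(h_fov_deg) / 2))

# A 120-inch 16:9 screen filling a 45-degree FOV sits about 3.2 m away.
print(round(virtual_screen_distance(120), 1))
```

In other words, “120-inch screen” marketing is shorthand for “the image fills the same angle as a 120-inch TV at roughly living-room distance” — the actual perceived size depends entirely on the optics’ field of view.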

The nubia NeoVision Glass also advertises high compatibility with a wide range of devices, including phones, computers, and consoles. It’s “plug and play,” which suggests that it doesn’t come with its own computer inside, though ZTE wasn’t exactly clear on that part. It does mean that you can use any device or platform you want, though it also means you’ll be rooted to the spot near that device unless it’s something you can carry around.

ZTE does, however, pay special attention to both the looks of the eyewear and its accessibility. Magnetic lenses make it trivial to swap out different sunglasses designs, and it supports zero to 500-degree myopic adjustment for those who need to wear prescription glasses. It’s still relatively bulky compared to typical sunglasses, but few will realize that you’re viewing the world through a different set of lenses, figuratively and literally.


Samsung XR wearable could become an industry response to Apple’s MR headset

Samsung just announced quite a number of new devices, including its usual Galaxy S flagship smartphone trio. While this is normal fare for Samsung this time of the year, it made a few choice statements that suddenly got heads turning and, to some extent, scratching. Samsung practically revealed that it is working on an “extended reality” or XR wearable device, pretty much a headset, something that it hasn’t done in half a decade. While it was mostly an announcement of intent rather than a teaser of an actual product, it name-dropped a few big names in the tech industry as its partners in this endeavor. While the fact that Samsung is again making a headset isn’t really a world-shattering revelation, the timing of all these hints seems to be a little bit too convenient not to put it in light of Apple’s own upcoming mixed reality device.

Designer: Samsung (via The Washington Post)

Samsung is really no stranger to such headsets and is probably too familiar with their problems as well. It started out with the smartphone-powered GearVR, which it worked on together with pre-Facebook Oculus back in 2015. And then there was the HMD Odyssey which was one of the few Windows Mixed Reality headsets that launched and sputtered out. In both cases, the tech giant has taken a step back along with the rest of its peers, making this announcement all the more intriguing and suspicious.

These days, there are very few notable players in the VR and AR space, with Meta (formerly known as Facebook) and HTC Vive still competing for top slots. Microsoft has pretty much forgotten about its HoloLens, and Google is being typically Google-ish about its remaining ARCore platform. Surprisingly, these are the very same companies that Samsung will be working with for its XR wearable, bringing the who’s who of Big Tech together with a single mission.

Details about the device itself are scant, but Samsung did let it out that it will be powered by a Qualcomm chipset and run an unannounced version of Android made specifically for headsets. More important than the hardware, though, Samsung’s name-dropping is meant to suggest that it is establishing a more stable ecosystem before it actually launches the product. The reason why many attempts at this niche market failed was that they were too focused on the product without an ecosystem giving it a reason to exist in the first place.

Apple isn’t going to have that problem when it launches its own MR device this spring, given how all its products pretty much live within Apple’s universe. Its rivals, however, don’t have something like it and will have to join forces to deliver something worthwhile. Of course, these companies, Apple included, still need to make a convincing argument about why you would want to wear a screen on your face. And as these same companies experienced, that’s not a particularly easy proposition to sell.


Lenovo goes beyond computing with Tab Extreme, Smart Paper, and Project Chronos at CES 2023

We’ve seen plenty of new laptops and desktops so far at CES 2023, especially since silicon giants Intel, AMD, and NVIDIA have all announced their newest, shiniest, and most powerful processors to date. While these cover the majority of computing needs, especially gaming, they aren’t the only personal computers available to us today. Thanks to advancements in technology, we have a variety of devices available today that make computing more mobile, more efficient, and even more personal. At CES 2023, Lenovo is showing off how it goes beyond regular computers with its most powerful tablet yet, a new e-Paper solution, and the next step in bringing mixed reality to your living room.

Designer: Lenovo

Lenovo Tab Extreme

There was a point in time when it seemed that tablets were on the way out. As smartphones became bigger, the need for tablets with big screens became almost pointless. Recently, however, the tablet has taken on a new mission as a true mobile computer, almost like a laptop replacement, and Lenovo is taking that to the extreme with its biggest and most powerful tablet yet, clearly designed for more than just watching videos.

Of course, it’s also a delight to do that on the Lenovo Tab Extreme, thanks to its spacious 14.5-inch 3K 120Hz OLED screen and eight high-performance JBL 4-channel speakers. The tablet shines brightest, however, when used for more than just consumption but also for creation, whether it’s a work of art or a work document. The MediaTek Dimensity 9000, 12GB of RAM, and Android 13 all work together to deliver this productivity experience on the go, and the gigantic 12,300mAh battery ensures you’ll have as little downtime as possible.

Beyond just the specs, the Lenovo Tab Extreme is designed to be flexible and stylish in any situation. A magnetic dual-mode stand makes it convenient to prop up the tablet horizontally or vertically, while the optional dual-hinge keyboard lifts it up for a more ergonomic position while you type your next great novel. The tablet can even be used as a digital sketchpad by plugging it in via its DP-in USB-C port or as a second monitor through the DP-out port. The Lenovo Tab Extreme will be available later this year with a starting price of $1,199.99.

Lenovo Smart Paper

Not everyone needs a full-blown tablet, though. There are some who just need the digital equivalent of a notebook, one that blends the conveniences of mobile tech with the universality of pen and paper. Fortunately, there is a new breed of devices that promise exactly that, and the new Lenovo Smart Paper takes the lead in combining digital and analog in a smart and meaningful way.

Powered by a 10.3-inch E-Ink display, the Lenovo Smart Paper presents the perfect size for a paper notebook, one that does away with wasteful paper without giving up on the experience of using a pen. It’s all digital, of course, but the feeling and texture of guiding the Smart Paper Pen over the e-Paper display are as close as you can get to the real thing. And like a normal pen or pencil, you don’t need to worry about batteries or charging the pen, either.

The device is specifically designed to make taking notes not only enjoyable but also efficient. With two integrated microphones, you can record a meeting or lecture while you’re scribbling down notes. And when it’s time to review those notes, simply select the written text to hear a playback of the recorded audio to help jog your memory. The Lenovo Smart Paper is expected to launch sometime this year for $399.99.

Lenovo Project Chronos

Most of the computing devices we have today come in the form of something we can touch, be it a laptop, a tablet, or a smartphone. The future of computing, however, might be less tangible. Buzzwords aside, the metaverse and mixed reality represent an inescapable future, but it’s a future that’s hindered by clunky headsets and devices. Lenovo is taking a plunge into this still-gestating market with an innovative concept device that removes the need to weigh your body down just so that you can move your virtual avatar.

Project Chronos is basically a box with a camera that can keep track of your body movement to control a virtual character, often your avatar. It uses advanced depth cameras and algorithms to recreate your movement within a 3D environment without having to wear glasses or mocap sensors. It can even track your facial expressions so that your avatar can truly express your emotions, just as your own body can sometimes betray you in the physical world.

Despite that seemingly magical capability, Project Chronos is designed for home use, and its simple and discreet design can easily blend with the rest of your entertainment system. You simply connect the box to a TV or monitor, and you’re good to go. And once you’re done, you can rotate the camera down to ensure your privacy. This gear-free mixed reality device opens a whole new world of applications, from creating content with a VTuber avatar to getting personalized real-time coaching from a fitness expert. The Lenovo Project Chronos, however, is still a concept and work in progress, and its full name and capabilities will be disclosed when it’s ready to launch.


Camera-maker Canon enters the metaverse game with their mixed-reality headset MREAL X1

Canon seems to be following competitor Sony’s lead by betting big on the metaverse.

There was a time when Canon dominated the camera space. Now, with every smartphone having its own computationally-optimized camera system, it seems like Canon’s parade is getting a little rained on. The company’s finding new avenues for its imaging technologies and systems, however, and it seems like the metaverse may just be the best new territory. At CES this year, Canon announced a few VR/AR-focused devices, a notable one being the MREAL X1, the mixed-reality headset and technology that Canon is betting on to revolutionize a variety of sectors, like retail, exhibitions, medicine, and other experiences. “MREAL is unlike anything Canon has ever developed, a premium visualizer/simulator that helps account for limits of scale, perception, analysis, and participation, and provides superb, almost life-like image clarity and color accuracy,” Canon mentions.

Designer: Canon

The MREAL X1 is a relatively slim headset that doesn’t cover your entire peripheral vision. Instead, it presents you with virtual elements just within your area of focus (58° x 60°), so you can see important elements in front of you, rather than all around you. Elements within the virtual space are interactive too, as the video above demonstrates how customers can visualize cars without there actually being a physical car in the space. They can tap on the car to have it change color, and even sit inside it, experiencing the interiors in an incredibly immersive way.

The MREAL X1 is currently in its market research phase, and it isn’t entirely clear if it will ever be released as a consumer device or if it’ll be reserved for enterprise use. Consumers, however, can get a taste of the MREAL X1 at Canon’s booth at CES. The company collaborated with M. Night Shyamalan to create a mixed-reality experience around the filmmaker’s upcoming movie Knock at the Cabin. “Visitors to the booth will be able to experience a break-in scene from the movie Knock at the Cabin as if they are a character in the movie,” Canon says.
