This modular bracelet concept lets you choose how smart you want your jewelry to be

The popularity of the Apple Watch has finally given smartwatches their place in the market, making them understandable and even desirable. Of course, that doesn’t mean that everyone now wants a smartwatch, especially those who prefer mechanical watches or have different aesthetic tastes. Unfortunately, the majority of wrist-worn smart trackers seem to be made with sporty and rugged designs in mind. Given hardware requirements, that’s not exactly surprising, but that shouldn’t stop designers from imagining what’s possible. One such dream is reflected in this minimalist yet distinctive bracelet that throws all smart wearable design conventions out the window, offering a modular piece of jewelry that is smart in more ways than one.

Designers: Akasaki Vanhuyse, Astrid Vanhuyse

If you remove the actual time-keeping function of a smartwatch or a fitness tracker, all you’re really left with are the sensors that actually do the work of keeping tabs on different metrics of your health, directly or indirectly. A display isn’t even necessary since you can always check those figures on a smartphone. In fact, a display might even be detrimental because of the distractions it pushes your way or how it clashes with some fashion styles. Smartwatch designs are primarily constricted by hardware such as displays and big batteries, but what if you could be free of those restrictions?

That’s what the BEAD concept seems to be proposing, offering the same health and wellness monitoring functionalities but in a form that is a bit more universal and, at the same time, more personal. At the heart of the design are the beads, actually tiny cylinders that each hide a single sensor, such as a pulse oximeter or an accelerometer, that tracks a specific biometric. Each bead is an independent unit, free from displays or large batteries, performing a single task and performing it to perfection.

The idea is that you can combine any number of these beads on a string or wire to achieve the same collective effect as a fitness tracker. You wear it around your wrist like a bracelet, held together at the ends by magnets in the shape of half-spheres. The wire is white, plain, and unadorned, which puts a bigger visual focus on the beads. The beads themselves carry a brushed metal finish that helps hide whatever scratches they may incur over time while also giving each one a unique character.

You can add or remove as many of these modular beads as you need, only paying for the functionality you actually use. It also makes repairing broken beads easier, since you only need to replace that single piece. Admittedly, the industrial aesthetic might not appeal to everyone’s tastes, but the concept opens the possibility of using different, perhaps more stylish designs that will truly create a fusion of fashion and technology in a simple smart bracelet.

The post This modular bracelet concept lets you choose how smart you want your jewelry to be first appeared on Yanko Design.

Every Single Sensor inside the Apple Vision Pro and What It’s Individually Designed To Do

For a $3,499 USD device that’s designed to replace your phone, laptop, watch, tablet, television, and even your mouse, you bet that Apple’s Vision Pro is absolutely crammed with sensors that track you, your movements, eyesight, gestures, voice commands, and your position in space. As per Apple’s own announcement, the Vision Pro has as many as 14 cameras on the inside and outside, 1 LiDAR scanner, and multiple IR and invisible LED illuminators to help it get a sense of where you are and what you’re doing. Aside from this, the headset also has a dedicated R1 Apple silicon chip that crunches data from all these sensors (and a few others) to help create the best representation of Apple’s gradual shift towards “Spatial Computing”.

What is “Spatial Computing”?

“Vision Pro is a new kind of Computer,” says Tim Cook as he reveals the mixed reality headset for the very first time. “It’s the first Apple product you look through, and not at,” he adds, marking Apple’s shift to Spatial Computing. What’s Spatial Computing, you ask? Well, the desktop was touted as the world’s first Personal Computer, or PC as we so ubiquitously call it today. The laptop shrank the desktop to a portable format, and the phone shrank it further… all the way down to the watch, which put your personal computer on your wrist. Spatial Computing marks Apple’s first shift away from Personal Computing, in the sense that you’re now no longer limited by a display, big or small. “Instead, your surroundings become a canvas,” Tim summarizes, as he hands the stage to VP of Design, Alan Dye.

Spatial Computing marks a new era of computing where the four corners of a traditional display don’t pose any constraints to your working environment. Instead, your real environment becomes your working environment, and just like you’ve got folders, windows, and widgets on a screen, the Vision Pro lets you create folders, windows, and widgets in your 3D space. Dye explains that in Spatial Computing, you don’t have to minimize a window to open a new one. Simply drag one window to the side and open another. Apple’s VisionOS turns your room and your visual periphery into an OS, letting you create multiple screens/windows wherever you want, move them around, and resize them. Think Minority Report or Tony Stark’s holographic computer… but with a better, classier interface.

How the M2 and R1 Chips Handle Spatial Computing

At the heart of the Vision Pro headset are two chips that work together to blend virtuality and reality seamlessly. The Vision Pro is equipped with Apple’s M2 silicon chip for general computing and multitasking, along with a new R1 chip that’s proprietary to the headset. The R1 works with all the sensors inside and outside the headset to track your eyes, handle control input, and help virtual elements exist convincingly within the real world, doing impressive things like casting shadows on the world around you, changing angles as you move, or fading when someone walks into your frame.

The R1 chip is pretty much Apple’s secret sauce with the Vision Pro. It handles data from every single sensor on the device, simultaneously tracking your environment, your position in it, your hands, and even your eye movements with stunning accuracy. Your eye movements form the basis of how the Vision Pro knows what elements you’re thinking of interacting with, practically turning them into bona fide cursors. As impressive as that is, the R1 also uses your eye data to know what elements of the screen to render, and what not to. Given that you can only focus on a limited area at any given time, the R1 chip knows to render just that part of your visual periphery with crisp clarity, rather than spending resources rendering out the entire scene. It’s a phenomenally clever way to optimize battery use while providing a brilliantly immersive experience. However, that’s not all…
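This gaze-contingent rendering trick is commonly known as foveated rendering, and its core logic can be sketched in a few lines. Everything below, from the function name to the radii, is purely illustrative; Apple hasn’t published the R1’s actual pipeline.

```python
# Illustrative foveated-rendering sketch: render full detail only near the
# gaze point, and progressively less detail further out. The radii and
# quality tiers are invented for demonstration, not Apple's real values.

def render_quality(pixel_x, pixel_y, gaze_x, gaze_y, fovea_radius=200):
    """Return a render scale (1.0 = full detail) based on distance from gaze."""
    dist = ((pixel_x - gaze_x) ** 2 + (pixel_y - gaze_y) ** 2) ** 0.5
    if dist <= fovea_radius:
        return 1.0    # crisp, full-resolution foveal region
    if dist <= fovea_radius * 3:
        return 0.5    # mid-periphery at half detail
    return 0.25       # far periphery rendered cheaply

print(render_quality(960, 540, 960, 540))  # looking right at it -> 1.0
print(render_quality(0, 0, 960, 540))      # far corner -> 0.25
```

The savings come from the last branch: most of the frame falls outside the foveal region, so most pixels are rendered at a fraction of the cost.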

Apple Engineer Reveals the (Scary) Powerful Capabilities of the R1 Chip

A neurotechnology engineer at Apple lifted the veil on exactly how complex and somewhat scary the Vision Pro’s internal tech is. While bound by NDA, Sterling Crispin shared in a tweet how the Vision Pro tracks your eyesight and knows how you’re navigating its interface so flawlessly. Fundamentally, the R1 chip is engineered to be borderline magical at predicting a user’s eye journey and intent. “One of the coolest results involved predicting a user was going to click on something before they actually did […] Your pupil reacts before you click in part because you expect something will happen after you click,” Crispin mentions. “So you can create biofeedback with a user’s brain by monitoring their eye behavior, and redesigning the UI in real time to create more of this anticipatory pupil response.”

“Other tricks to infer cognitive state involved quickly flashing visuals or sounds to a user in ways they may not perceive, and then measuring their reaction to it,” Crispin further explains. “Another patent goes into details about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing and hearing in the background.” Here’s a look at Sterling Crispin’s tweet.

A Broad Look at Every Sensor on the Apple Vision Pro

Sensors dominate the Vision Pro’s spatial computing abilities, and here’s a look at all the sensors Apple highlighted in the keynote, along with a few others that sit under the Vision Pro’s hood. This list isn’t complete, since the Vision Pro isn’t available for a tech teardown, but it includes every sensor mentioned by Apple.

Cameras – The Vision Pro has an estimated 14 cameras that help it capture details inside and outside the headset. Up to 10 cameras (2 main, 4 downward, 2 TrueDepth, and 2 sideways) on the outer part of the headset sense your environment in stereoscopic 3D, while 4 IR cameras inside the headset track your eyes as well as perform 3D scans of your iris, helping the device authenticate the user.

LiDAR Sensor – The purpose of the LiDAR sensor is to use light to measure distances, creating a 3D map of the world around you. It’s used in most self-driving automotive systems, and even on Pro-model iPhones to aid AR and low-light autofocus. On the Vision Pro, the LiDAR sensor sits front and center, right above the nose, capturing a perfect view of the world around you, as well as capturing a 3D model of your face that the headset then uses as an avatar during FaceTime.

IR Camera – An IR camera does the job of a regular camera when a regular camera can’t. IR sensors work in absolute darkness too, giving them a significant edge over conventional cameras. That’s why the headset has 4 IR cameras on the inside, and an undisclosed number of IR cameras/sensors on the outside, to help the device see regardless of lighting conditions. The IR cameras inside the headset do a remarkable job of eye-tracking as well as building a 3D scan of your iris for Apple’s secure OpticID authentication system.

Illuminators – While these aren’t sensors, they play a key role in allowing the sensors to do their job perfectly. The Vision Pro headset has 2 IR illuminators on the outside that flash invisible infrared dot grids to help accurately scan a person’s face (very similar to FaceID). On the inside, however, the headset has invisible LED illuminators surrounding each eye that help the IR cameras track eye movement, reactions, and perform detailed scans of your iris. These illuminators play a crucial role in low-light settings, giving the IR cameras data to work with.

Accelerometer & Gyroscope – Although Apple didn’t mention the presence of these in the headset, it’s safe to assume the Vision Pro has multiple accelerometers and gyroscopes to help it track movement and tilt. Like any good headset, the Vision Pro enables tracking with 6 degrees of freedom, detecting left, right, forward, backward, upward, and downward movement. The accelerometer helps the headset capture these movements, while the gyroscope helps it understand when you’re tilting your head. These sensors, along with the cameras and scanners, give the R1 chip the data it needs to know where you’re standing, moving, and looking.
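Apple hasn’t published the Vision Pro’s sensor-fusion math, but a common way headsets blend these two sensors is a complementary filter: the gyroscope is fast but drifts, the accelerometer is noisy but stable, so each covers the other’s weakness. The sketch below uses made-up readings.

```python
# A hedged sketch of accelerometer/gyroscope fusion for head tilt.
# alpha weights the fast-but-drifting gyro estimate; (1 - alpha) weights
# the noisy-but-drift-free accelerometer tilt. All numbers are illustrative.

def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend an integrated gyro angle with an accelerometer tilt reading."""
    gyro_estimate = prev_angle + gyro_rate * dt  # integrate angular velocity
    return alpha * gyro_estimate + (1 - alpha) * accel_angle

# Head at 10 deg, gyro reads 5 deg/s over a 10 ms frame,
# accelerometer independently measures 10.1 deg of tilt.
angle = complementary_filter(10.0, 5.0, 10.1, 0.01)
print(round(angle, 3))
```

Running this filter every frame keeps the tilt estimate responsive without letting gyro drift accumulate.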

Microphones – The Vision Pro has an undisclosed number of microphones built into the headset that perform two broad activities – voice detection and spatial audio. Voice commands form a core part of how you interact with the headset, which is why the Vision Pro has microphones that let you perform search queries, summon apps/websites, and talk naturally to Siri. However, the microphones also need to perform an acoustic scan of your room, just the way the cameras need to do a visual scan. They do this so that they can match the sound to the room you’re in, delivering the right amount of reverb, tonal frequencies, etc. Moreover, as you turn your head, sounds still stay in the same place, and the microphones help facilitate that, creating a sonic illusion that allows your ears to believe what your eyes see.

Other Key Components

Aside from the sensors, the Vision Pro is filled with a whole slew of tech components, from screens to battery packs. Here’s a look at what else lies underneath the Vision Pro’s hood.

Displays – Given its name, the Vision Pro obviously focuses heavily on your visual sense… and it does so with some of the most incredible displays ever seen. The Vision Pro has two stamp-sized displays (one for each eye), each boasting more pixels than a 4K television. This gives the Vision Pro’s main displays a staggering 23 million pixels combined, refreshing every 12 milliseconds (roughly 83fps). Meanwhile, the outside of the headset has a display too, which showcases your eyes to people around you. While the quality of this display isn’t known, it is a curved OLED screen with a lenticular film in front of it that creates the impression of a 3D display, so people see depth in your eyes, rather than just a flat image.

Audio Drivers – The headset’s band also has audio drivers built into each temple, firing rich, environmentally-responsive audio into your ears as you wear the headset. Apple mentioned that the Vision Pro has dual audio drivers for each ear, which could possibly indicate quality that rivals the AirPods Max.

Fans – To keep the headset cool, the Vision Pro has an undisclosed number of fans that help maintain optimal temperatures inside the headset. The fans are quiet, yet incredibly powerful, cooling down not one but two chips inside the headset. A grill detail on the bottom helps channel out the hot air.

Digital Crown – Borrowing from the Apple Watch, the Vision Pro has a Digital Crown that rotates to summon the home screen, as well as to toggle the immersive environment that drowns out the world around you for a true VR experience.

Shutter Button – The Digital Crown is also accompanied by a shutter button that allows you to capture 3-dimensional photos and videos, that can be viewed within the Vision Pro headset.

Battery – Lastly, the Vision Pro has an independent battery unit that attaches using a proprietary connector to the headset. The reason the headset has a separate battery pack is to help reduce the weight of the headset itself, which already uses metal and glass. Given how heavy batteries are, an independent battery helps distribute the load. Apple hasn’t shared the milliamp-hour capacity of the battery, but they did mention that it gives you 2 hours of usage on a full charge. How the battery charges hasn’t been mentioned either.


This seemingly simple arm patch lets you know how much water to drink and when

It isn’t just our planet that’s mostly made up of water. Our bodies, too, are largely water and require a regular intake of fluids to keep functioning properly. We all know this for a fact, yet few of us take the advice to heart. Those with more active lifestyles are often more conscious of this and hydrate more regularly, but they might not always know the optimal timing and amount of water or other fluids to drink. Without sophisticated equipment to monitor your body’s fluid output, you might not know how much to take in. That’s the kind of information this sensor offers, and it involves nothing more than slapping a patch on your arm before your run or exercise.

Designer: Nix

The common advice is to drink eight glasses or two liters of water a day, which is fine for most cases and individuals. When you live a more active lifestyle, though, especially if you work out often, when and how you hydrate involves more than that simple metric. Making it a bit more complicated is that different people have different hydration needs and rates of dehydration depending on their activity.

Rather than leaving that important health factor up to guesswork or generic figures, the Nix Hydration Biosensor uses science and technology to actually give you a better picture of your body’s hydration state. It’s almost similar to how our smartwatches constantly keep track of our heart rate, except that it uses a different biomarker and part of your body. This simple device sticks to your skin and tracks your fluid and electrolyte loss through the sweat you produce during activities.

There is already equipment that can measure that data, but it usually comes in the form of bulky machines that can only be used inside labs and require connecting multiple sensors to your body. This Hydration Biosensor, in contrast, whittles it all down to a hexagon-shaped pod that clips onto a sweat patch you stick to your arm. Because of the sensor’s lightweight construction and the absence of any wires, the Hydration Biosensor won’t get in the way of any kind of workout or activity you might engage in.

Similar to a smartwatch, the sensor sends its data to a companion device, usually a smartphone, which does the heavy work of analyzing that data to notify you when it’s time to drink up and how much. This might sound like micromanaging, but it’s the kind of efficiency that many athletes live by in order to get the best out of their workout as well as their recovery. At the very least, it will get people more aware of just how important hydration is to our well-being, especially when you start to notice how you get dehydrated more often than not.
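Nix hasn’t published how its app converts sweat data into drink reminders, but the basic bookkeeping it describes, tracking fluid lost versus fluid drunk and nudging you past a threshold, might look something like this. All names and thresholds here are invented for illustration.

```python
# Hypothetical sketch of turning a measured sweat rate into a drink reminder.
# The 300 ml notification threshold is an assumption, not Nix's algorithm.

def fluid_deficit_ml(sweat_rate_ml_per_hr, minutes_elapsed, ml_drunk):
    """Estimated net fluid loss so far during a workout."""
    lost = sweat_rate_ml_per_hr * minutes_elapsed / 60
    return lost - ml_drunk

def should_notify(deficit_ml, threshold_ml=300):
    """Prompt the user once the estimated deficit crosses the threshold."""
    return deficit_ml >= threshold_ml

# Sweating 1.2 L/hour, 30 minutes in, having drunk 250 ml so far:
deficit = fluid_deficit_ml(1200, 30, 250)
print(deficit, should_notify(deficit))
```

The interesting part in the real product is measuring the sweat rate itself; once that number exists, the reminder logic is straightforward arithmetic like the above.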


Sony Mocopi wearable sensors let you control avatars with your whole body

Not everyone might be buying that whole metaverse spiel, but many might have been enamored by the idea of having a virtual version of themselves in certain spaces. Imagining ourselves in a different form inhabiting different worlds goes back farther than VR and AR, but the technologies to enable such an experience haven’t exactly been available until now. Sure, you can already have a Mii or a Bitmoji to represent you today, but having them actually move like you is a completely different thing. For that, your avatar will need to be able to read and copy your body’s movements, and Sony’s latest wearable tech is going to make that as easy as wearing six sensors on your body.

Designer: Sony

Motion capture, or mocap, has been around for decades and is primarily used in the entertainment industry to make 3D models move more realistically. At first, only large studios were able to utilize this technology due to the sheer size and costs of the equipment needed to make it happen. Today, there are more affordable forms of mocap systems, but they’re still way out of reach of ordinary people who just want a virtual avatar to mirror their moves.

Sony’s new mocopi, short for “motion copy,” was designed to cater to this crowd. The entire system is composed of nothing more than six sensors that look like Apple AirTags, as well as five straps and a clip to attach them to different parts of your body. Four sensors go around your wrists and ankles, one clips behind your lower back, and another wraps around your head. As far as hardware goes, that’s really all there is to it.

The magic unsurprisingly happens on the software side, particularly with a companion mobile app that displays a live avatar of your choosing. Using Bluetooth, the app reads the sensors’ motion data and translates it into the avatar’s movement in real-time. The resulting footage can later be used in different applications, like live streaming, VRChat, and more. At launch, the only way to use mocopi is with that smartphone app, but Sony plans to release a software development kit (SDK) so that it can be integrated into other applications as well.
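The pipeline the article describes, six wearables each streaming an orientation that drives a matching avatar joint, can be sketched minimally. The joint names and the quaternion format below are assumptions; Sony’s actual mocopi protocol isn’t public.

```python
# Minimal sketch of a six-point mocap pipeline: each sensor reports an
# orientation quaternion (w, x, y, z), and the app applies it to the
# corresponding avatar bone. Joint names and data format are hypothetical.

AVATAR_JOINTS = {
    "head": None, "hip": None,
    "left_wrist": None, "right_wrist": None,
    "left_ankle": None, "right_ankle": None,
}

def apply_sensor_frame(frame):
    """frame: dict mapping sensor position -> orientation quaternion."""
    for joint, quat in frame.items():
        if joint in AVATAR_JOINTS:
            AVATAR_JOINTS[joint] = quat  # drive the avatar bone directly

# One incoming frame: the head sensor reports a slight turn.
apply_sensor_frame({"head": (0.996, 0.0, 0.087, 0.0)})
print(AVATAR_JOINTS["head"])
```

With only six tracked points, everything between them (elbows, knees, spine) has to be interpolated by the software, which is why such systems trade fidelity for price.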

mocopi isn’t going to be as detailed or as fluid as professional mocap systems, but at around $360, it is significantly more affordable. It’s designed for more casual use, targeting content creators who are more interested in fun ways to express themselves than in professional-grade animated avatars. If it takes off, it could at least make affordable mocap systems more common. Sony mocopi is launching in Japan in late January 2023, and it will be coming in zero-plastic packaging to boot.


Krado Plant Sensor will help you get information about your plant babies

Over the past couple of years, my social media feed has been filled with friends becoming plant mommies and daddies during the pandemic. Of course, I tried greening up my apartment too, but only minimally, since I knew my capability, or rather lack thereof, for taking care of plants. Sure enough, a couple of weeks later, both plants died. Now, if I had had some physical and digital tools to help me out, I probably would have done better.

Designer: Hatch Duo

If something like the Krado Plant Sensor had existed back then, maybe my two poor plants would have had a better chance of survival. It’s the hardware component of the Leaflet Plant Care System, whose main purpose is to help people grow healthy plants. The sensor is something you put in the soil with your plants, and it transmits information to a mobile app so you can adjust how you’re taking care of them.

The plant sensor is able to monitor things like soil moisture, ambient temperature, humidity, and light. These are critical factors that affect the health of your plants, and if you’re like me and pretty clueless about these things, it can give you genuinely helpful information. The connected app will also give you actionable guidance based on these factors, such as buying and shipping fertilizer, potting soil, pesticides, etc.
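The kind of rule the app could use to turn those raw readings into advice is easy to imagine. The thresholds below are invented for illustration, not Krado’s published values.

```python
# Hypothetical mapping from sensor readings to care advice.
# All thresholds are illustrative assumptions.

def plant_advice(moisture_pct, temp_c, light_lux):
    """Map soil moisture, temperature, and light readings to guidance."""
    advice = []
    if moisture_pct < 30:
        advice.append("water the plant")
    if not 15 <= temp_c <= 30:
        advice.append("move to a milder spot")
    if light_lux < 1000:
        advice.append("needs more light")
    return advice or ["all good"]

print(plant_advice(20, 22, 500))   # dry soil in a dim corner
print(plant_advice(45, 22, 5000))  # healthy conditions
```

A real system would tune those thresholds per species, which is presumably where the botanical research data mentioned below comes in.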

The sensor itself looks like a thermometer but with a leaf at the top. Different colored lights indicate specific conditions to alert you (well, if you’ve memorized what the colors stand for). In terms of sustainability, it is 100% 3D printed using the latest additive manufacturing practices. Another added bonus is that all the information collected through the sensor will contribute to botanical research, telling us the best environment for specific plants to grow in.

I don’t know if having this sensor would definitely improve my still non-existent plant-growing skills. But it might actually let my plants live beyond the average two-week life cycle they get with me.


Google announces new radar software that reads and responds to human body language

Google’s Advanced Technology and Products division recently announced a new round of research that aims to refine the radar technology of Soli, sensor-integrated software that responds to human behavior and movement.

Proxemics is the study of human use of space and how changes in population density can affect behavior, communication, and social interaction. More specifically, proxemics inform branches of design that deal with ergonomics and space mediation. On one hand, proxemics aid in the configuration of floor plans to harmonize instinctive human behavior with spatial experiences. In a different light, proxemics further develop technology to respond to our behavior and needs with human-like responses. Google’s Advanced Technology and Products division (ATAP) recently took to proxemics to refine the Soli sensor, a sensor with embedded radar technology that uses electromagnetic waves to pick up on even subtle human body language and movements.

Designer: Google’s Advanced Technology and Products (ATAP)

Used in modern appliances like the Nest Hub smart display and Google Pixel 4, Soli’s radar has contributed to sleep-tracking and contactless, remote control technology. This new round of research spearheaded by Google’s ATAP team finds the sensor data gathered by Soli being used to enable computers to recognize and respond to our daily movements. Leonardo Giusti, head of design for ATAP, says, “We believe as technology becomes more present in our life, it’s fair to start asking technology itself to take a few more cues from us.”

In response, the team at Google hoped to develop Soli to capture the same energy as your mom turning the television off and covering you in a throw after you doze off on the couch. The integrated radar software is designed to detect a user’s proximity to computers and personal smart devices, turning off as we walk away from its screen and turning back on once we’re in front of it again. In addition to proximity sensing, the radar technology recognizes changes in body orientation, which signals to the device whether a user will soon interact with it.
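The presence logic described here boils down to a small state machine driven by two radar-derived signals: how close the person is and whether they are oriented toward the screen. The states and thresholds below are assumptions, not Google’s published Soli implementation.

```python
# Sketch of presence-driven display behavior: wake when a person is close
# and facing the screen, idle when nearby but turned away, sleep otherwise.
# States and the 1.5 m threshold are illustrative assumptions.

def display_state(distance_m, facing_screen, near_threshold=1.5):
    """Decide the display state from radar proximity and body orientation."""
    if distance_m <= near_threshold and facing_screen:
        return "awake"    # close and oriented toward the device
    if distance_m <= near_threshold:
        return "ambient"  # nearby but turned away: show glanceable info
    return "asleep"       # nobody in range

print(display_state(0.8, True))   # walk up and face it
print(display_state(3.0, False))  # walk away
```

The orientation signal is what distinguishes this from a simple proximity sensor: the device can stay quiet even when you are close, as long as you aren’t turned toward it.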

While we may not wake up swaddled in a warm blanket, this new round of research finds computers and smart devices acknowledging and responding to when we are in front of the screen and when we walk away from it or doze off for a bit. Noting the process behind this, Lauren Bedal, senior interaction designer at ATAP, explains, “We were able to move in different ways, we performed different variations of that movement, and then—given this was a real-time system that we were working with—we were able to improvise and kind of build off of our findings in real-time.”


This soap-shaped device is actually a super stethoscope that could save your life

There has been a great deal of interest and even obsession over keeping tabs on one’s health at home, but there are some things that not even an Apple Watch can detect.

Fitness trackers have been around for years, succeeding the single-purpose pedometers that were en vogue among some athletes in the past. Smartwatches, in turn, have started to offer more hardware to measure and monitor a variety of vital signs, including heart rate, blood oxygen levels, or even heart rhythms. The Apple Watch has often been praised for its life-saving features because it alerts the wearer of potential problems they might not have otherwise known.

Designer: SEIKI DESIGN STUDIO

It’s not that people don’t want to go to doctors per se. Many are just dissuaded because of the costs and the uncomfortable procedures involved. If there was a convenient and easier way they could keep track of their health at home, they probably wouldn’t mind more regular check-ins with healthcare professionals. That’s exactly the kind of situation where this “Super Stethoscope” is designed to shine.

Calling it a stethoscope is a bit confusing because it doesn’t look anything like the iconic medical tool. That’s intentional, of course, because of the emotional and psychological barriers that might come with seeing a conventional stethoscope. Additionally, there are only a few things you can do with a stethoscope alone and without training, and most of its other uses require other tools, like a blood pressure apparatus.

In contrast, this Super Stethoscope should be all that you need to read a variety of body signals, specifically those coming from the heart. This device would be able to take ECG and heart sound measurements to detect a variety of cardiac disorders. The science and technology behind this device are currently hidden behind a few patents filed both in Japan and in the US.

The design of the Super Stethoscope is almost a contrast to its name. Shaped like a pebble or a bar of soap, the form is intended to convey feelings of comfort and gentleness from a device meant for clinical use. It could definitely ease some of the hesitation and fear of people who shy away from checkups, though a wearable device is probably still more convenient than something you have to lie down to use.


This reusable face-mask comes with a built-in sensor that tells you when to change the filters

The design community was quick to rise to the challenge of helping the world overcome the Coronavirus, but this came at a cost. Human consumption of plastic tripled in 2020 with the use of surgical face-masks, so designer Ollie Butt decided to combat both the virus and the trail of plastic trash the pandemic left behind. Ollie’s Face Mask (although conceptual) paves the way for an aesthetic, efficient, reusable mask that actively filters incoming air while continuously measuring the quality of the air inside the mask. Equipped with an internal circuit board, a bunch of sensors (including one for humidity), and an outward-facing LED strip, the Reusable Face Mask looks and feels cutting-edge.

The LED strip plays a dual role, adding a futuristic flair to the device while also allowing the mask to tell you when to change your filters. The humidity sensor on the inside can detect when the filters need replacing, and a simple plug-in-plug-out design detail lets you swap out old filters for new ones. The reusable mask comes with a silicone seal around the mouth, allowing it to fit comfortably while creating a tight seal, and around-the-head straps ensure you can wear the mask for long hours without worrying about ear-fatigue.

Designer: Ollie Butt