Teenage Engineering-inspired Music Sampler Uses AI In The Nerdiest Way Possible

The T.M-4 looks like it escaped from Teenage Engineering’s design studio with a specific mission: teach beginners how to make music with AI without making them feel stupid, and without churning out slop. Junho Park’s graduation concept borrows all the right cues from TE’s playbook: the modular control layout, the single bold color, the mix of knobs and buttons that practically beg to be touched. But it redirects them toward a gap in the market. Where Teenage Engineering designs for people who already understand synthesis and sampling, the T.M-4 targets people who have ideas but no vocabulary to express them. The device handles the technical translation automatically, separating audio into layers and letting you manipulate them through physical controls. It feels like someone took the OP-1’s attitude and wired it straight into an AI stem separator.

The homage succeeds because Park absorbed what makes Teenage Engineering products special beyond their appearance. TE hardware feels different because it removes friction between intention and result, making complex technology feel approachable through thoughtful interface design and immediate tactile feedback. The T.M-4 brings that same thinking to AI music generation. You’re manipulating machine learning model parameters when you adjust texture, energy, complexity, and brightness, but the physical controls make it feel like direct manipulation of sound rather than abstract technical adjustment. An SD card system lets you swap AI personalities like you would swap game discs on a console – something very hardware, very tactile, very TE. Instead of drowning in model settings, you collect cards that give the AI different characters, making experimentation feel natural rather than intimidating.

Designer: Junho Park

What makes this cool is how it attacks the exact point where most beginners give up. Think about the first time you tried to remix a track and realized you had no clean drums, no isolated vocals, nothing you could really move around without wrecking the whole thing. Here, you feed audio in through USB-C, a mic, AUX, or MIDI, and the system just splits it into drum, bass, melody, and FX layers for you. No plugins, no routing, no YouTube rabbit hole about spectral editing. Suddenly you are not wrestling with the file, you are deciding what you want the bass to do while the rest of the track keeps breathing.
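
If you want a feel for what that kind of automatic layer separation involves, here is a minimal sketch using the open-source Spleeter library. This is purely an illustration of the general technique, not the T.M-4’s actual engine (the concept doesn’t name one), and the input filename is hypothetical.

```python
# Minimal stem-separation sketch using the open-source Spleeter library.
# Illustrative only: the T.M-4 concept does not specify its separation model.
from spleeter.separator import Separator

# The 4-stem model splits audio into vocals, drums, bass, and "other"
# (roughly the drum/bass/melody/FX layering the T.M-4 describes).
separator = Separator("spleeter:4stems")

# "sampled_track.wav" is a hypothetical input; stems land in ./stems/<track name>/
separator.separate_to_file("sampled_track.wav", "stems")
```

Each stem comes back as its own audio file, which is exactly the kind of raw material the T.M-4’s layer controls would then push around.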

The joystick and grid display combo helps simplify what would otherwise be a fairly daunting piece of gear. Instead of staring at a dense DAW timeline, you get a grid of dots that represent sections and layers, and you move through them like you are playing with a handheld console. That mental reframe matters. It turns editing into navigation, which is far less intimidating than “production.” Tie that to four core parameters (texture, energy, complexity, and brightness) and you get a system that quietly teaches beginners how sound behaves without ever calling it a lesson. You hear the track get busier as you push complexity, you feel the mood shift when you drag energy down, and your brain starts building a map.
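
As a rough idea of how four macro knobs could stand in for a pile of low-level settings, here is a tiny, hypothetical mapping sketch. None of these ranges or parameter names come from the concept; they only show how a single 0-to-1 value per knob might fan out into conventional engine parameters.

```python
# Hypothetical mapping from the T.M-4's four macro knobs (0.0-1.0)
# to conventional low-level parameters. The ranges are invented for illustration.
def macro_to_engine(texture: float, energy: float, complexity: float, brightness: float) -> dict:
    return {
        "filter_cutoff_hz": 200 + brightness * 12_000,  # brighter = more high end
        "active_layers": 1 + round(complexity * 3),     # busier = more of the 4 stems playing
        "master_gain_db": -12 + energy * 12,            # energy pushes overall level/drive
        "sample_rate_reduction": 1.0 - texture * 0.6,   # texture adds lo-fi grit
    }

print(macro_to_engine(texture=0.3, energy=0.8, complexity=0.5, brightness=0.7))
```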

Picture it sitting next to a laptop and a cheap MIDI keyboard, acting as a hardware front end for whatever AI engine lives on the computer. You sample from your phone, your synth, a YouTube rip, whatever, then sculpt the layers on the T.M-4 before dumping them into a DAW. It becomes a sort of AI sketchpad, a place where ideas get roughed out physically before you fine-tune them digitally. That hybrid workflow is where a lot of music tech is quietly drifting anyway, and this concept leans straight into it.

Of course, as a student project, it dodges the questions about latency, model size, and whether this thing would melt without an external GPU. But as a piece of design thinking, it lands. It treats AI as an invisible assistant, not the star of the show, and gives the spotlight back to the interface and the person poking at it. If someone like Teenage Engineering, or honestly any brave mid-tier hardware company, picked up this idea and pushed it into production, you would suddenly have a very different kind of beginner tool on the market. Less “click here to generate a track,” more “here, touch this, hear what happens, keep going.”

The post Teenage Engineering-inspired Music Sampler Uses AI In The Nerdiest Way Possible first appeared on Yanko Design.

These 5 AI Modules Listen When You Hum, Tap, or Strum, Not Type

AI music tools usually start on a laptop where you type a prompt and wait for a track. That workflow feels distant from how bands write songs, trading groove and chemistry for text boxes and genre presets. MUSE asks what AI music looks like if it starts from playing instead of typing, treating the machine as a bandmate that listens and responds rather than a generator you feed instructions.

MUSE is a next-generation AI music module system designed for band musicians. It is not one box but a family of modules (vocal, drum, bass, synthesizer, and electric guitar), each tuned to a specific role. You feed each one ideas the way you would feed a bandmate, and the AI responds in real time, filling out parts and suggesting directions that match what you just played.

Designers: Hyeyoung Shin, Dayoung Chang

A band rehearsal where each member has their own module means the drummer taps patterns into the drum unit, the bassist works with the bass module to explore grooves, and the singer hums into the vocal module to spin melodies out of half-formed ideas. Instead of staring at a screen, everyone is still moving and reacting, but there is an extra layer of AI quietly proposing fills, variations, and harmonies.

MUSE is built around the idea that timing, touch, and phrasing carry information that text prompts miss. Tapping rhythms, humming lines, or strumming chords lets the system pick up on groove and style, not just genre labels. Those nuances feed the AI’s creative process, so what comes back feels more like an extension of your playing than a generic backing track cobbled together from preset patterns.
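
For a sense of what “listening to groove rather than genre labels” can mean in practice, here is a short sketch using the open-source librosa library to pull tempo and onset timing out of a tapped rhythm. It illustrates the general approach, not MUSE’s actual pipeline, and the filename is a stand-in.

```python
# Sketch: extracting groove information (tempo, onset timing) from a tapped rhythm.
# Illustrative only; MUSE's actual analysis pipeline is not public.
import librosa

# Hypothetical recording of someone tapping a rhythm on the drum module.
y, sr = librosa.load("tapped_rhythm.wav", sr=None)

# Estimate tempo and beat positions, then the exact onset time of each tap.
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

print(f"Estimated tempo: {float(tempo):.1f} BPM")
print("Tap onsets (seconds):", [round(t, 3) for t in onsets])
```

Onset spacing and micro-timing are the kind of signal that a text prompt simply cannot carry, which is the whole argument behind playing instead of typing.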

The modules can be scattered around a home rather than living in a studio. One unit near the bed for late-night vocal ideas, another by the desk for quick guitar riffs between emails, a drum module on the coffee table for couch jams. Because they look like small colorful objects rather than studio gear, they can stay out, ready to catch ideas without turning the house into a control room.

Each module’s color and texture match its role: a plush vocal unit, punchy drum block, bright synth puck, making them easy to grab and easy to live with. They read more like playful home objects than intimidating equipment, which lowers the barrier to experimenting. Picking one up becomes a small ritual, a way to nudge yourself into making sound instead of scrolling or staring at blank sessions.

MUSE began with the question of how creators can embrace AI without losing their identity. The answer it proposes is to keep the musician’s body and timing at the center, letting AI listen and respond rather than dictate. It treats AI as a bandmate that learns your groove over time, not a replacement, and that shift might be what keeps humans in the loop as the tools get smarter.

The post These 5 AI Modules Listen When You Hum, Tap, or Strum, Not Type first appeared on Yanko Design.

Cambridge Just Designed the Voice Device Every Stroke Survivor Wanted

There’s something almost poetic about a piece of technology that looks like a fashion accessory but can fundamentally change someone’s life. That’s exactly what researchers at the University of Cambridge have created with Revoice, a soft, flexible choker that helps stroke survivors speak again.

Around 200,000 people in the U.S. experience speech difficulties after a stroke each year. Many lose the ability to form words clearly, a condition called dysarthria, or struggle to express complete thoughts. For years, the options have been limited to speech therapy, typing on communication boards, or experimental brain implants that require surgery. Revoice offers something different: a wearable device you can put on like jewelry and throw in the wash when you’re done.

Designer: scientists from the University of Cambridge

What makes this device fascinating is how it works. The choker sits comfortably against your throat and does two things at once. First, it picks up the tiniest vibrations from your throat muscles when you mouth words, even if no sound comes out. Second, it tracks your heart rate, which gives clues about your emotional state, whether you’re frustrated, anxious, or calm.

These signals get sent to two AI systems working together. The first AI agent focuses on reconstructing what you’re trying to say based on those throat vibrations. It’s essentially reading the intention behind silent or partial speech. The second agent takes things further by expanding short phrases into full, natural sentences. So if you manage to mouth “need help,” the system might generate “I need help with something, can you come here?” complete with the right emotional tone based on your heart rate data.
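
To make that two-agent handoff concrete, here is a deliberately simplified sketch of the flow described above: one stage decodes intended words from throat signals, a second expands them into a full sentence whose tone is nudged by heart rate. Every function, threshold, and stubbed output here is hypothetical; the team’s actual models live in their paper, not in this toy.

```python
# Simplified, hypothetical sketch of Revoice's two-stage flow described above.
# The real system uses trained models; these stubs only show the data handoff.

def decode_intended_words(throat_vibration_signal: list[float]) -> str:
    """Stage 1 (stand-in): reconstruct intended words from throat-muscle vibrations."""
    # A trained sequence model would go here; we hard-code the article's example.
    return "need help"

def infer_emotional_state(heart_rate_bpm: float) -> str:
    """Map heart rate to a coarse emotional label (threshold is invented)."""
    return "urgent" if heart_rate_bpm > 100 else "calm"

def expand_to_sentence(fragment: str, emotion: str) -> str:
    """Stage 2 (stand-in): expand a fragment into a full sentence with matching tone."""
    if emotion == "urgent":
        return f"I {fragment} right now, please come here quickly."
    return f"I {fragment} with something, can you come here when you get a chance?"

signal, heart_rate = [0.01, 0.03, -0.02], 112.0   # placeholder sensor readings
fragment = decode_intended_words(signal)
sentence = expand_to_sentence(fragment, infer_emotional_state(heart_rate))
print(sentence)   # -> "I need help right now, please come here quickly."
```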

Think about what this means. Instead of laboriously spelling out every word on a screen or pointing at pictures on a board, you can have fluid conversations again. Your family hears full sentences. You can express nuance and emotion, not just basic needs. The device aims to give people back something invaluable: their natural communication style. The technology builds on recent advances in AI and sensor miniaturization. These aren’t the bulky medical devices of the past. The choker is designed to be discreet and comfortable enough to wear all day. It’s washable, which means it fits into normal life without requiring special care or maintenance. You’re not announcing to everyone that you’re using assistive technology unless you want to.

What’s particularly clever is how the system learns. Current speech assistance tools often require extensive training periods where users must adapt to the technology’s limitations. Revoice flips this approach by using AI that can understand variations in how people try to speak. It works with what you can do rather than forcing you to work around what it can’t. The emotional intelligence aspect shouldn’t be overlooked either. When the device detects an elevated heart rate, it can adjust the tone of generated speech to reflect urgency or stress. This might seem like a small detail, but emotional expression is fundamental to human communication. Being able to convey that you’re upset or excited transforms a conversation from transactional to genuinely human.

Right now, Revoice is still in development and will need more extensive clinical trials before it reaches the market. The research team published their findings in the journal Nature Communications. They’re also planning to expand the system to support multiple languages and a wider range of emotional expressions, which would make it accessible to diverse populations worldwide. For the design and tech communities, Revoice represents a perfect intersection of form, function, and empathy. It’s a reminder that the best innovations don’t just solve problems technically, they solve them in ways that respect dignity and daily life. No surgery, no stigma, just a well-designed tool that helps people communicate.

The post Cambridge Just Designed the Voice Device Every Stroke Survivor Wanted first appeared on Yanko Design.

Apple’s Secret AI Pin Looks Like an AirTag and it Might Just Kill The Smartwatch

Apple’s wearable future might not be strapped to your wrist at all. According to new reports, the company is developing an AI-powered pin about the size of an AirTag, complete with dual cameras, microphones, and a speaker. The device would clip onto clothing or bags, marking a deliberate shift away from the smartwatch form factor that has dominated wearable tech for the past decade.

If the rumors prove accurate, this circular aluminum-and-glass device could launch as early as 2027, running Apple’s upcoming Siri chatbot and leveraging Google’s Gemini AI models. The company appears to be betting that consumers want ambient AI assistance without constantly pulling out their phones or glancing at their watches. Whether this gamble pays off remains to be seen, especially given the struggles of similar devices like Humane’s now-defunct AI Pin.

Designer: Apple

The hardware specs sound modest on paper but reveal something about Apple’s thinking. Two cameras sit on the front: one standard lens, one wide-angle. Three microphones line the edge for spatial audio pickup. A speaker handles output, a physical button provides tactile control, and magnetic inductive charging sits on the back, identical to the Apple Watch system. The whole thing supposedly stays thinner than you’d expect from something packing this much capability. What strikes me most is the screenless design, which tells you Apple learned something from watching Humane crash and burn trying to replace phones with projectors and awkward gesture controls.

Because here’s the thing about AI wearables so far: they’ve all suffered from an identity crisis. The Humane AI Pin wanted to be your phone replacement but couldn’t handle basic tasks without overheating or dying within hours. Motorola showed off something similar at CES 2026, demonstrating a level of agentic control that was still in beta but impressive nevertheless. Apple seems to be taking notes from both the failure of the former and the potential of the latter. A screenless pin that relies entirely on voice, environmental awareness, and audio feedback has clear limitations, and paradoxically, that restraint might be its greatest strength.

Motorola’s AI Pendant at CES 2026

The timing lines up with Apple’s Siri overhaul coming in iOS 27. They’re rebuilding the assistant from scratch as a proper conversational AI, and they’ve partnered with Google to tap into Gemini models for the heavy lifting. Smart move, actually. Apple’s in-house AI efforts have been mediocre at best, and licensing Google’s tech lets them skip years of expensive catch-up work. This pin becomes the physical embodiment of that strategy: a purpose-built device for ambient AI that doesn’t pretend to be anything else. You clip it on, it listens and watches, you talk to it, it responds. Simple interaction model.

But I keep circling back to the same question: who actually wants this? Your iPhone already has cameras, microphones, and Siri access. Your Apple Watch gives you wrist-based notifications and quick voice commands. AirPods put computational audio directly in your ears. Apple’s ecosystem already covers every conceivable wearable surface area. Adding a clip-on camera pin feels like solving a problem nobody has, or worse, creating a new product category just because the technology allows it. The 38.5-gram weight of competing devices like Rokid’s AI glasses shows manufacturers obsess over comfort, but comfort alone doesn’t justify purchase.

The 2027 timeline is far enough out that Apple can quietly kill this project without anyone noticing, exactly like they did with the Apple Car. They’ve got a pattern of floating ambitious ideas internally, letting engineers explore possibilities, then axing things that don’t meet their standards or market conditions. Sometimes that discipline saves them from embarrassing product launches. Sometimes it means we never get to see genuinely interesting experiments. This AI pin could go either way, and frankly, Apple probably hasn’t decided yet either. They’re watching how the market responds to early AI wearables, gauging whether spatial computing takes off with Vision Pro, and waiting to see if their Siri rebuild with Google’s Gemini actually works before committing manufacturing resources.

The post Apple’s Secret AI Pin Looks Like an AirTag and it Might Just Kill The Smartwatch first appeared on Yanko Design.

5 AI Devices That Just Made Smartphones Look Obsolete in 2026

The year 2026 marks a historic pivot in personal technology. We are moving past the era of the “AI chatbot” trapped inside a website and entering the age of ambient hardware. While 2025 was defined by software experimentation, 2026 is the year when specialized AI silicon, smart glasses, and wearable pins have matured into indispensable daily companions.

These next-gen devices aren’t just faster smartphones; they represent a fundamental shift in how we interact with the digital world. By integrating intelligence directly into our physical presence, the “AI in your pocket” has evolved from a reactive tool into a proactive partner that anticipates our needs before we even voice them.

1. The Post-Smartphone Device

The traditional glass rectangle is no longer the sole gateway to the internet. In 2026, we are seeing the rise of screenless interfaces and augmented reality glasses that prioritize voice and gesture over scrolling. Devices like AI-powered rings and lightweight smart glasses have moved from niche gadgets to mainstream essentials, offering a “heads-up” lifestyle that keeps users engaged with the real world.

A desire for frictionless interaction drives this hardware shift. Instead of pulling out a phone to navigate or translate, users simply look at a sign or speak to their lapel pin. These devices are designed to disappear into our daily attire, making technology an invisible but powerful layer of our human experience rather than a constant distraction.

The Acer FreeSense Ring represents a refined advancement in wearable technology, offering continuous health monitoring in a compact, stylish form. Crafted from lightweight titanium alloy, the ring is slim, measuring 2.6mm in thickness and 8mm in width, and weighs only 2 to 3 grams. Its design balances elegance and practicality, available in finishes such as rose gold and glossy black, and water-resistant up to 5 ATM. With seven size options, it ensures a comfortable fit for a wide range of users. The ring is intended to complement traditional watches, providing wellness tracking without overwhelming the wearer with bulk or complexity.

Equipped with advanced biometric sensors, the FreeSense Ring tracks heart rate, heart rate variability, blood oxygen saturation, and sleep quality. Data is processed through a dedicated mobile application, which transforms readings into actionable, AI-driven wellness insights and personalized recommendations. Its detailed sleep analysis and continuous monitoring enable users to manage health proactively. By integrating sophisticated design with advanced biometric intelligence, the FreeSense Ring delivers an elegant and practical solution for modern wellness management.

2. On-Device Intelligence Systems

One of the biggest breakthroughs in 2026 is the move away from the cloud, made possible by massive leaps in Neural Processing Units (NPUs). As a result, your device no longer requires a constant internet connection to “think.” Complex reasoning and language processing now happen directly on the hardware in your pocket, resulting in near-zero latency.

This shift to “Edge AI” means your personal assistant is faster and more reliable than ever. Whether you are in a remote hiking spot or a crowded subway, your device can translate languages and organize your schedule offline. By keeping the “brain” of the AI on the device, manufacturers have finally solved the lag issues that plagued early generations of AI hardware.

The CL1 by Cortical Labs is the world’s first commercially available biological computer, integrating living human neurons with silicon hardware in a compact, self-contained system. Rather than relying on conventional software models, the CL1 uses lab-grown neurons cultured on an electrode array, allowing them to form, modify, and strengthen connections in real time. This enables the device to process information biologically, learning dynamically through interaction instead of pre-trained algorithms or large datasets.

At the core of the CL1 is Synthetic Biological Intelligence (SBI), a hybrid computing approach that combines biological adaptability with machine precision. The neurons respond to electrical stimulation by reorganizing their connections, closely mirroring natural learning processes in the human brain. This results in exceptional energy efficiency and high responsiveness compared to traditional AI systems. Designed as a research-grade platform, the CL1 offers scientists a new way to study neural behavior, test compounds, and explore adaptive intelligence, positioning it as a foundational product in the emerging field of biological computing.

3. Rethinking App-Centric UX

We are witnessing the slow death of the traditional app icon grid. In 2026, next-gen devices utilize Agentic AI, which allows your pocket companion to navigate services on your behalf. Instead of you opening a travel app, a hotel app, and a calendar app to book a trip, you give one command. Your AI agent handles the cross-platform logistics autonomously.

This transition from “apps” to “actions” has redefined the user interface. Our devices have become executive assistants that understand our preferences across every service we use. The friction of toggling between dozens of different interfaces is being replaced by a single, unified conversation that gets things done, effectively turning the operating system into a proactive worker rather than a static menu.
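
As a toy illustration of the “actions, not apps” idea, the sketch below shows an agent fanning one request out into several service calls. The services and the keyword-matching planner are invented for illustration; a real agent would use an LLM to plan and authenticated APIs to execute.

```python
# Toy sketch of "apps to actions": one request fans out into several service calls.
# The services and the keyword-based planner are invented for illustration.

def search_flights(destination: str) -> str:
    return f"Booked flight to {destination}"

def book_hotel(destination: str) -> str:
    return f"Reserved hotel in {destination}"

def add_calendar_event(title: str) -> str:
    return f"Calendar updated: {title}"

def run_agent(request: str) -> list[str]:
    """Crude stand-in for an LLM planner: match intent, then chain the actions."""
    results = []
    if "trip" in request or "travel" in request:
        destination = request.rsplit(" ", 1)[-1]   # naive destination extraction
        results.append(search_flights(destination))
        results.append(book_hotel(destination))
        results.append(add_calendar_event(f"Trip to {destination}"))
    return results

for step in run_agent("plan a weekend trip to Lisbon"):
    print(step)
```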

The TB1’s defining feature is its AI-powered LightGPM 2.0 system, developed using principles of color psychology and professional lighting design. The system is capable of generating refined lighting scenes from billions of possible combinations, delivering precise, task-appropriate illumination without requiring manual configuration. Through simple voice commands such as “Hey Lepro,” users can activate lighting modes tailored for activities such as gaming or social gatherings. The AI interprets intent in real time and produces a balanced, professional-grade ambience with minimal user intervention.

The product also incorporates a built-in microphone and LightBeats technology, enabling lighting to synchronize dynamically with music, while segmented control allows detailed customization across different sections of the lamp. By combining intelligent scene generation, hands-free interaction, and a distinctive sculptural form, the TB1 positions itself as a forward-looking lighting solution. It enhances modern living environments through responsive, adaptive illumination that prioritizes ease of use and functional design.

4. Sensory-Driven Artificial Intelligence

Next-gen devices in 2026 are no longer blind to their surroundings. Equipped with high-fidelity microphones and low-power cameras, these pocket companions possess contextual awareness. They can “see” the ingredients on your kitchen counter to suggest a recipe or “hear” the tone of a meeting to provide real-time talking points or summaries that capture subtle emotional cues.

This sensory integration allows the AI to offer help that is actually relevant to your current environment. It isn’t just processing text; it is understanding your physical reality. By merging visual, auditory, and biometric data, your 2026 device acts as a second set of eyes and ears, providing a level of personalized support that was previously confined to science fiction.

The Humane AI Pin was introduced as a bold vision of screenless, context-aware computing, promising an AI-powered future worn discreetly on the body. For many early adopters, however, the device quickly lost functionality after the discontinuation of its cloud services, rendering its advanced features inoperative. What remained was a piece of thoughtfully engineered hardware—complete with a miniature projector, sensors, microphones, and cameras—stranded without a viable software ecosystem. As a result, the Pin became a notable example of how tightly coupled hardware and proprietary services can limit a product’s long-term relevance.

This narrative has begun to shift with the emergence of PenumbraOS, an experimental software platform developed through extensive reverse engineering. By reimagining the AI Pin as a specialized Android-based device, PenumbraOS unlocks privileged system access and introduces a modular assistant framework to replace the original interface. This effort reframes the Humane AI Pin not as a failed product, but as a capable development platform with renewed potential. Through open-source collaboration, the device now serves as a case study in how community-led innovation can extend the life and value of forward-thinking hardware.

5. Data in Your Pocket

As AI becomes more personal, the demand for “Data Sovereignty” has reached a fever pitch. 2026 hardware solves the “creepy” factor through hardware-level privacy vaults. Because the majority of AI processing now happens locally, your most sensitive conversations, health data, and private photos never have to leave the physical device to be processed in a distant corporate data center.

This “Privacy by Design” approach has built a new level of trust between users and their machines. With encrypted local storage and physical kill switches for sensors, next-gen devices ensure that your digital twin remains yours alone. In a world where data is the most valuable currency, the 2026 device serves as a secure fortress that protects your personal identity while amplifying your capabilities.

The Light Phone III is a purpose-built device designed around simplicity, privacy, and intentional use. It features a 3.92-inch black-and-white OLED display that replaces the earlier e-ink screen, offering sharper visuals, faster response, and improved legibility across lighting conditions. The interface is minimal and distraction-free, supporting essential functions such as calls, messages, navigation, music, podcasts, and notes. Powered by a Qualcomm SM4450 processor with 6GB of RAM and 128GB of storage, the device delivers smooth performance while remaining firmly limited to core tasks.

The product introduces a single, straightforward camera with a fixed focal length and a physical two-stage shutter button, emphasizing documentation over content creation. Its compact, solid form factor includes a user-replaceable battery, fingerprint sensor integrated into the power button, stereo speakers, USB-C charging, NFC, and GPS that prioritizes user privacy. Every design decision reflects a restrained, ethical approach to personal technology, positioning the Light Phone III as a secure, focused alternative to conventional smartphones.

The “AI in your pocket” is no longer a futuristic promise but the standard for 2026. By moving intelligence to the edge, embracing agentic workflows, and prioritizing local privacy, next-gen devices have successfully bridged the gap between human intent and digital execution. We are no longer merely using technology; we are living alongside it.

The post 5 AI Devices That Just Made Smartphones Look Obsolete in 2026 first appeared on Yanko Design.

UGREEN built an AI Recorder into its 10,000mAh Power Bank and I don’t know if that’s genius or crazy…

At CES 2026, where every tech company seemed legally obligated to add AI to something, Ugreen announced a power bank with voice recording. The MagFlow AI Voice Recording Magnetic Power Bank packs 10,000 mAh, wireless charging, and AI-powered note-taking into one device. It’s either brilliantly practical or completely unnecessary, depending on how often you find yourself needing both a dead phone and a voice memo at the exact same moment.

The real question is what market Ugreen’s actually targeting. Dedicated AI recorders like Plaud and Limitless offer superior transcription and integration with productivity tools. Meanwhile, power bank buyers are mostly obsessed with capacity, charging speed, and MagSafe compatibility. Ugreen’s product sits awkwardly between these worlds, simultaneously targeting both the serious note-taker and the charging purist. Maybe that’s the genius: creating a category where none existed, or maybe it’s just feature creep with good intentions.

Designer: Ugreen

You’ve got 10,000 mAh, which is respectable but standard for MagSafe-compatible power banks in 2026. Wireless charging is included, though the company hasn’t confirmed whether there’s a USB-C port for wired fast charging. A digital display shows battery level and presumably real-time charging stats. Then there’s the voice recording hardware with built-in AI for translation and summarization, which sounds impressive until you realize Ugreen hasn’t explained how you’ll actually access these recordings. Is there an app? Does it sync to your phone? Do you have to plug it into a computer and dig through files like it’s 2015?

Compare this to something like the Plaud NotePin, which costs around $169 and is purpose-built for recording. It connects seamlessly to your phone, transcribes in real time, integrates with LLMs like ChatGPT for summaries, and weighs practically nothing. Or look at the power bank side of things. Ugreen’s own Qi2 25W MagFlow Power Bank retails for $89.99 (currently $69.99 on Amazon) and does one thing exceptionally well: charges your devices fast. This new AI version will almost certainly cost more, probably around $120 to $150 if I had to guess, which puts it in direct competition with premium power banks that offer higher capacity or faster charging speeds. Not to mention most AI services do come with the looming threat of a subscription fee at some point. Imagine subscribing to a power bank…

Jokes aside, the bundling makes sense if you’re the kind of person who carries too much stuff and wants to consolidate. A journalist running between interviews could theoretically use this to charge their phone while recording background audio for articles. Students might appreciate having one device that keeps their laptop alive during lectures while capturing notes they can summarize later. But these use cases feel niche, and niche products need exceptional execution to justify their existence. Ugreen hasn’t shown us that yet. The company has a solid track record with GaN charging technology and their NASync NAS series crushed it on Kickstarter with $6.6 million raised. They know how to build hardware. Whether they can build software that makes voice recording feel natural on a battery pack is the real test.

The post UGREEN built an AI Recorder into its 10,000mAh Power Bank and I don’t know if that’s genius or crazy… first appeared on Yanko Design.

DuRoBo Krono Brings an AI-Powered Pocket ePaper Focus Hub to the US

Trying to read or think on a phone never quite works. Notifications interrupt articles halfway through, feeds wait one swipe away from whatever you were concentrating on, and even long reads become just another tab competing for attention. E‑readers tried to solve this, but most stopped at books and stayed locked into one ecosystem. DuRoBo, a Dutch e‑paper specialist, is bringing Krono to CES 2026 in Las Vegas with a different ambition, treating focus, reflection, and idea capture as equally important.

Krono is a pocket‑sized smart ePaper focus hub that has made waves in Europe and is now entering the US market. It wraps a 6.13‑inch E Ink Carta 1200 display with 300 PPI clarity into a minimalist, mechanical‑inspired body that measures 154 × 80 × 9 mm and weighs about 173 g. It is for capturing and shaping thoughts with on‑device AI, ambient audio, and a Smart Dial that feels more tactile than tapping glass.

Designer: DuRoBo

The paper‑like screen, anti‑glare etching, and dual‑tone frontlight make it comfortable for long reads, whether books, saved articles, or PDFs. The compact body feels closer to a large phone than a tablet, which encourages carrying it everywhere as a dedicated space for slower content. The display mimics paper well enough that you can read for hours without the eye strain from backlit screens.

The Smart Dial and Axis bar are the main interaction story. The dial lets you flip pages, adjust brightness or volume, and, with a long‑press, open Spark, Krono’s idea vault. The Axis along the top rear houses eight breathing lights that glow subtly while you read or work, reinforcing the sense of a calm, separate device. The dial and lights give Krono a more analog feel, turning navigation and focus into something you do with your hand.

Spark is where AI enters. Press and hold the dial to dictate a thought, meeting note, or passing idea, and Krono records it, transcribes it with speech‑to‑text, and runs an AI summary that turns it into a structured note. Text Mode lets you refine that note on the e‑paper screen. The whole process happens on‑device, keeping ideas private and the interface calm.
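
That capture-transcribe-summarize chain maps neatly onto off-the-shelf tools. Below is a rough sketch using OpenAI’s open-source Whisper model for the speech-to-text step and a trivial stand-in for the summarizer; Krono’s own on-device models aren’t public, so treat every name and file here as an assumption.

```python
# Rough sketch of a Spark-style capture pipeline: transcribe a voice memo,
# then condense it into a structured note. Whisper handles speech-to-text;
# the "summarizer" is a trivial stand-in for Krono's on-device model.
import whisper

def summarize(text: str, max_points: int = 3) -> list[str]:
    """Placeholder summarizer: keep the first few sentences as bullet points."""
    sentences = [s.strip() for s in text.replace("?", ".").split(".") if s.strip()]
    return sentences[:max_points]

model = whisper.load_model("base")              # small, on-device-friendly model
result = model.transcribe("voice_memo.wav")     # hypothetical dictated note

note = {
    "transcript": result["text"],
    "summary": summarize(result["text"]),
}
print(note["summary"])
```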

Libby AI is the on‑device assistant that answers prompts and helps with outlines or clarifications without dragging you into a browser. Krono runs Android 15 with full Google Play Store access, powered by an octa‑core processor, 6 GB of RAM, and 128 GB of storage, so it can run Kindle, Notion, or other tools. DuRoBo’s own interface keeps the experience geometric and minimal.

The built‑in speaker and Bluetooth audio are part of the focus story. You can listen to music, podcasts, or audiobooks while reading or writing, turning Krono into a self‑contained environment for commutes or late‑night sessions. The 3,950 mAh battery and tuned refresh algorithms support long stretches of use, not constant app‑hopping, which is what you want from a device that is supposed to be a reprieve from the usual screen.

Krono’s CES 2026 appearance is more than just another e‑reader launch. It is DuRoBo’s attempt to give US readers and thinkers a pocketable device that treats focus, reflection, and idea capture as first‑class design problems. The specs matter, but the real promise is a small, quiet object that can sit between a book and a phone, borrowing the best of both without inheriting their worst habits.

The post DuRoBo Krono Brings an AI-Powered Pocket ePaper Focus Hub to the US first appeared on Yanko Design.

Govee’s Gaming Pixel Light now lets you generate 8-bit animated GIFs using AI Prompts: Hands-on at CES 2026

We have quickly grown accustomed to asking AI to write our emails or create stunning headshots for our LinkedIn. This incredible interaction has lived almost exclusively on our computer and phone screens, a fascinating but ultimately contained experience. The real question has always been when this creative AI would break free from the flat display and start interacting with our physical environment. That moment appears to be arriving now, and it is starting with, of all things, a desk lamp that can generate its own art.

Govee’s implementation of its AI Lighting Bot 2.0 in products like the Gaming Pixel Light is a clever and surprisingly practical application of generative AI. It transforms a simple smart light into an intelligent art creator that anyone can use. The ability to generate custom GIF animations just by typing what you want to see is a game-changer for ambient lighting and personalization. This technology moves far beyond simple color cycling or pre-programmed scenes, offering a clear glimpse into a future where our smart devices are not just responsive, but genuinely creative partners.

Designer: Govee

And let’s be honest, the idea initially sounds a bit like a solution searching for a problem. But the hardware itself makes a compelling case. The Gaming Pixel Light is a dedicated 52 by 32 pixel canvas, which is a perfect, low-stakes resolution for the kind of quirky, lo-fi art that generative models excel at creating. It is not trying to render a photorealistic scene; it is built for the exact brand of retro, 8-bit nostalgia that defines so many gaming setups. The fact that it can run these animations at a smooth 30 frames per second means your text prompts result in genuinely dynamic visuals, not some clunky, stuttering slideshow. Govee’s dual-plane pixel engine even allows for layered designs, so the AI has a surprisingly deep toolkit to play with.

We saw a demo of a campfire GIF on the Gaming Pixel Light, and it really did look like something out of a Game Boy Color title, an 8-bit game come to life. We even tested the feature on Govee’s curtain lights, although the Gaming Pixel Light’s compact form factor (and its gaming-focused audience) makes it the perfect canvas for this feature. All you do is enter a prompt, and Govee’s AI Lighting Bot 2.0 not only creates the image, it renders an animation and applies it to the lights seamlessly. Everything happens through an app, and there are content filters in place so that you don’t generate images that are offensive or inappropriate. Govee hasn’t capped the number of generations per month, but they did mention that future versions will allow iterative tweaking of the GIFs. For now, it’s strictly WYSIWYG: a generated image can’t be ‘edited’ after the fact. Govee’s tip is to be as detailed as possible in your prompts.
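
The interesting constraint in all of this is the 52 × 32 canvas: whatever the model generates has to survive being crushed down to 1,664 pixels per frame. As a rough illustration of that last step (not Govee’s actual pipeline), here is a Pillow sketch that resamples an animated GIF into 52 × 32, palette-reduced frames at roughly 30 fps.

```python
# Illustration of the final device-side step: squeeze a generated GIF down to the
# Gaming Pixel Light's 52x32 grid at ~30 fps. Not Govee's pipeline; "campfire.gif"
# stands in for whatever the AI Lighting Bot returns.
from PIL import Image, ImageSequence

src = Image.open("campfire.gif")
frames = [
    frame.convert("RGB")
         .resize((52, 32), Image.NEAREST)   # nearest-neighbour keeps the 8-bit look
         .quantize(colors=16)               # small palette, like classic pixel art
    for frame in ImageSequence.Iterator(src)
]

# ~33 ms per frame is roughly the 30 fps the panel runs at.
frames[0].save("campfire_52x32.gif", save_all=True,
               append_images=frames[1:], duration=33, loop=0)
```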

What makes this system particularly interesting is how Govee has tailored the AI interaction to the hardware. For graphic displays like this pixel light, it is a “single-turn” interaction: you type a prompt, you get a GIF. It is direct, fast, and avoids the conversational baggage that would feel tedious for a purely visual output. This is a smart distinction from how the AI works with their linear strip lights, which allows for more complex, multi-turn conversations about mood and color. It shows a level of thoughtful design that recognizes different products demand different interfaces. This is the kind of ambient computing that actually feels useful, turning a passive decorative object into an active, personalized art station that constantly evolves with your imagination.

The post Govee’s Gaming Pixel Light now lets you generate 8-bit animated GIFs using AI Prompts: Hands-on at CES 2026 first appeared on Yanko Design.

Razer’s Project AVA Brings Holographic AI Companions to Your Desk

Remember watching sci-fi movies as a kid and dreaming about the day you’d have your own holographic assistant? Well, that future just arrived, and it’s cuter than we ever imagined. Razer unveiled Project AVA at CES 2026, and honestly, it’s giving us all the futuristic vibes we didn’t know we needed.

Picture this: a sleek cylindrical device sitting on your desk, projecting a 5.5-inch animated 3D hologram that actually talks to you, learns your habits, and becomes your daily companion. It sounds like something straight out of a Black Mirror episode, but in the best possible way.

Designer: Razer

What makes Project AVA so fascinating isn’t just the holographic technology itself (though let’s be real, that’s pretty spectacular). It’s how Razer has reimagined what AI companionship could look like in our physical spaces. Unlike Siri hiding in your phone or Alexa trapped in a speaker, AVA exists as a visible presence on your desk. She has facial expressions, tracks eye movement, and her lips actually sync when she talks. It’s the kind of detail that transforms a gadget into something that feels surprisingly alive.

The personality customization is where things get really interesting. You can choose from different avatars, each with their own distinct personality. There’s Kira, an anime-style character perfect for gaming enthusiasts. There’s Zane for those wanting a more professional vibe. And then, in what might be the most genius collaboration ever, there’s an avatar modeled after League of Legends legend Lee “Faker” Sang-hyeok, plus characters from Sword Art Online. Razer clearly understands its audience, and they’re leaning hard into gaming and anime culture in the best way possible.

But here’s what really sets AVA apart: she’s powered by xAI’s Grok engine, which gives her some seriously sophisticated AI capabilities. This isn’t just a voice assistant that sets timers and plays music. AVA learns from your interactions and evolves her personality based on how you communicate with her. She can help organize your schedule, brainstorm creative projects, analyze data, and even provide real-time gaming coaching by actually watching your screen and offering strategic advice.

The gaming features deserve special attention because they’re genuinely innovative. Through what Razer calls “PC Vision Mode,” AVA can analyze your gameplay in real-time and offer coaching tips. Before you worry, Razer has been clear that AVA is designed as a coach and trainer, not an automated playing tool, so she won’t get you banned from competitive games. She’s more like having a knowledgeable friend watching over your shoulder, offering helpful suggestions.

From a design perspective, the cylindrical unit houses impressive tech: dual far-field microphones, an HD camera with ambient light sensors, and of course, Razer’s signature Chroma RGB lighting because aesthetics matter. The device connects to your Windows PC via USB-C, ensuring the high-bandwidth data transfer needed for those real-time features to work smoothly.

What’s particularly clever about Project AVA is how it addresses something we’ve all experienced with traditional AI assistants: the disconnect. When you’re talking to a voice in a speaker, it feels transactional. But when there’s a holographic character making eye contact and responding with facial expressions, the interaction becomes more engaging and, dare I say, more human.

Razer is calling AVA a “Friend for Life,” which might sound like marketing hyperbole, but it hints at something bigger happening in tech culture. We’re moving beyond thinking about AI as tools and starting to explore how they might serve as companions in our daily lives. It’s a fascinating cultural shift that raises interesting questions about how we’ll interact with technology in the coming years.

For anyone interested in being part of this next wave of AI innovation, reservations are open now for a $20 deposit, with the device expected to launch in late 2026. Whether you’re a tech enthusiast, a collector of innovative gadgets, or just someone who’s always wanted their own holographic companion, Project AVA represents something genuinely new in the consumer tech space.

The post Razer’s Project AVA Brings Holographic AI Companions to Your Desk first appeared on Yanko Design.

Artly Robots Master Latte Art and Drinks for CES 2026 Debut

People gather around a robot arm in a café, half for the drink and half for the performance. Most automation in food and beverage still feels either like a vending machine or a novelty, and the real challenge is capturing the craft of a skilled barista or maker, not just the motion of pushing buttons. The difference between a decent latte and a great one often comes down to subtle pressure, timing, and feel.

Artly treats robots less like appliances and more like students in a trade school, learning from human experts through motion capture, multi-camera video, and explanation. At CES 2026, that philosophy shows up in two compact robots, the mini BaristaBot and the Bartender, both built on the same AI arm platform but trained for different kinds of counters. Together, they make a case for automation that respects the shape of the work instead of flattening it.

Designer: Artly AI

mini BaristaBot: A 4×4 ft Café That Learns from Champions

The mini BaristaBot is a fully autonomous café squeezed into a 4 × 4 ft footprint, designed for high-traffic, labor-constrained spaces like airports, offices, and retail corners. One articulated arm handles the entire barista workflow, from grinding and tamping to brewing, steaming, and pouring, with the same attention to detail you would expect from a human who has spent years behind a machine. “At first, I thought making coffee was easy, but after talking to professional baristas, we realized it is not simple at all. There are a lot of details and nuances that go into making a good cup of coffee,” says Meng Wang, CEO of Artly.

The arm is trained on demonstrations from real baristas, including a U.S. Barista Champion, with Artly’s Skill Engine breaking down moves into reusable blocks like grabbing, pouring, and shaping. Those blocks are recombined into recipes, so the robot can reproduce nuanced techniques such as milk texturing and latte art, and adapt to different menus without rewriting code from scratch or relying on rigid workflows. “Our goal is not to automate for its own sake. Our goal is to recreate an authentic, specific experience, whether it is specialty coffee or any other craft, and to build robots that can work like those experts,” Wang explains.
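
The Skill Engine is only described at a high level, but the “reusable blocks recombined into recipes” idea is easy to picture as a data structure. The sketch below is a hypothetical illustration of that composition pattern, not Artly’s implementation; every block name and parameter is invented.

```python
# Hypothetical sketch of the "skill blocks recombined into recipes" idea.
# Block names and parameters are invented; Artly's Skill Engine is not public.
from dataclasses import dataclass

@dataclass
class SkillBlock:
    name: str      # e.g. a motion learned from a champion barista's demonstrations
    params: dict   # tunable details captured during training

    def execute(self) -> None:
        print(f"executing {self.name} with {self.params}")

# A recipe is just an ordered recombination of the same learned blocks.
flat_white = [
    SkillBlock("grind_and_tamp", {"dose_g": 18}),
    SkillBlock("pull_shot", {"time_s": 28}),
    SkillBlock("steam_milk", {"texture": "microfoam"}),
    SkillBlock("pour_latte_art", {"pattern": "tulip"}),
]

for block in flat_white:
    block.execute()
```

Swapping in a new menu item then means reordering or re-parameterizing blocks rather than reprogramming the arm, which is the flexibility the quote above is pointing at.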

“The training in our environment is not just about action: it is about judgment, and a lot of that judgment is visual. You have to teach the robot what good frothing or good pouring looks like, and sometimes you even have to show it bad examples so it understands the difference.” That depth of teaching separates Artly’s approach from simpler automation. The engineering layer uses food-grade stainless steel and modular commercial components, wrapped in a warm, wood-clad shell that looks more like a small kiosk than industrial equipment.

A built-in digital kiosk handles ordering, while Artly’s AI stack combines real-time motion planning, computer vision, sensor fusion, and anomaly detection to keep quality consistent and operation safe in public spaces where people stand close and watch the whole process. “Our platform is like a recording machine for skills. We can record the skills of a specific person and let the robot repeat exactly that person’s way of doing things,” which means a café chain can effectively bottle a champion’s technique and deploy it consistently across multiple sites.

The ecosystem supports plug-and-play deployment, with remote monitoring, over-the-air updates, and centralized fleet management. A larger refrigerator and modular countertops in finishes like maple, white oak, and walnut let operators match different interiors. For a venue, that means specialty coffee without building a full bar, and for customers, it means a consistent drink and a bit of theater every time they walk up.

Bartender: The Same Arm, Trained for a Different Counter

The Bartender is an extension of the same idea, using the Artly AI Arm and Skill Engine to handle precise, hand-driven tasks behind a counter. Instead of focusing on espresso and milk, the robot learns careful measurement, shaking, or stirring techniques, and finishing touches that depend on timing and presentation, all captured from human experts and turned into repeatable workflows. “If the robot learns the technique of a champion, it can repeat that same pattern at different locations. No matter where it performs, it will always create the same result that person did,” Wang notes.

Dexterity is the key differentiator. The Bartender uses a dexterous robotic hand and wrist-mounted vision to pick up delicate garnishes, handle glassware, and move through sequences that normally require a trained pair of hands. The same imitation-learning approach that taught the BaristaBot to pour latte art is now applied to more complex motions, so the arm can execute them smoothly and consistently in a busy environment.

For a hospitality space, the Bartender offers a way to standardize recipes, maintain quality during peak hours, and free human staff to focus on conversation and creativity rather than repetitive prep. Because it shares hardware and software with the BaristaBot, it fits into the same remote monitoring and fleet-management framework, making it easier to run multiple robotic stations across locations without reinventing operational infrastructure for each new skill type.

Artly AI at CES 2026: From Robot Coffee to a Skill Engine for Craft

The mini BaristaBot and the Bartender are not just two clever machines; they are early examples of what happens when a universal skill engine and a capable arm are pointed at crafts that usually live in human hands. For designers and operators, that means automation that respects the shape of the work, and for visitors at CES 2026, it is a glimpse of a future where robots learn from experts and then quietly keep that craft alive, one cup or glass at a time, without demanding that every venue become bigger or that every drink become simpler just to fit a machine.

The post Artly Robots Master Latte Art and Drinks for CES 2026 Debut first appeared on Yanko Design.