JBL’s AI Wireless Speakers Can Remove Vocals, Guitars, or Drums From Any Song While You’re Jamming

Walk into any rehearsal space and you will see the usual suspects. A combo amp in the corner, a Bluetooth speaker on a shelf, maybe a looper pedal on the floor. Each tool has a single job. One makes your guitar louder, one plays songs, one repeats whatever you feed it. You juggle them to build something that feels like a band around you.

JBL’s BandBox concept asks a different question. What if one box could understand the music it is playing and reorganize it around you in real time? The Solo and Trio units use AI to separate vocals, guitars, and drums inside finished tracks, so you can mute, isolate, or replace parts on the fly. Suddenly the speaker is not just a playback device. It becomes the drummer who never rushes, the backing guitarist who never complains, and the invisible producer nudging you toward tighter practice.

Designer: JBL

This ability to deconstruct any song streamed via Bluetooth is the core of the BandBox experience. The AI stem processing happens locally, inside the unit, without needing an internet connection or a cloud service. You can pull up a track, instantly mute the original guitar part, and then step in to play it yourself over the remaining bass, drums, and vocals. This is a fundamental shift in how musicians can practice. Instead of fighting for space in a dense mix, you create a pocket for yourself, turning passive listening into an interactive rehearsal.
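JBL hasn’t said what model the BandBox runs, but the same idea is easy to demonstrate with open-source tools. Here is a minimal sketch using Spleeter, a well-known source-separation library, with hypothetical file names; presumably the BandBox does something comparable in embedded form:

```python
import soundfile as sf
from spleeter.separator import Separator

separator = Separator("spleeter:4stems")       # vocals / drums / bass / other
separator.separate_to_file("song.mp3", "out")  # writes out/song/<stem>.wav

# "Mute the guitar": rebuild the mix from every stem except "other",
# the bucket where Spleeter puts guitars and remaining instruments.
mix, rate = None, None
for stem in ("vocals", "drums", "bass"):
    data, rate = sf.read(f"out/song/{stem}.wav")
    mix = data if mix is None else mix + data
sf.write("backing_track.wav", mix, rate)
```

The separation quality of a small embedded model will differ from a desktop library, but the workflow is the same: split once, then mute or solo stems at will.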

The whole system is self-contained, designed to work straight out of the box without a pile of extra gear. Both models come equipped with a selection of built-in amplifier models and effects, so you can shape your tone directly on the unit. Essentials like a tuner and a looper are also integrated, which streamlines the creative process. You can lay down a rhythm part, loop it, and then practice soloing over it without ever touching an external pedal. It is this thoughtful integration that makes the BandBox feel less like a speaker and more like a complete, portable music-making environment.
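To give a feel for the looper workflow, here is a toy version in Python, assuming the sounddevice library and a working audio input; the real unit does this in dedicated DSP, not a script:

```python
import sounddevice as sd

RATE, SECONDS = 44100, 8

print("Recording an 8-second rhythm part...")
loop = sd.rec(int(SECONDS * RATE), samplerate=RATE, channels=1)
sd.wait()  # block until the take is done

print("Looping. Solo over it; Ctrl-C to stop.")
while True:
    sd.play(loop, RATE)
    sd.wait()
```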

The BandBox Solo is the most focused version of this idea, built for the individual. It is a compact, easily carried device with a single combo input that accepts either a guitar or a microphone. This makes it an obvious choice for singer-songwriters or any musician practicing alone. The form factor is all about convenience, with a solid build and a top-mounted handle. A battery life of around six hours means you could take it to a park for an afternoon busking session or just move it around the house without being tethered to a wall outlet. It is a self-sufficient creative station in a small package.

When practice involves more than one person, the BandBox Trio provides the necessary expansion. It is built on the same AI-powered platform but scales up the hardware for group use. The most significant change is the inclusion of four instrument inputs, which transforms the unit into a miniature, portable PA system. A small band or a duo can plug in multiple guitars, a bass, and a microphone, all running through the same box. This is a clever solution for impromptu jam sessions, stripped-down rehearsals, or music classrooms where setting up a full mixer and multiple amps is too cumbersome.

Both units share a clean, modern design that aligns with JBL’s broader product family. The controls seem to be laid out for quick, intuitive access, a must for musicians who need to make adjustments without interrupting their flow. Connectivity extends beyond just playing music; a USB-C port allows the BandBox to double as an audio interface. You can connect it directly to a computer or tablet to record your sessions or lay down a demo, adding a layer of studio utility that makes the device even more versatile. It is not just for practice, it is for capturing the ideas that come from it.

Of course, none of this would matter if the sound were not up to par. JBL’s reputation in audio engineering creates a high expectation, and the BandBox aims to meet it by delivering a full-range sound that can handle both a dynamic instrument and a complex backing track simultaneously. The goal is to provide a clear, responsive guitar tone that cuts through, while the underlying track remains rich and detailed. This dual functionality is key, ensuring it performs as well for casual Bluetooth listening as it does as a dedicated practice amp.

The JBL BandBox series has started its rollout in Southeast Asian markets, with promotions and availability already noted in the Philippines and Malaysia. A wider international release is expected to follow. While pricing will fluctuate by region, the BandBox Solo appears to be positioned competitively against other popular smart amps on the market. The Trio, with its expanded inputs and group-oriented features, will naturally sit at a higher price point, offering a unique proposition as an all-in-one portable rehearsal hub.


TWS Earbuds With Built-In Cameras Put ChatGPT’s AI Capabilities In Your Ears

Everyone is racing to build the next great AI gadget. Some companies are betting on smartglasses, others on pins and pocket companions. All of them promise an assistant that can see, hear, and understand the world around you. Very few ask a simpler question. What if the smartest AI hardware is just a better pair of earbuds?

This concept imagines TWS earbuds with a twist. Each bud carries an extra stem with a built-in camera, positioned close to your natural line of sight. Paired with ChatGPT, those lenses become a constant visual feed for an assistant that lives in your ears. It can read menus, interpret signs, describe scenes, and guide you through a city without a screen. The form factor stays familiar, the capabilities feel new. If OpenAI wants a hardware foothold, this is the kind of product that could make AI feel less like a demo and more like a daily habit. Here’s why a camera in your ear might beat a camera on your face.

Designer: Emil Lukas

The industrial design has a sort of sci-fi inhaler vibe that I weirdly like. The lens sits at the end of the stem like a tiny action cam, surrounded by a ring that doubles as a visual accent. It looks deliberate rather than tacked on, which matters when you are literally hanging optics off your head. The colored shells and translucent tips keep it playful enough that it still reads as audio gear first, camera second.

The cutaway render looks genuinely fascinating. You can see a proper lens stack, a sensor, and a compact board that would likely host an ISP and Bluetooth SoC. That is a lot of silicon inside something that still has to fit a driver, battery, microphones, and antennas. Realistically, any heavy lifting for vision and language goes straight to the phone and then to the cloud. On-device compute at that scale would murder both battery and comfort.

All that visual data has to be processed somewhere, and it is not happening inside the earbud. On-device processing for GPT-4 level vision would turn your ear canal into a hotplate. This means the buds are basically streaming video to your phone for the heavy lifting. That introduces latency. A 200 millisecond delay is one thing; a two second lag is another. People tolerate waiting for a chatbot response at their desk. They will absolutely not tolerate that delay when they ask their “AI eyes” a simple question like “which gate am I at?”
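The arithmetic behind that worry is easy to lay out. Every number below is an assumption for a phone-tethered pipeline, not a measurement from this concept:

```python
# Back-of-envelope latency budget for "AI eyes" (all figures are assumptions).
budget_ms = {
    "capture + encode on the bud": 50,
    "radio hop to the phone": 100,
    "upload a frame to the cloud": 300,
    "vision-language inference": 800,
    "text-to-speech + playback": 250,
}
total = sum(budget_ms.values())
print(f"round trip ≈ {total} ms")  # ~1.5 s: fine at a desk, grating on the move
```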

Then there is the battery life, which is the elephant in the room. Standard TWS buds manage around five to seven hours of audio playback. Adding a camera, an image signal processor, and a constant radio transmission for video will absolutely demolish that runtime. Camera-equipped wearables like the Ray-Ban Meta glasses get about four hours of mixed use, and those have significantly more volume to pack in batteries. These concept buds look bulky, but they are still tiny compared to a pair of frames.

The practical result is that these would not be all-day companions in their current form. You are likely looking at two or three hours of real-world use before they are completely dead, and that is being generous. This works for specific, short-term tasks, like navigating a museum or getting through an airport. It completely breaks the established user behavior of having earbuds that last through a full workday of calls and music. The utility would have to be incredibly high to justify that kind of battery trade-off.
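A back-of-envelope power budget shows where that estimate comes from. The cell capacity and per-component draws are guesses for a bud this size, not specifications:

```python
# Hypothetical figures for a camera-equipped bud, not a spec sheet.
battery_mwh = 60 * 3.7  # ~60 mAh cell at 3.7 V ≈ 222 mWh
draw_mw = {"audio playback": 30, "camera + ISP": 120, "video-streaming radio": 150}

hours = battery_mwh / sum(draw_mw.values())
print(f"≈ {hours:.1f} h continuous")  # ~0.7 h; duty-cycling the camera is how
                                      # you stretch that into 2-3 h of mixed use
```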

From a social perspective, the design is surprisingly clever. Smartglasses failed partly because the forward-facing camera made everyone around you feel like they were being recorded. An earbud camera might just sneak under the radar. People are already accustomed to stems sticking out of ears, so this form factor could easily be mistaken for a quirky design choice rather than a surveillance device. It is less overtly aggressive than a lens pointed from the bridge of your nose, which could lower social friction considerably.

The cynical part of me wonders about the field of view. Ear level is better than chest level, but your ears do not track your gaze. If you are looking down at your phone while walking, those cameras are still pointed forward at the horizon. You would need either a very wide angle lens, which introduces distortion and eats processing power for correction, or you would need to train yourself to move your whole head like you are wearing a VR headset. Neither is ideal, but both are solvable with enough iteration. What you get in return is an AI that can actually participate in your environment instead of waiting for you to pull out your phone and aim it at something. That shift from reactive to ambient is the entire value proposition, and it only works if the cameras are always positioned and always ready.


Stickerbox: Kids Say an Idea, AI Prints It as a Sticker in Seconds

Smart speakers for kids feel like a gamble most parents would rather skip. The promise is educational content and hands-free help, but the reality often involves screens lighting up at bedtime, algorithms deciding what comes next, and a lingering suspicion that someone is cataloging every question your child shouts into the room. The tension between letting kids explore technology and protecting their attention spans has never felt sharper, and most connected toys lean heavily toward the former without much restraint.

Stickerbox by Hapiko offers a quieter trade. It looks like a bright red cube, measures 3.75 inches on each side, and does one thing when you press its white button. Kids speak an idea out loud, a dragon made of clouds or a broccoli superhero, and the box prints it as a black-and-white sticker within seconds. The interaction feels less like talking to Alexa and more like whispering to a magic printer that happens to understand imagination.

Designer: Hapiko

The design stays deliberately simple. A small screen shows prompts like “press to talk,” while a large white button sits below, easy for small hands to press confidently. Stickers emerge from a slot at the top, fed by thermal paper rolls. The starter bundle includes three BPA-free paper rolls, eight colored pencils, and a wall adapter, turning the cube into a complete creative kit rather than just another gadget waiting for accessory purchases to feel useful.

The magic happens in three beats. A kid presses the button and speaks their prompt, as silly or specific as they want. The box sends audio over Wi-Fi to a generative AI model that turns phrases into line art. Within seconds, a thermal printer traces the image onto sticker paper, and the finished piece emerges from the top, ready to be torn, peeled, and stuck onto notebooks, walls, or comic book pages at home.
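The whole loop is simple enough to sketch. Hapiko hasn’t published its stack, so the model names, prompt wording, and file handling below are assumptions; only the shape of the pipeline (speech to text, text to line art, dithering to one bit for the thermal head) follows from the description above.

```python
from io import BytesIO

import requests
from openai import OpenAI
from PIL import Image

client = OpenAI()

# 1. Button press -> recorded audio -> text prompt.
with open("button_press.wav", "rb") as f:
    idea = client.audio.transcriptions.create(model="whisper-1", file=f).text

# 2. Text -> black-and-white line art suited to sticker paper.
url = client.images.generate(
    model="dall-e-3",
    prompt=f"simple black-and-white line-art sticker of {idea}, "
           "thick outlines, no shading",
).data[0].url

# 3. Dither to 1-bit, the only thing a thermal print head can render.
art = Image.open(BytesIO(requests.get(url).content)).convert("1")
art.save("sticker.png")  # hand off to the printer driver from here
```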

What keeps this from feeling like surveillance is the scaffolding Hapiko built around the AI. The microphone only listens when the button is pressed, so there’s no ambient eavesdropping happening in the background. Every prompt runs through filters designed to block inappropriate requests before reaching the image generator. Voice recordings are processed and discarded immediately, not stored for training. The system is kidSAFE COPPA certified, meaning it passed third-party audits for data handling and child privacy standards.

Thermal printing sidesteps ink cartridge mess entirely. Each paper roll holds material for roughly sixty stickers, and refill packs of three cost six dollars, which works out to just over three cents per sticker. The catch is that Stickerbox only accepts its own branded paper; using generic rolls will damage the mechanism. The bigger design choice is that every sticker is printed in monochrome, which is intentional. It forces kids to pick up pencils and spend time coloring, turning a quick AI trick into a slower, more tactile ritual.

Stickerbox gestures toward a version of AI-infused play that feels less anxious. The algorithm works quietly, translating spoken prompts into something kids can hold, cut, and trade, but the most important part happens after the sticker prints. It ends up taped inside homemade comic books, stuck on bedroom doors, or colored during rainy afternoons. The box becomes forgettable infrastructure, which might be the kindest thing you can say about a piece of children’s technology designed for creative independence.


These Ceramic Cups Were Designed And Manufactured Entirely By An Algorithm

Sure, we could sit and fearmonger about how AI will one day replace designers, but here’s an alternative – what if AI didn’t replace us, but instead created a parallel tradition? Like you’ve got Japanese ceramics, Italian ceramics, and Turkish ceramics, what if you could have AI ceramics? Not a replacement, not a substitute, just another channel. That’s what BKID envisioned with ‘Texture Ware’, a series of cups designed entirely by AI and manufactured using 3D printing. Minimal human intervention, and minimal human cultural input.

Trained on a vast repository of data, the AI draws on its own database to make textural products that humans then use. BKID’s results look like nothing we’ve seen before: each cup in the Texture Ware series looks almost alien, an exaggeration of textures found in nature taken to an extreme. You wouldn’t find such cups in a handicrafts bazaar or your local IKEA. They’re so different that they exist as a separate entity within the industry, not a replacement of the industry itself.

Designer: BKID co

The workflow strings together different AI services to go from prompt to cup. The only real input is a text prompt from a human specifying what sort of texture they want. The AI generates the texture image using ChatGPT’s DALL-E, creates a cup out of it in Midjourney, and then translates the 2D image of a cup into 3D using Vizcom. The 3D file then gets 3D printed, eliminating pretty much any actual human intervention as the machine models and manufactures the design from start to finish.
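Only the first stage of that chain is scriptable: Midjourney and Vizcom are app-driven tools without public APIs, so those steps stay manual. As a rough illustration of the texture step (the prompt here is mine, not BKID’s):

```python
from openai import OpenAI

client = OpenAI()

# Step one of the pipeline: a human text prompt becomes a texture image.
texture_url = client.images.generate(
    model="dall-e-3",
    prompt="macro texture of columnar basalt, tileable, high contrast",
).data[0].url
print(texture_url)  # this image then feeds the manual cup and 2D-to-3D stages
```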

“What would normally require a considerable amount of time if crafted entirely by hand was instead realized through two to three generative tools and a process of repeated trial and error,” says BKID. “The exaggerated expressions and omitted forms that emerge in each stage invite the audience to experience the subtle differences in sensibility between traditional handcraft and craft shaped by generative software.”

Users can make cups inspired by brutalist textures of concrete, fuzzy textures of moss, rustic textures of wood-bark, wrinkled textures of crumpled paper, raw textures of coal, columnar textures of basalt rock, porous textures of coral, or even alien-like textures of fungi. Each cup looks unique and the AI never repeats itself, which means even cups within the same texture category could be wildly different.

The result truly feels alien, because the AI approaches design using an entirely different set of parameters. Their imperfections become design details; their lack of ergonomics or awareness becomes a unique design DNA. The result isn’t like any cup you’ve ever seen before, and that’s the point – it’s created by an AI that hasn’t ‘seen’ cups, hasn’t used cups, and doesn’t test its output. That being said, the cups are still usable because of the parameters set by the human. The cups don’t have holes, and hold enough liquid to be practical. They’re perfect for espresso, saké, or green tea, something that’s savored in tiny quantities in vessels that feel less utilitarian and more ritualistic.

What BKID’s experiment proves is that AI (at least in this case) won’t replace designers, it’ll exist independent of them. Can an AI make a cup exactly like a regular designer would? Absolutely… but there’s a better case to be made to have AI make things beyond human creativity and culture. These cups contort nature and textures into something that feels extremely new, in a way that allows AI-made cups and human-made cups to coexist peacefully.


The Cheapest Personal AI Device You Can Own: $50 Raspberry Pi Whisplay Runs Gemini, Claude, and ChatGPT

Smartphones were never really meant to be your AI sidekick. They juggle notifications, social feeds, and a dozen background services before they ever get around to being “smart.” Meanwhile, the first wave of dedicated AI gadgets from companies like Humane and Rabbit showed up with big promises, closed ecosystems, and short lifespans. When the money dried up, so did the hardware. A little Raspberry Pi Zero 2 W with a Whisplay HAT quietly sidesteps all of that. It is a DIY AI chat device that you own outright, that you can fix, reflash, or repurpose, and that can talk to Gemini, Claude, or ChatGPT without caring which startup is still solvent this quarter.

Instead of betting on a single company’s cloud, Whisplay treats AI as a replaceable part. The hardware gives you a screen, mic, speaker, and buttons, and leaves the “brain” up to you. If Gemini changes pricing, Claude adds features, or ChatGPT pulls ahead again, you can swap backends with a config file or a bit of code, not a new gadget. In a landscape where AI hardware keeps arriving as disposable, subscription-tethered experiments, this little open, modular box feels like the first honest attempt at a personal AI terminal that will not vanish the moment a runway spreadsheet turns red.

Designer: Jdaie

At its very core, the Whisplay HAT is, simply put, a clever little I/O board designed to give a Pi a face and a voice. It bolts directly onto the 40-pin GPIO header and provides everything needed for a conversational interface. You get a surprisingly crisp 1.96-inch color LCD for displaying text or animations, a WM8960 audio codec driving an onboard microphone and speaker, an RGB status LED, and a few programmable buttons for user input. It is not a standalone computer, but a purpose-built terminal that turns the Pi Zero into something you can actually talk to. The entire package matches the Pi Zero’s footprint, making for a compact and tidy build that feels intentional, not like a messy science fair project.

The choice of the Raspberry Pi Zero 2 W as the platform is telling. With its quad-core 1 GHz ARM Cortex-A53 CPU and just 512MB of RAM, it is nobody’s idea of a powerhouse. That is precisely the point. The Pi is not running the large language model; it is just a client. Its job is to capture audio, send a request over Wi-Fi, and then play back the response. This thin-client architecture is incredibly efficient, requiring minimal power and processing, which is perfect for an always-on desk companion. The heavy lifting is outsourced to the cloud API of your choice, leaving the Pi to handle the simple, tangible task of being the physical interface between you and the AI.

The actual magic is a simple, elegant pipeline that you control completely. Your code on the Pi captures audio from the Whisplay’s microphone, uses a speech-to-text engine to transcribe it, and then packages that text into an API call to Gemini or another LLM. When the response comes back, a text-to-speech engine converts it back into audio and plays it through the onboard speaker, while the LCD can show the text or a thinking animation. You can point it at Google’s Gemini API today and switch to a local Ollama server running on a spare Raspberry Pi 5 tomorrow if you feel like it. What’s so perfect about the Whisplay HAT is that it assumes companies and models will come and go, so it treats the LLM as a pluggable component. Today, that might be Gemini, Claude, or ChatGPT. Tomorrow, it might be some open model running on your own server. Either way, the little chatting device on your desk stays the same, happily piping audio in and out while you swap brains on the backend.
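That pipeline is small enough to sketch in full. The following is a minimal sketch under stated assumptions, not the project’s actual code: it assumes arecord and espeak are installed on the Pi, and that the backend speaks the OpenAI-compatible chat API (OpenAI does natively; Ollama exposes one at /v1 on port 11434). The hostname, model names, and file paths are placeholders.

```python
import subprocess
from openai import OpenAI

# Swap "brains" here: OpenAI's endpoint, or a local Ollama server on a Pi 5.
client = OpenAI(base_url="http://pi5.local:11434/v1", api_key="unused")
MODEL = "llama3.2"  # would be "gpt-4o-mini" etc. against OpenAI's own API

def listen(seconds=5, path="/tmp/question.wav"):
    # Capture from the Whisplay's WM8960 mic via ALSA.
    subprocess.run(["arecord", "-d", str(seconds), "-f", "cd", path], check=True)
    return path

def transcribe(path):
    # STT could run locally (whisper.cpp, Vosk); cloud Whisper shown here.
    with open(path, "rb") as f:
        return OpenAI().audio.transcriptions.create(
            model="whisper-1", file=f).text

def speak(text):
    # espeak renders straight to the default ALSA device (onboard speaker).
    subprocess.run(["espeak", text], check=True)

question = transcribe(listen())
reply = client.chat.completions.create(
    model=MODEL, messages=[{"role": "user", "content": question}],
).choices[0].message.content
speak(reply)
```

Changing backends really is a one-line edit to base_url and MODEL, which is the whole argument for treating the LLM as a pluggable part.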

That brings us to the real kicker. The Whisplay HAT costs about thirty-five dollars. Paired with a fifteen-dollar Pi Zero 2 W, you have the core of a highly capable, endlessly customizable AI device for fifty bucks. Compare that to the seven-hundred-dollar Humane Ai Pin or the two-hundred-dollar Rabbit R1, both of which are functionally just API clients tied to a single, proprietary service. This DIY approach is not just cheaper; it represents a fundamentally different, more sustainable philosophy. It is a platform for tinkering and ownership, not a sealed product designed to be consumed and eventually discarded. It is a starting point, and in a field moving this fast, a good starting point is infinitely more valuable than a dead end.


Phone-sized 138g E-Reader Answers Questions About What You’re Reading

E-readers typically force a trade-off between portability and capability. Compact models fit easily in bags but often lack processing power or features beyond basic reading. Larger tablets offer more functionality but become awkward to carry daily. Most devices focus on storage capacity and screen size while ignoring the need for smarter tools that support active reading and deeper engagement with content rather than just passive consumption.

The Viwoods AIPaper Reader measures roughly six inches diagonally and weighs just one hundred thirty-eight grams, making it slim enough to slip into coat pockets or small bags without adding noticeable bulk. Running Android 16 with 4G cellular support, the device combines traditional E Ink reading comfort with AI-powered features that answer questions, highlight key passages, and help build personal knowledge bases directly from whatever you’re reading at the moment.

Designer: Viwoods

The device’s profile is just 6.7mm thin, which makes it feel more like carrying a smartphone than a dedicated reading device. The minimalist design uses slim bezels around the 6.13-inch Carta 1300 E Ink display, keeping the footprint compact while maintaining enough screen real estate for comfortable reading without constant page turns. Available in black and white or color display versions, the aesthetic stays clean and understated throughout.

The three hundred PPI resolution keeps text crisp across different font sizes and formats, while the adjustable front light means reading happens comfortably whether you’re outside in daylight or in bed at midnight. The E Ink display eliminates the eye strain that comes from staring at backlit phone or tablet screens for extended periods, which matters during long reading sessions or when your eyes already feel tired.

Integrated AI runs through ChatGPT-5, Gemini, or DeepSeek, depending on preference, offering instant answers to questions about content without leaving the page or opening separate apps. Highlight a passage and ask for clarification; the AI responds contextually based on what you’re reading. Save interesting excerpts to the knowledge base feature, which organizes captured passages into a searchable personal library that builds over time.
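Functionally, the highlight-and-ask feature boils down to a single contextual request. This is not Viwoods’ firmware, just the shape of the call, with a stand-in model name:

```python
from openai import OpenAI

client = OpenAI()  # the device lets settings point at GPT, Gemini, or DeepSeek

def ask_about_passage(passage: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; the reader's settings pick the model
        messages=[
            {"role": "system",
             "content": "Answer using only the highlighted passage."},
            {"role": "user",
             "content": f"Passage:\n{passage}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```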

The octa-core processor and 4GB of RAM keep navigation smooth despite E Ink’s inherent display refresh limitations. Multiple refresh modes adjust speed versus clarity depending on whether you’re reading static text or navigating menus. The device handles PDF, EPUB, MOBI, and other common formats without requiring conversion software or workarounds before loading files onto the reader.

4G cellular connectivity separates this from most E Ink devices, enabling cloud library access, book downloads, and AI features anywhere cell service reaches without hunting for Wi-Fi networks. The 2580mAh battery supports weeks of typical reading between charges, given E Ink’s minimal power consumption when displaying static pages. Android 16 and Google Play support mean standard reading apps install alongside specialized ones, giving users flexibility beyond proprietary ecosystems that lock you into specific bookstores or formats.

The Viwoods AIPaper Reader sits between simple e-readers that only display text and full tablets that introduce too many distractions through notifications and competing apps demanding attention. It delivers AI-assisted reading, organized knowledge capture, cellular connectivity for anywhere access, and genuine portability within a form factor slim enough to disappear into daily carry routines without demanding the pocket space or mental bandwidth that smartphones and larger tablets constantly require.


Your Desk Lamp Just Got Smarter (And Took Notes from Inception)

Remember that spinning top from Inception? The one that determined whether you were in a dream or reality? Well, a design team called SUPD took that iconic object and asked themselves a pretty interesting question: what if a product could help you enter a state of deep focus the same way that totem granted entry into the dream world? The result is DEEP, an AI-powered desk stand that’s making me rethink everything I thought I knew about workspace lighting.

Let’s be real for a second. We’re all drowning in distractions. Between notifications pinging, emails flooding in, and the constant pull of social media, achieving genuine focus feels like a superpower these days. And if you’ve ever tried to create the perfect work environment, you know the drill. You need your desk lamp positioned just right, white noise playing at the perfect volume, maybe some aromatherapy going, and oh yeah, all those tangled cables creating visual chaos that breaks your concentration every time you glance at them. DEEP tackles this modern problem with a surprisingly elegant solution: why scatter your focus tools across multiple devices when you could integrate them into one sleek package?

Designer: SUPD

The product itself looks like it stepped out of a near-future sci-fi film. It’s a desk lamp, sure, but it’s also packing a camera, speakers, and AI capabilities that work together to create what the designers call “optimized immersion environments.” The best part? Getting started is wonderfully simple. You turn the main power button, which is designed to mimic that spinning top from Inception (a detail that definitely made me smile), and then you just talk to it. Tell DEEP what you’re about to do, whether that’s studying, coding, reading, or creative work, and it automatically adjusts your environment to match.

Think about that for a moment. No more fiddling with multiple apps, no more adjusting three different devices, no more breaking your concentration before you’ve even started working. You speak, it listens, and your workspace transforms itself.

But DEEP doesn’t stop at automation. The designers clearly thought about the reality of personal preferences. Those side buttons let you fine-tune the lighting and sound to your exact specifications, and here’s where it gets smart: the system asks if you want to save your adjustments. Over time, DEEP learns your preferences for different activities, becoming more personalized the more you use it. The camera positioned at eye level isn’t just there for show. It’s analyzing you, checking your immersion status, and providing feedback to help maintain your focus. It’s like having a productivity coach built into your desk lamp, minus the awkward small talk.
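SUPD hasn’t shared internals, so as a thought experiment, the save-and-recall loop could be as simple as a per-activity preset file. Every field name, default, and value below is illustrative:

```python
import json
from pathlib import Path

PRESETS = Path("deep_presets.json")
DEFAULTS = {"lux": 400, "kelvin": 4000, "sound": "white noise"}

def load_presets():
    return json.loads(PRESETS.read_text()) if PRESETS.exists() else {}

def start_session(activity):
    # "Tell DEEP what you're about to do" -> recall the saved environment.
    return load_presets().get(activity, DEFAULTS)

def save_adjustments(activity, **settings):
    # Called only after the user confirms "save these adjustments".
    presets = load_presets()
    presets[activity] = {**DEFAULTS, **settings}
    PRESETS.write_text(json.dumps(presets, indent=2))

env = start_session("coding")                      # first run -> DEFAULTS
save_adjustments("coding", lux=350, sound="rain")  # next session recalls this
```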

I’m particularly taken with the attention to physical design details. Those red lines running along the top and front of the product aren’t just aesthetic choices. They help you maintain your preferred angles after adjusting the lamp’s position, creating a visual reference that makes it easier to remember your ideal setup. It’s the kind of thoughtful detail that separates good design from great design. The four-directional speakers at the base create spatial audio for immersion, whether that’s white noise, nature sounds, or whatever helps you slip into that flow state. And that integrated approach means no more cable spaghetti cluttering your desk, no more hunting for the right device, no more mental overhead just to start working.

What strikes me most about DEEP is how it recognizes that deep focus isn’t a luxury anymore. It’s a core skill, maybe even a competitive advantage in our attention-fractured world. The difference between weak concentration and strong concentration directly translates to productivity, creativity, and the quality of our work. DEEP doesn’t just acknowledge this reality; it builds an entire product philosophy around supporting it.

Is this the future of workspace design? Possibly. At minimum, it’s a fascinating glimpse at how AI integration can solve real problems without adding complexity. Sometimes the best technology is the kind that gets out of your way and just lets you work.


AI Lantern Speaker Designed to Reduce Anxiety With Light and Sound

Most home gadgets are designed for function, not feeling or emotional connection. Lamps and speakers fill their roles effectively enough, but rarely do they offer comfort or companionship during quiet nights or moments when you need a little extra calm to soothe anxiety. Finding a device that addresses both practical needs and emotional well-being remains surprisingly difficult in modern home technology.

Calmtern reimagines what a home object can be by blending a portable lantern with an AI speaker in one thoughtful package. It turns light and sound into a source of emotional support, making every room feel a little more welcoming and a lot more personal. The concept is simple yet powerful: bring comfort wherever you go in your home, whenever you need it most.

Designer: Hyun Jin Oh

Calmtern’s silhouette is inspired by classic lanterns, with a translucent upper body for soft, diffused light and a ribbed base that houses the speaker and controls. The integrated handle makes it easy to carry from room to room, hang on a minimalist stand, or set on a bedside table wherever comfort is needed. The portable form invites movement and flexibility throughout your daily routine.

The minimalist design, matte white finish, and lack of visible branding let Calmtern blend into any space seamlessly, from modern apartments to cozy bedrooms and hallways. The ribbed texture provides visual interest and tactile grip, while the clean silhouette feels timeless rather than trendy. It’s a device that looks as good on display as it does tucked away when not in use.

The lantern emits a gentle, warm glow designed to ease anxiety and create a cozy atmosphere, perfect for late-night reading, winding down before bed, or simply making a dark room feel safe and inviting. Touch controls on the top panel make it easy to adjust brightness or volume without fumbling for switches or apps in the dark when you’re half asleep.

Calmtern is designed to move with you throughout your daily life and routines. Use it as a reading lamp beside your favorite chair, a bedside companion that plays calming sounds for sleep, or a portable speaker for music and podcasts in any room. The rechargeable design means it’s just as useful on a patio as in a hallway, and the gentle light is ideal for nighttime trips around the house.

Beyond practical functionality, Calmtern is a calming presence that helps reduce feelings of loneliness or anxiety when living alone, making the home feel warmer and more inviting during difficult moments. The combination of soft light, smart sound, and intuitive controls creates a daily ritual of comfort and relaxation that goes beyond what typical smart home devices offer users.

The sculptural form and ambient glow turn Calmtern into a visual anchor for any room, sparking conversation and encouraging moments of pause in otherwise hectic days. For anyone who wants their home to feel as good as it looks while maintaining simplicity and emotional comfort, this concept offers a compelling vision of design where technology and well-being move together naturally.


AI-powered smart tea set creates narratives from stories shared by friends

AI can be found almost everywhere these days, but most people will probably be familiar with generative AI like ChatGPT. These are mostly encountered on computers and phones because that’s where they make the most sense, but their applications can definitely go beyond that limited scope. These conversational AIs can, for example, be embedded in anything that has a computer, a microphone, and a speaker – literally any object you can imagine.

Yes, it might result in an odd combination that challenges your notions of what AI chatbots can do for you. This smart tea set concept, for example, is a rather intriguing example of this idea, weaving technology, tea-drinking rituals, and social bonds in an unexpected way.

Designers: Kevin Tang, Kelly Fang

ChatGPT and others like it have started to approach the so-called “uncanny valley” in a totally non-visual way. The responses they give sound or read so natural that it takes an expert to distinguish them from human output. Talking to these chatbots almost feels like talking to someone, perhaps a friend who is willing to hear how your day went.

That’s the kind of experience that gpTea, a play on the brewed drink and this type of generative AI, wants to bring in a rather novel way. As a smart tea set, it not only brews tea but even tips the kettle forward to automatically pour the drink into a specially designed cup. Impressive as that may seem, that’s not even its most notable feat.


gpTea’s key feature is actually in interactive storytelling that weaves the responses of friends and family separated by distance and connected only through the Internet using this smart tea set. It asks you how your day went and, depending on your response, it might share a similar story given by another friend or loved one in the past. The more people use it, the bigger and longer the narrative grows. It’s almost like developing an oral tradition or history, except one that’s stored in the memory of an AI.
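The designers haven’t published how that narrative memory works, but embedding-based recall is one plausible sketch: store every story with a vector, then surface the closest story told by someone else. The model choice and in-memory store below are stand-ins, not gpTea’s implementation.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()
memory = []  # (speaker, story, embedding) triples, one per tea session

def embed(text):
    vec = client.embeddings.create(
        model="text-embedding-3-small", input=text).data[0].embedding
    return np.array(vec)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def share_story(speaker, story):
    vec = embed(story)
    others = [entry for entry in memory if entry[0] != speaker]
    memory.append((speaker, story, vec))
    if not others:
        return None  # first story in the pot; nothing to weave yet
    name, text, _ = max(others, key=lambda e: cosine(vec, e[2]))
    return f"{name} once shared something similar: {text}"
```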


Another interesting feature of gpTea is the glass cup itself, which has a circular display at the bottom. The AI also generates images related to the story it’s telling, making it feel like you’re using magic to see the scene inside the cup. Admittedly, it’s a rather convoluted and complex way of sharing stories with friends when you can just talk to each other, but it’s still an interesting application of AI that actually tries to build connections between humans who are physically far apart.


EliteMini AI370: The Tiny Windows Mini-PC Built to Outperform Apple’s M4 Mac mini

You know how every time Apple launches a feature on the iPhone, Android people rush to point out they did it first, or they did it better? If you’re a Windows fan, this post might just be perfect for you. At the end of last month, Apple debuted the M4 Mac mini, surprising us not just with a chip upgrade, but a size downgrade too. A fraction of its original size, this newer Mac mini was tailored for Apple’s AI (or Apple Intelligence), and was designed to be a functional powerhouse. Not to be outdone, however… it seems like MinisForum has a Windows-based answer to the new Mac mini.

The EliteMini AI370 may be a bit of a handful name-wise, but it’s a handful when it comes to performance, ports, and portability too. Powered by AMD’s latest AI-ready Ryzen processor and the Radeon 890M, the EliteMini has 12 cores, 24 threads, and 50 TOPS of AI processing, ready to easily handle any demanding task from gaming to video editing or even working with AI models without breaking a sweat. The entire device measures just 5 inches across, making it exactly as small as the Mac mini, albeit with way more ports… and perhaps the most important feature – a front-facing power button.

Designer: MinisForum

Under the hood, the EliteMini AI370 boasts AMD’s Ryzen AI 9 HX 370 processor, which makes multitasking a breeze. With 12 cores and 24 threads, this chip is engineered for the heavy workloads you’d typically assign to a full-sized desktop, handling everything from advanced editing to 3D rendering with ease. Thanks to AMD’s XDNA2 architecture, this processor includes a dedicated Neural Processing Unit (NPU), delivering up to 50 TOPS (trillion operations per second) in AI power. If you’re working with AI applications, real-time rendering, or advanced editing software, this kind of performance is a huge asset, enhancing productivity while keeping things running smoothly.

Graphics enthusiasts and gamers will appreciate the Radeon 890M integrated graphics. Unlike many compact PCs that struggle with graphical processing, the EliteMini is geared for high-quality visuals, pushing frame rates above 60 FPS in many modern titles. This makes it more than capable for gaming and intensive creative applications. Having this level of integrated GPU performance means you won’t need to invest in an external GPU—perfect if you’re tight on desk space or don’t want extra hardware cluttering your setup.

Memory and storage are equally robust in the EliteMini AI370, with 32GB of DDR5 memory running at a fast 7500 MT/s. This speed is a lifesaver for multitasking, allowing you to work across several applications without stalling. Storage options are equally impressive, supporting up to 4TB of PCIe NVMe SSD. That’s plenty of space for large project files, software libraries, and extensive media, while the SSD’s high-speed access means you won’t be stuck waiting around for files to load. For everything else, there are ports on both the front as well as the back.

All this power and performance gets packed into a compact and accessible device, fitting neatly on any desk setup. The 5-inch form factor is easy to overlook, but don’t let its size fool you—this mini-PC holds its own. For users who need a flexible and minimal setup, the EliteMini offers a front-facing USB-C setup and headphone jack, while ports on the back include three USB 4.0, HDMI, and an Ethernet connection that’s upgradable to 10GbE. As a (probably) unintentional jab to Apple, the EliteMini puts its power button smack-dab on the front of the mini PC too, making it MUCH more accessible than the Mac mini’s awkwardly placed power button.

Of course, all these features come at a price. The EliteMini AI370 starts at an introductory $1,099, with a regular price of $1,399, reflecting the high-end components and capabilities. For comparison, Apple’s Mac mini M4 starts at $599, but it offers far fewer ports and configuration options than the EliteMini. For Windows users who prioritize performance and customization, the EliteMini’s added capabilities and compact design make it a compelling alternative to the Mac mini. Besides, if you’re going to be working with AI models, you’d want a computer that’s AI-ready too, no?
