How to Spot Fake AI Products at CES 2026 Before You Buy

Merriam-Webster just named “slop” its word of the year, defining it as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” The choice is blunt, almost mocking, and it captures something that has been building for months: a collective exhaustion with AI hype that promises intelligence but delivers mediocrity. Over the past three months, that exhaustion has started bleeding into Wall Street. Investors, analysts, and even CEOs of AI companies themselves have been openly questioning whether we are living through an AI bubble. OpenAI’s Sam Altman warned in August that investors are “overexcited about AI,” and Google’s Sundar Pichai admitted to “elements of irrationality” in the sector. The tech industry is pouring trillions into AI infrastructure while revenues lag far behind, raising fears of a dot-com-style correction that could rattle the entire economy.

CES 2026 is going to be ground zero for this tension. Every booth will have an “AI-powered” sticker on something, and a lot of those products will be genuine innovations built on real on-device intelligence and agentic workflows. But a lot of them will also be slop: rebranded features, cloud-dependent gimmicks, and shallow marketing plays designed to ride the hype wave before it crashes. If you are walking the show floor or reading coverage from home, knowing how to separate real AI from fake AI is not just a consumer protection issue anymore. It is a survival skill for navigating a market that feeds on confusion and a general lack of awareness of what artificial intelligence actually is.

1. If it goes offline and stops working, it was never really AI

The simplest test for fake AI is also the most reliable: ask what happens when the internet connection drops. Real AI that lives on your device will keep functioning because the processing is happening locally, using dedicated chips and models stored in the gadget itself. Fake AI is just a thin client that calls a cloud API, and the moment your Wi-Fi cuts out, the “intelligence” disappears with it.

Picture a laptop at CES 2026 that claims to have an AI writing assistant. If that assistant can still summarize documents, rewrite paragraphs, and handle live transcription when you are on a plane with no internet, you are looking at real on-device AI. If it gives you an error message the second you disconnect, it is cloud-dependent marketing wrapped in an “AI PC” label. The same logic applies to TVs, smart home devices, robot vacuums, and wearables. Genuine AI products are designed to think locally, with cloud connectivity as an optional boost rather than a lifeline.

The distinction matters because on-device AI is expensive to build. It requires new silicon, tighter integration between hardware and software, and real engineering effort. Companies that invested in that infrastructure will want you to know it works offline because that is their competitive edge. Companies that skipped that step will either avoid the question or bury it in fine print. At CES 2026, press the demo staff on this: disconnect the device from the network and see if the AI features still run. If they do not, you just saved yourself from buying rebranded cloud software in a shiny box.
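For readers who want the distinction in concrete terms, here is a minimal sketch of what a cloud-dependent “AI feature” usually amounts to under the hood. The endpoint and function names are hypothetical, not any vendor’s real API; the point is simply that the only intelligence involved is an HTTP request to someone else’s server.

```python
# Hypothetical sketch of a cloud-thin-client "AI feature". The endpoint below
# does not exist; it stands in for whatever server the gadget phones home to.
import requests

def ai_summarize(text: str) -> str:
    resp = requests.post(
        "https://api.example-gadget.com/v1/summarize",  # hypothetical vendor endpoint
        json={"text": text},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["summary"]

# Offline, this raises requests.exceptions.ConnectionError almost immediately.
# Genuine on-device AI runs a model stored on the gadget itself, so the same
# feature keeps working with the Wi-Fi switched off.
```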

If your Robot Vacuum has Microsoft Copilot, RUN!

2. If it’s just a chatbot, it isn’t AI… it’s GPT Customer Care

The laziest fake AI move at CES 2026 will be products that open a chat window, let you type questions, and call that an AI feature. A chatbot is not product intelligence. It is a generic language model wrapper that any company can license from OpenAI, Anthropic, or Google in about a week, then slap their logo on top and call it innovation. If the only AI interaction your gadget offers is typing into a text box and getting conversational responses, you are not looking at an AI product. You are looking at customer service automation dressed up as a feature.

Real AI is embedded in how the product works. It is the robot vacuum that maps your home, decides which rooms need more attention, and schedules itself around your routine without you opening an app. It is the laptop that watches what you do, learns your workflow, and starts suggesting shortcuts or automating repetitive tasks before you ask. It is the TV that notices you always pause shows when your smart doorbell rings and starts doing it automatically. None of that requires a chat interface because the intelligence is baked into the behavior of the device itself, not bolted on as a separate conversation layer.

If a company demo at CES 2026 starts with “just ask it anything,” probe deeper. Can it take actions across the system, or does it just answer questions? Does it learn from how you use the product, or is it the same canned responses for everyone? Is the chat interface the only way to interact with the AI, or does the product also make smart decisions in the background without prompting? A chatbot can be useful, but it is table stakes now, not a differentiator. If that is the whole AI story, the company did not build AI into their product. They rented a language model and hoped you would not notice.

3. If the AI only does one narrow thing, it is probably just a renamed preset

Another red flag is when a product’s AI feature is weirdly specific and cannot generalize beyond a single task. A TV that has “AI motion smoothing” but no other intelligent behavior is not running a real AI model; it is running the same interpolation algorithm TVs have had for years, now rebranded with an AI label. A camera that has “AI portrait mode” but cannot recognize anything else is likely just using a basic depth sensor and calling it artificial intelligence. Real AI, especially the kind built into modern chips and operating systems, is designed to generalize across tasks: it can recognize objects, understand context, predict user intent, and coordinate with other devices.

Ask yourself: does this product’s AI learn, adapt, or handle multiple scenarios, or does it just trigger a preset when you press a button? If it is the latter, you are looking at a marketing gimmick. Fake AI products love to hide behind phrases like “AI-enhanced” or “AI-optimized,” which sound impressive but are deliberately vague. Real AI products will tell you exactly what the system is doing: “on-device object recognition,” “local natural language processing,” “agentic task coordination.” Specificity is a sign of substance. Vagueness is a sign of slop.

The other giveaway is whether the AI improves over time. Genuine AI systems get smarter as they process more data and learn from user behavior, often through firmware updates that improve the underlying models. Fake AI products ship with a fixed set of presets and never change. At CES 2026, ask demo reps if the product’s AI will improve after launch, how updates work, and whether the intelligence adapts to individual users. If they cannot give you a clear answer, you are looking at a one-time software trick masquerading as artificial intelligence.

Don’t fall for ‘AI Enhancement’ presets or buttons that don’t do anything related to AI.

4. If the company cannot explain what the AI actually does, walk away

Fake AI thrives on ambiguity. Companies that bolt a chatbot onto a product and call it AI-powered know they do not have a real differentiator, so they lean into buzzwords and avoid specifics. Real AI companies, by contrast, will happily explain what their models do, where the processing happens, and what problems the AI solves that the previous generation could not. If a booth rep at CES 2026 gives you vague non-answers like “it uses machine learning to optimize performance” without defining what gets optimized or how, that is a warning sign.

Push for concrete examples. If a smart home hub claims to have AI coordination, ask: what decisions does it make on its own, and what still requires manual setup? If a wearable says it has AI health coaching, ask: is the analysis happening on the device or in the cloud, and can it work offline while hiking in the wilderness? If a laptop advertises an AI assistant, ask: what can it do without an internet connection, and does it integrate with other apps (agentic) or just sit in a sidebar? Companies with real AI will have detailed, confident answers because they built the system from the ground up. Companies with fake AI will deflect, generalize, or change the subject.

The other test is whether the AI claim matches the price and the hardware. If a $200 gadget promises the same on-device AI capabilities as a $1,500 laptop with a dedicated neural processing unit, somebody is lying. Real AI requires real silicon, and that silicon costs money. Budget products can absolutely have useful AI features, but they will typically offload more work to the cloud or use simpler models. If the pricing does not line up with the technical claims, it is worth being skeptical. At CES 2026, ask what chip is powering the AI, whether it has a dedicated NPU, and how much of the intelligence is local versus cloud-based. If they cannot or will not tell you, that is your cue to move on.

5. Check if the AI plays well with others, or if it lives in a silo

One of the clearest differences between real agentic AI and fake “AI inside” products is interoperability. Genuine AI systems are designed to coordinate with other devices, share context, and act on your behalf across an ecosystem. Fake AI products exist in isolation: they have a chatbot you can talk to, but it does not connect to anything else, and it cannot take actions beyond its own narrow interface.

Samsung’s CES 2026 exhibit is explicitly built around AI and interoperability, with appliances, TVs, and smart home products all coordinated by a shared AI layer. That is what real agentic AI looks like: the fridge, washer, vacuum, and thermostat all understand context and can make decisions together without you micromanaging each one. Fake AI, by contrast, gives you five isolated apps with five separate chatbots, none of which talk to each other. If a product at CES 2026 claims to have AI but cannot integrate with the rest of your smart home, car, or workflow, it is not delivering the core promise of agentic systems.

Ask demo reps: does this work with other brands, or only within your ecosystem? Can it trigger actions in other apps or devices, or does it just respond to questions? Does it understand my preferences across multiple products, or does each device start from scratch? Companies that built real AI ecosystems will brag about cross-device coordination because it is hard to pull off and it is the whole point. Companies selling fake AI will either avoid the topic or try to upsell you on buying everything from them, which is a sign they do not have real interoperability.

6. When in doubt, look for the slop

The rise of AI-generated “slop” gives you a shortcut for spotting lazy AI products: if the marketing materials, product images, or demo videos look AI-generated and low-effort, the product itself is probably shallow too. Merriam-Webster defines slop as low-quality digital content produced in quantity by AI, and it has flooded everything from social media to advertising to product launches. Brands that cut corners on their own marketing by using obviously AI-generated visuals are signaling that they also cut corners on the actual product development.

Watch for telltale signs: weird proportions in product photos, uncanny facial expressions in lifestyle shots, text that sounds generic and buzzword-heavy with no real specifics, and claims that are too good to be true with no technical backing. Real AI products are built by companies that care about craft, and that care shows up in how they present the product. Fake AI products are built by companies chasing a trend, and the slop in their marketing is the giveaway. At CES 2026, trust your instincts: if the booth, the video, or the pitch feels hollow and mass-produced, the gadget probably is too.

The post How to Spot Fake AI Products at CES 2026 Before You Buy first appeared on Yanko Design.

This $2,899 Desktop AI Computer With RTX 5090M Lets You Cancel Every AI Subscription Forever

Look across the history of consumer tech and a pattern appears. Ownership gives way to services, and services become subscriptions. We went from stacks of DVDs to streaming movies online, from external drives for data and backups to cloud storage, from MP3s on a player to Spotify subscriptions, from one-time software licenses to recurring plans. But when AI arrived, it skipped the ownership phase entirely. Intelligence came as a service, priced per month or per million tokens. No ownership, no privacy. Just a $20-a-month fee.

A device like Olares One rearranges that relationship. It compresses a full AI stack into a desktop-sized box that behaves less like a website and more like a personal studio. You install models the way you once installed apps. You shape its behavior over time, training it on your documents, your archives, your creative habits. The result is an assistant that feels less rented and more grown, with privacy, latency, and long-term cost all tilting back toward the owner.

Designer: Olares

Click Here to Buy Now: $2,899 $3,999 (28% off) Hurry! Only 15/320 units left!

The pitch is straightforward. Take the guts of a $4,000 gaming laptop, strip out the screen and keyboard, put everything in a minimalist chassis that looks like Apple designed a chonky Mac mini, and tune it for sustained performance instead of portability. The box measures 320 x 197 x 55 mm, weighs 2.15 kg without the PSU, and pulls 330 watts under full load. Inside sits an Intel Core Ultra 9 275HX with 24 cores running up to 5.4 GHz and 36 MB of cache, the same chip you would find in flagship creator laptops this year. The GPU is an NVIDIA GeForce RTX 5090 Mobile with 24 GB of GDDR7 VRAM, 1824 AI TOPS of tensor performance, and a 175W max TGP. Pair that with 96 GB of DDR5 RAM at 5600 MHz and a PCIe 4.0 NVMe SSD, and you have workstation-level compute in a box smaller than most soundbars.

Olares OS runs on top of all that hardware, and it is open source, which means you can audit the code, fork it, or wipe it entirely if you want. Out of the box it behaves like a personal cloud with an app store containing over 200 applications ready to deploy with one click. Think Docker and Kubernetes, but without needing to touch a terminal unless you want to. The interface looks clean, almost suspiciously clean, like someone finally asked what would happen if you gave a NAS the polish of an iPhone. You get a unified account system so all your apps share a single login, configurable multi factor authentication, enterprise grade sandboxing for third party apps, and Tailscale integration that lets you access your Olares box securely from anywhere in the world. Your data stays on your hardware, full stop.

I have been tinkering with local LLMs for the past year, and the setup has always been the worst part. You spend hours wrestling with CUDA drivers, Python environments, and obscure GitHub repos just to get a model running. Then you realize you need a different frontend for image generation and another tool for managing multiple models, and suddenly you have seven terminal windows open and nothing talks to anything else. Olares solves that friction by bundling everything into a coherent ecosystem. Chat agents like Open WebUI and Lobe Chat, general agents like Suna and OWL, AI search with Perplexica and SearXNG, coding assistants like Void, design agents like Denpot, deep research tools like DeerFlow, task automation with n8n and Dify. Local LLM runtimes include Ollama, vLLM, and SGLang. You also get observability tools like Grafana, Prometheus, and Langfuse so you can actually monitor what your models are doing. The philosophy is simple. String together workflows that feel as fluid as using a cloud service, except everything runs on metal you control.
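As a rough sketch of what “metal you control” means in practice, here is the kind of call those bundled tools make under the hood, assuming Ollama is installed locally and a model has already been pulled (for example with `ollama pull llama3`). No cloud account, no per-token billing, nothing leaves the machine.

```python
# Minimal local-inference sketch: talk to an Ollama server running on the same
# machine. Assumes Ollama is installed and a model named "llama3" was pulled.
import requests

def local_chat(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",   # Ollama's default local endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(local_chat("In one sentence, what is retrieval-augmented generation?"))
```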

Gaming on this thing is a legitimate use case, which feels almost incidental given the AI focus but makes total sense once you look at the hardware. That RTX 5090 Mobile with 24 GB of VRAM and 175 watts of power can handle AAA titles at high settings, and because the machine is designed as a desktop box, you can hook it up to any monitor or TV you want. Olares positions this as a way to turn your Steam library into a personal cloud gaming service. You install your games on the Olares One, then stream them to your phone, tablet, or laptop from anywhere. It is like running your own GeForce Now or Xbox Cloud Gaming, except you own the server and there are no monthly fees eating into your budget. The 2 TB of NVMe storage gives you room for a decent library, and if you need more, the system uses standard M.2 drives, so upgrades are straightforward.

Cooling is borrowed from high end laptops, with a 2.8mm vapor chamber and a 176 layer copper fin array handling heat dissipation across a massive 310,000 square millimeter surface. Two custom 54 blade fans keep everything moving, and the acoustic tuning is genuinely impressive. At idle, the system sits at 19 dB, which is whisper quiet. Under full GPU and CPU load, it climbs to 38.8 dB, quieter than most gaming desktops and even some laptops. Thermal control keeps things stable at 43.8 degrees Celsius under sustained loads, which means you can run inference on a 70B model or render a Blender scene without the fans turning into jet engines. I have used plenty of small form factor PCs that sound like they are preparing for liftoff the moment you ask them to do anything demanding, so this is a welcome change.

RAGFlow and AnythingLLM handle retrieval-augmented generation, which lets you feed your own documents, notes, and files into your AI models so they can answer questions about your specific data. Wise and Files manage your media and documents, all searchable and indexed locally. There is a “digital secret garden” feature, an AI-powered, local-first reader for articles and research, with third-party integrations so you can pull in content from RSS feeds or save articles for later. The configuration hub lets you manage storage, backups, network settings, and app deployments without touching config files, and there is a full Kubernetes console if you want to go deep. The no-CLI Kubernetes interface is a big deal for people who want the power of container orchestration but do not want to memorize kubectl commands. You get centralized control, performance monitoring at a glance, and the ability to spin up or tear down services in seconds.
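To make the retrieval-augmented generation idea concrete, here is a toy sketch of the pattern RAGFlow and AnythingLLM implement: score your local documents against a question, then stuff the best match into the model’s prompt. The word-overlap scorer and sample documents are deliberately naive placeholders; real tools use embeddings and a vector index.

```python
# Toy RAG sketch: retrieve the most relevant local document, then build a
# grounded prompt from it. Documents and scoring are illustrative placeholders.
import re

DOCS = {
    "warranty.txt": "The desktop's warranty covers hardware defects for two years.",
    "notes.txt": "Meeting notes: ship the prototype review by Friday.",
}

def tokens(s: str) -> set:
    return set(re.findall(r"[a-z']+", s.lower()))

def score(question: str, text: str) -> int:
    # Naive relevance: count shared words. Real systems use vector embeddings.
    return len(tokens(question) & tokens(text))

def build_prompt(question: str) -> str:
    best = max(DOCS, key=lambda name: score(question, DOCS[name]))
    return (f"Context from {best}:\n{DOCS[best]}\n\n"
            f"Question: {question}\nAnswer using only the context above.")

print(build_prompt("How long is the warranty?"))
```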

Olares makes a blunt economic argument. If you are using Midjourney, Runway, ChatGPT Pro, and Manus for creative work, you are probably spending around $6,456 per year per user. For a five-person team, that balloons to $32,280 annually. Olares One costs $2,899 for the hardware at the early-bird price ($3,999 at MSRP), which breaks down to roughly $16 per month per user over three years if you split it across a five-person team, or about $22 at full price. Your data stays private, stored locally on your own hardware instead of floating through someone else’s data center. You get a unified hub of over 200 apps with one-click installs, so there are no fragmented tools or inconsistent experiences. Performance is fast and reliable, even when you are offline, because everything runs on device. You own the infrastructure, which means unconditional and sovereign control over your tools and data. The rented AI stack leaves you as a tenant with conditional and revocable access.
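For transparency, the arithmetic behind those figures is spelled out below; the three-year amortization window and five-person team are the assumptions used above, not anything baked into Olares’ pricing.

```python
# Back-of-envelope cost comparison using the figures quoted above.
cloud_per_user_per_year = 6456                      # Midjourney + Runway + ChatGPT Pro + Manus
team_cloud_per_year = 5 * cloud_per_user_per_year   # 32,280 for a five-person team

early_bird, msrp = 2899, 3999
per_user_month_early = early_bird / (5 * 36)        # ~ $16.1/month over three years
per_user_month_msrp = msrp / (5 * 36)               # ~ $22.2/month at full price

print(team_cloud_per_year, round(per_user_month_early, 2), round(per_user_month_msrp, 2))
```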

Ports include Thunderbolt 5, RJ45 Ethernet at 2.5 Gbps, USB A, and HDMI 2.1, plus Wi-Fi 7 and Bluetooth 5.4 for wireless connectivity. The industrial design leans heavily into the golden ratio aesthetic, with smooth curves and a matte aluminum finish that would not look out of place next to a high end monitor or a piece of studio equipment. It feels like someone took the guts of a $4,000 gaming laptop, stripped out the compromises of portability, and optimized everything for sustained performance and quietness. The result is a machine that can handle creative work, AI experimentation, gaming, and personal cloud duties without breaking a sweat or your eardrums.

Olares One is available now on Kickstarter, with units expected to ship early next year. The base configuration with the RTX 5090 Mobile, Intel Core Ultra 9 275HX, 96 GB RAM, and 2 TB SSD is priced at a discounted $2,899 for early-bird backers (MSRP $3,999). That still is a substantial upfront cost, but when you compare it to the ongoing expense of cloud AI subscriptions and the privacy compromises that come with them, the math starts to make sense. You pay once, and the machine is yours. No throttling, no price hikes, no terms of service updates that quietly change what the company can do with your data. If you have been looking for a way to bring AI home without sacrificing capability or convenience, this is probably the most polished attempt at that idea so far.

Click Here to Buy Now: $2,899 $3,999 (28% off) Hurry! Only 15/320 units left!

The post This $2,899 Desktop AI Computer With RTX 5090M Lets You Cancel Every AI Subscription Forever first appeared on Yanko Design.

How AI Will Be Different at CES 2026: On‑Device Processing and Actual Agentic Productivity

Last year, every other product at CES had a chatbot slapped onto it. Your TV could talk. Your fridge could answer trivia. Your laptop had a sidebar that would summarize your emails if you asked nicely. It was novel for about five minutes, then it became background noise. The whole “AI revolution” at CES 2024 and 2025 felt like a tech industry inside joke: everyone knew it was mostly marketing, but nobody wanted to be the one company without an AI sticker on the booth.

CES 2026 is shaping up differently. Coverage ahead of the show is already calling this the year AI stops being a feature you demo and starts being infrastructure you depend on. The shift is twofold: AI is moving from the cloud onto the device itself, and it is evolving from passive assistants that answer questions into agentic systems that take action on your behalf. Intel has confirmed it will introduce Panther Lake CPUs, AMD CEO Lisa Su is headlining the opening keynote with expectations around a Ryzen 7 9850X3D reveal, and Nvidia is rumored to be prepping an RTX 50 “Super” refresh. The silicon wars are heating up precisely because the companies making chips know that on-device AI is the only way this whole category becomes more than hype. If your gadget still depends entirely on a server farm to do anything interesting, it is already obsolete. Here’s what to expect at CES 2026… but more importantly, what to expect from AI in the near future.

Your laptop is finally becoming the thing running the models

Intel, AMD, and Nvidia are all using CES 2026 as a launching pad for next-generation silicon built around AI workloads. Intel has publicly committed to unveiling its Panther Lake CPUs at the show, chips designed with dedicated neural processing units baked in. AMD’s Lisa Su is doing the opening keynote, with strong buzz around a Ryzen 7 9850X3D that would appeal to gamers and creators who want local AI performance without sacrificing frame rates or render times. Nvidia’s press conference is rumored to focus on RTX 50 “Super” cards that push both graphics and AI inference into new territory. The pitch is straightforward: your next laptop or desktop is not a dumb terminal for ChatGPT; it is the machine actually running the models.

What does that look like in practice? Laptops at CES 2026 will be demoing live transcription and translation that happens entirely on the device, no cloud round trip required. You will see systems that can summarize browser tabs, rewrite documents, and handle background removal on video calls without sending a single frame to a server. Coverage is already predicting a big push toward on-device processing specifically to keep your data private and reduce reliance on cloud infrastructure. For gamers, the story is about AI upscaling and frame generation becoming table stakes, with new GPUs sold not just on raw FPS but on how quickly they can run local AI tools for modding, NPC dialogue generation, or streaming overlays. This is the year “AI PC” might finally mean something beyond a sticker.

Agentic AI is the difference between a chatbot and a butler

Pre-show coverage is leaning heavily on the phrase “agentic AI,” and it is worth understanding what that actually means. Traditional AI assistants answer questions: you ask for the weather, you get the weather. Agentic AI takes goals and executes multi-step workflows to achieve them. Observers expect to see devices at CES 2026 that do not just plan a trip but actually book the flights and reserve the tables, acting on your behalf with minimal supervision. The technical foundation for this is a combination of on-device models that understand context and cloud-based orchestration layers that can touch APIs, but the user experience is what matters: you stop micromanaging and start delegating.
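To make that concrete, here is a toy sketch of the pattern in code. The tool functions and the fixed plan are hypothetical stand-ins rather than any vendor’s real API; the point is simply that an agent chains actions toward a goal, where a chatbot only produces an answer.

```python
# Hypothetical tools an agent might call. In a real system these would hit
# live booking and reservation APIs; here they just return canned results.
def search_flights(route):
    return [{"flight": "XY123", "route": route, "price": 480},
            {"flight": "XY456", "route": route, "price": 420}]

def book_flight(option):
    return f"Booked {option['flight']} for ${option['price']}"

def reserve_table(city, evening):
    return f"Table reserved in {city} on {evening}"

def chatbot(question):
    # A chatbot answers the question and stops there.
    return "The cheapest flight I can find is around $420."

def agent(goal):
    # An agent executes a multi-step workflow on the user's behalf:
    # search, pick the best option, book it, then handle the follow-up task.
    options = search_flights(goal["route"])
    cheapest = min(options, key=lambda o: o["price"])
    return [book_flight(cheapest), reserve_table(goal["city"], goal["evening"])]

print(chatbot("How much is a flight to Las Vegas?"))
print(agent({"route": "SFO-LAS", "city": "Las Vegas", "evening": "Jan 6"}))
```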

Samsung is bringing its largest CES exhibit to date, merging home appliances, TVs, and smart home products into one massive space with AI and interoperability as the core message. Imagine a fridge, washer, TV, robot vacuum, and phone all coordinated by the same AI layer. The system notices you cooked something smoky, runs the air purifier a bit harder, and pushes a recipe suggestion based on leftovers. Your washer pings the TV when a cycle finishes, and the TV pauses your show at a natural break. None of this requires you to open an app or issue voice commands; the devices are just quietly making decisions based on context. That is the agentic promise, and CES 2026 is where companies will either prove they can deliver it or expose themselves as still stuck in the chatbot era.

Robot vacuums are the first agentic AI success story you can actually buy

CES 2026 is being framed by dedicated floorcare coverage as one of the most important years yet for robot vacuums and AI-powered home cleaning, with multiple brands receiving Innovation Awards and planning major product launches. This category quietly became the testing ground for agentic AI years before most people started using the phrase. Your robot vacuum already maps your home, plans routes, decides when to spot-clean high-traffic areas, schedules deep cleans when you are away, and increasingly maintains itself by emptying dust and washing its own mop pads. It does all of this with minimal cloud dependency; the brains are on the bot.

LG has already won a CES 2026 Innovation Award for a robot vacuum with a built-in station that hides inside an existing cabinet cavity, turning floorcare into an invisible, fully hands-free system. Ecovacs is previewing the Deebot X11 OmniCyclone as a CES 2026 Innovation Awards Honoree and promising its most ambitious lineup to date, pushing into whole-home robotics that go beyond vacuuming. Robotin is demoing the R2, a modular robot that combines autonomous vacuuming with automated carpet washing, moving from daily crumb patrol to actual deep cleaning. These bots are starting to integrate with broader smart home ecosystems, coordinating with your smart lock, thermostat, and calendar to figure out when you are home, when kids are asleep, and when the dog is outside. The robot vacuum category is proof that agentic AI can work in the real world, and CES 2026 is where other product categories are going to try to catch up.

TVs are getting Micro RGB panels and AI brains that learn your taste

LG has teased its first Micro RGB TV ahead of CES 2026, positioning it as the kind of screen that could make OLED owners feel jealous thanks to advantages in brightness, color control, and longevity. Transparent OLED panels are also making appearances in industrial contexts, like concept displays inside construction machinery cabins, hinting at similar tech eventually showing up in living rooms as disappearing TVs or glass partitions that become screens on demand. The hardware story is always important at CES, but the AI layer is where things get interesting for everyday use.

TV makers are layering AI on top of their panels in ways that go beyond simple upscaling. Expect personalized picture and sound profiles that learn your room conditions, content preferences, and viewing habits over time. The pitch is that your TV will automatically switch to low-latency gaming mode when it recognizes you launched a console, dim your smart lights when a movie starts, and adjust color temperature based on ambient light without you touching a remote. Some of this is genuine machine learning happening on-device, and some of it is still marketing spin on basic presets. The challenge for readers at CES 2026 will be figuring out which is which, but the direction is clear: TVs are positioning themselves as smart hubs that coordinate your living room, not just dumb displays waiting for HDMI input.

Gaming gear is wiring itself for AI rendering and 500 Hz dreams

HDMI Licensing Administrator is using CES 2026 to spotlight advanced HDMI gaming technologies with live demos focused on very high refresh rates and next-gen console and PC connectivity. Early prototypes of the Ultra96 HDMI cable, part of the new HDMI 2.2 specification, will be on display with the promise of higher bandwidth to support extreme refresh rates and resolutions. Picture a rig on the show floor: a 500 Hz gaming monitor, next-gen GPU, HDMI 2.2 cable, running an esports title at absurd frame rates with variable refresh rate and minimal latency. It is the kind of setup that makes Reddit threads explode.

GPUs are increasingly sold not just on raw FPS but on AI capabilities. AI upscaling like DLSS is already table stakes, but local AI is also powering streaming tools for background removal, audio cleanup, live captions, and even dynamic NPC dialogue in future games that require on-device inference rather than server-side processing. Nvidia’s rumored RTX 50 “Super” refresh is expected to double down on this positioning, selling the cards as both graphics and AI accelerators. For gamers and streamers, CES 2026 is where the industry will make the case that your rig needs to be built for AI workloads, not just prettier pixels. The infrastructure layer, cables and monitors included, is catching up to match that ambition.

What CES 2026 really tells us about where AI is going

The shift from cloud-dependent assistants to on-device agents is not just a technical upgrade; it is a fundamental change in how gadgets are designed and sold. When Intel, AMD, and Nvidia are all racing to build chips with dedicated AI accelerators, and when Samsung is reorganizing its entire CES exhibit around AI interoperability, the message is clear: companies are betting that local intelligence and cross-device coordination are the only paths forward. The chatbot era served its purpose as a proof of concept, but CES 2026 is where the industry starts delivering products that can think, act, and coordinate without constant cloud supervision.

What makes this year different from the past two is that the infrastructure is finally in place. The silicon can handle real-time inference. The software frameworks for agentic behavior are maturing. Robot vacuums are proving the model works at scale. TVs and smart home ecosystems are learning how to talk to each other without requiring users to become IT managers. The pieces are connecting, and CES 2026 is the first major event where you can see the whole system starting to work as one layer instead of a collection of isolated features.

The real question is what happens after the demos

Trade shows are designed to impress, and CES 2026 will have no shortage of polished demos where everything works perfectly. The real test comes in the six months after the show, when these products ship and people start using them in messy, real-world conditions. Does your AI PC actually keep your data private when it runs models locally, or does it still phone home for half its features? Does your smart home coordinate smoothly when you add devices from different brands, or does it fall apart the moment something breaks the script? Do robot vacuums handle the chaos of actual homes, or do they only shine in controlled environments?

The companies that win in 2026 and beyond will be the ones that designed their AI systems to handle failure, ambiguity, and the unpredictable messiness of how people actually live. CES 2026 is where you will see the roadmap. The year after is where you will see who actually built the roads. If you are walking the show floor or following the coverage, the most important question is not “what can this do in a demo,” but “what happens when it breaks, goes offline, or encounters something it was not trained for.” That is where the gap between real agentic AI and rebranded presets will become impossible to hide.

The post How AI Will Be Different at CES 2026: On‑Device Processing and Actual Agentic Productivity first appeared on Yanko Design.

AI-powered headphones for private conversations even in the most crowded places

We’ve come a long way when it comes to noise isolation in headphones and earbuds. The Active Noise Cancellation technology in current-generation audio accessories has reached a level where ANC adapts to the ambient noise environment. A handful of brands even go the distance and switch on transparency mode automatically when someone is talking to you. That’s a novelty, but you’ll still hear the voices of other people nearby if you are in a crowded environment.

That could change with an innovation that aims to eliminate unwanted voices from a conversation. For instance, when you are talking to your pal on the street, you’ll hear only their voice, and everyone else nearby will be muted out. This innovation will not only be helpful as a daily driver, but will also assist people with hearing impairments in hearing better. The initial prototype, developed by a group of researchers at the University of Washington, is known as the “proactive hearing assistant,” and by isolating only the conversation partner’s voice, it already looks promising.

Designer: University of Washington

The AI-powered headphones do all the filtering automatically, without any manual input, a potent piece of functionality that current-gen headphones could hugely benefit from. The speech-isolating technology suppresses voices that don’t match the pattern of a turn-taking conversation: the on-board AI model keeps tabs on timing patterns and filters out anything that doesn’t fit. The applications of this tech need not be limited to audio accessories and hearing aids; it could also be integrated into wearables like smart glasses or VR headsets. The most practical use would be in crowded places where you really have to focus on the person you are talking to.

According to senior author Shyam Gollakota, “Our insight is that when we’re conversing with a specific group of people, our speech naturally follows a turn-taking rhythm. And we can train AI to predict and track those rhythms using only audio, without the need for implanting electrodes.” The current prototype supports one wearer and up to four other speakers, which is impressive, more so when you factor in the lag-free overall experience. The team is currently testing two different models: one runs a “who spoke when” check, looking for overlap between speakers and identifying who is talking at any given moment; the second cleans the raw signal and feeds real-time isolated audio to the user. The latter, so far, has scored well with the 11 participants in the study.

Currently, these basic over-ear headphones are loaded with extra microphones, and the team is working on slimming down the design. In parallel with the ongoing research, small chips are being developed to run these AI models so they can fit inside hearing aids or earbuds. So, are we ready for a future where intelligent hearing is part of our daily lives?

 

The post AI-powered headphones for private conversations even in the most crowded places first appeared on Yanko Design.

Sound Maestro Splits Songs Into 4 Speakers You Conduct With a Baton

Most smart speakers are designed to disappear, cylinders and pucks that sit in a corner and wait for voice commands. That is convenient but also a bit dull; you talk, they respond, and the hardware never really asks you to engage with it. Sound Maestro is a concept that goes the other way, imagining a living room as a small orchestra pit you can actually conduct with gestures instead of just tapping a screen.

Sound Maestro is a speaker system inspired by an orchestra conductor, consisting of three core parts: the conductor’s podium, the instruments, and the conductor’s baton. When everything is docked together, it reads as a single object, but each of the four modular speakers can be detached and assigned a different musical part: vocals, drums, bass, or melody, each with its own LED color glowing underneath the grille.

Designer: Geonwoo Kang

The system uses AI to split a track into four stems and send each to a different speaker, so one cube carries the vocal, another the drums, another the bass, and another the melody. The LEDs on each unit glow in a unique color, making it easy to see which part is where. This spatial mapping of sound means the mix becomes something you can see and point at, not just hear as a single stereo image coming from two speakers.
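The concept does not say which separation model it would use, but four-stem splitting is something open-source tools already do. As a rough illustration, Demucs (installable with pip) separates a local audio file into vocals, drums, bass, and an “other” stem that covers melody and everything else, which maps closely to the four parts described here.

```python
# Illustrative only: run the open-source Demucs separator on a local file.
# Assumes `pip install demucs` and a song.mp3 in the working directory; the
# exact output folder depends on the Demucs version and model in use.
import subprocess

subprocess.run(["demucs", "song.mp3"], check=True)
# Typically writes separated/<model>/song/{vocals,drums,bass,other}.wav,
# the kind of stems a system like Sound Maestro would route to its four cubes.
```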

The baton-shaped controller is the main interface. In Maestro Mode, you twist a dial to enter a state where the default buttons are locked, and you control the speakers by pointing and gesturing. A quick left-right wave skips tracks, a slow up-down motion adjusts volume with LED brightness as feedback, and drawing a circle pauses or resumes playback, with all LEDs turning off or on to confirm what just happened.

Remote Control Mode lets the same baton behave more like a traditional remote. You still point it at a specific speaker, but now you press buttons instead of waving. This lets you fine-tune or mute individual units without the full theatricality of Maestro Mode. The two modes together acknowledge that sometimes you want to perform, and sometimes you just want to nudge the volume down on the drums without getting up.

The main speaker takes its form from an orchestra podium and acts as the system’s brain. It handles the main bass that anchors the center and runs the AI that assigns parts to each satellite. A small display shows the current mode, battery levels, and which part each speaker is playing, so you can glance down and see the state of your orchestra without opening an app.


Sound Maestro pokes at the idea that home audio can be more than invisible boxes and playlists. By giving each part of a song its own physical presence and letting you conduct with a baton instead of a touchscreen, it makes listening into a small performance. Whether or not you want to wave a stick in your living room, the idea that a speaker system could ask you to point, gesture, and conduct instead of just pressing play feels like a surprisingly theatrical take on what modular audio might become.

The post Sound Maestro Splits Songs Into 4 Speakers You Conduct With a Baton first appeared on Yanko Design.

Remember “The Ghiblification”? We Treated Ghibli As Disposable Because That’s How We Treat Everything

First, it was cottagecore, filling our feeds with sourdough starters and rustic linen. Then came the sharp, symmetrical pastels of the Wes Anderson trend, followed by a tidal wave of Barbie pink that painted the internet for a summer. Each aesthetic arrived like a weather front, dominating the landscape completely for a short time before vanishing just as quickly, leaving behind only a faint digital echo. They were cultural costumes, tried on for a season and then relegated to the back of the closet.

Into this cycle stepped Studio Ghibli, its decades of patient, handcrafted animation compressed into a one-click selfie generator. The resulting “Ghibli-fication” of our profiles was not a deep engagement with Hayao Miyazaki’s themes of environmentalism and pacifism; it was simply the next costume off the rack. The speed with which we adopted and then abandoned it reveals a difficult truth. Our treatment of Ghibli was a symptom of a much larger cultural pattern, one where even the most profound art is rendered disposable by the internet’s insatiable appetite for the new.

When everything becomes an aesthetic, nothing remains itself

Platforms thrive on legibility. Content needs to be instantly recognizable, easily categorized, and simple enough to reproduce at scale. This creates enormous pressure to reduce complex cultural artifacts into their most surface-level visual markers. A Wes Anderson film becomes “symmetrical shots in pastel.” A hit song from Raye (one that marked her leaving a music label to pursue creative freedom) becomes just a fleeting 20-second TikTok dance about rings on fingers and finding husbands. Ghibli’s intricate storytelling about war, labor, and the natural world gets flattened into “soft colors and big eyes.”

The reduction is not accidental. It is the cost of entry into viral circulation. An aesthetic can only spread if it can be copied quickly, applied broadly, and understood immediately. Nuance, context, and depth are friction. They slow down the sharing, complicate the reproduction, and limit the audience. So they get stripped away, not out of malice, but out of structural necessity. What remains is a shell, a visual shorthand that gestures toward the original without containing any of its substance.

This process turns cultural works into raw material. A film, a book, a philosophical tradition, any of these can be mined for their most photogenic elements and reconfigured into something that fits neatly into a grid post or a TikTok filter. The original becomes less important than the aesthetic it can generate. Once the aesthetic stops performing well in terms of engagement metrics, the entire package gets discarded. The algorithm does not care about preservation or reverence. It cares about what is getting clicks and views today.

The appetite that cannot be satisfied

Social media platforms are built around a fundamental economic problem: they need to hold attention, but attention is finite and easily exhausted. The solution is constant novelty. If users get bored, they leave. If they leave, ad revenue drops. So the feed must always be serving something new, something that feels fresh enough to justify another scroll, another click, another few seconds of eyeball time.

This creates a culture of planned obsolescence for aesthetics. A look can only stay interesting for so long before it becomes familiar, then oversaturated, then tiresome. At that point, it has to be replaced. The cycle repeats endlessly, chewing through visual languages, artistic movements, and cultural traditions at a pace that would have been unthinkable even twenty years ago. What took decades to develop can be extracted, popularized, and discarded in a matter of weeks.

The speed of this churn has consequences. It trains us to engage with culture in a particular way: superficially, briefly, and without much attachment. We learn to skim surfaces rather than dig into depths. We participate in trends not because they resonate with us personally, but because participation itself is the point (the ice bucket challenge boosted ALS awareness for precisely 6 months). Being part of the moment, being visible within the current aesthetic wave, these become more valuable than any lasting connection to the work that aesthetic is borrowed from.

What sticks when the wave recedes

The irony is that while trends are disposable, the works they feed on often are not. Ghibli films continue to be watched, analyzed, and loved by new audiences long after the selfie filters have been forgotten. Wes Anderson’s movies did not become less meaningful because people used his color palettes for Instagram posts. The underlying art survives because it contains something that cannot be reduced to a visual shorthand.

What separates durable culture from disposable trends is substance that exceeds its surface. A Ghibli film rewards attention over time. The more you watch, the more you notice: the way labor is animated with dignity, the long quiet stretches that mirror real life’s pace, the refusal to offer simple moral answers. None of that fits in a filter. None of that can be mass-produced. It requires the viewer to bring time, focus, and openness to complexity.

This is what the trend cycle cannot replicate. It can borrow the look, but it cannot borrow the experience. It can create a momentary association with the aesthetic, but it cannot create the slow, layered engagement that builds lasting attachment. So the original work persists beneath the churn, waiting for the people who want more than a costume, who are looking for something to return to rather than something to discard.

Resisting the rhythm of disposability

Recognizing this pattern is not the same as escaping it. We are all embedded in systems that reward rapid consumption and constant novelty. The feed is designed to keep us moving, to prevent us from lingering too long on any one thing. Resisting that rhythm requires deliberate effort, a conscious choice to slow down when everything around us is accelerating.

That resistance can look small and personal: rewatching a film instead of merely watching a snippet of it on YouTube Shorts, reading longform essays instead of liking someone’s reel about it, spending time with art that does not immediately reveal itself. If anything, the pandemic allowed us to spend days culturing sourdough starter so we could bake our bread. The curfew ended and sourdough became a distant memory… but for those 6 months, we actually indulged in immersion. These acts do not change the structure of the platforms, but they change our relationship to culture. They create space for depth in an environment optimized for surface.

The broader question is whether we can build cultural spaces that do not treat everything as disposable. Platforms will not do this on their own; their incentives run in the opposite direction. But audiences, creators, and critics can push back by valuing longevity over virality, by rewarding substance over aesthetic repackaging, by choosing to engage with work in ways that cannot be reduced to a trend cycle.

Ghibli survived its moment as a disposable aesthetic because it was never fully captured by it. The films remain too slow, too strange, too resistant to easy consumption. They stand as a reminder that some things are built to last, even in an environment designed to make everything temporary. The real work is recognizing that difference and choosing to treat what matters accordingly.

The post Remember “The Ghiblification”? We Treated Ghibli As Disposable Because That’s How We Treat Everything first appeared on Yanko Design.

JBL’s AI Wireless Speakers Can Remove Vocals, Guitars, or Drums From Any Song While You’re Jamming

Walk into any rehearsal space and you will see the usual suspects. A combo amp in the corner, a Bluetooth speaker on a shelf, maybe a looper pedal on the floor. Each tool has a single job. One makes your guitar louder, one plays songs, one repeats whatever you feed it. You juggle them to build something that feels like a band around you.

JBL’s BandBox concept asks a different question: what if one box could understand the music it is playing and reorganize it around you in real time? The Solo and Trio units use AI to separate vocals, guitars, and drums inside finished tracks, so you can mute, isolate, or replace parts on the fly. Suddenly the speaker is not just a playback device. It becomes the drummer who never rushes, the backing guitarist who never complains, and the invisible producer nudging you toward tighter practice.

Designer: JBL

This ability to deconstruct any song streamed via Bluetooth is the core of the BandBox experience. The AI stem processing happens locally, inside the unit, without needing an internet connection or a cloud service. You can pull up a track, instantly mute the original guitar part, and then step in to play it yourself over the remaining bass, drums, and vocals. This is a fundamental shift in how musicians can practice. Instead of fighting for space in a dense mix, you create a pocket for yourself, turning passive listening into an interactive rehearsal.

The whole system is self-contained, designed to work straight out of the box without a pile of extra gear. Both models come equipped with a selection of built-in amplifier models and effects, so you can shape your tone directly on the unit. Essentials like a tuner and a looper are also integrated, which streamlines the creative process. You can lay down a rhythm part, loop it, and then practice soloing over it without ever touching an external pedal. It is this thoughtful integration that makes the BandBox feel less like a speaker and more like a complete, portable music-making environment.

The BandBox Solo is the most focused version of this idea, built for the individual. It is a compact, easily carried device with a single combo input that accepts either a guitar or a microphone. This makes it an obvious choice for singer-songwriters or any musician practicing alone. The form factor is all about convenience, with a solid build and a top-mounted handle. A battery life of around six hours means you could take it to a park for an afternoon busking session or just move it around the house without being tethered to a wall outlet. It is a self-sufficient creative station in a small package.

When practice involves more than one person, the BandBox Trio provides the necessary expansion. It is built on the same AI-powered platform but scales up the hardware for group use. The most significant change is the inclusion of four instrument inputs, which transforms the unit into a miniature, portable PA system. A small band or a duo can plug in multiple guitars, a bass, and a microphone, all running through the same box. This is a clever solution for impromptu jam sessions, stripped-down rehearsals, or music classrooms where setting up a full mixer and multiple amps is too cumbersome.

Both units share a clean, modern design that aligns with JBL’s broader product family. The controls seem to be laid out for quick, intuitive access, a must for musicians who need to make adjustments without interrupting their flow. Connectivity extends beyond just playing music; a USB-C port allows the BandBox to double as an audio interface. You can connect it directly to a computer or tablet to record your sessions or lay down a demo, adding a layer of studio utility that makes the device even more versatile. It is not just for practice, it is for capturing the ideas that come from it.

Of course, none of this would matter if the sound were not up to par. JBL’s reputation in audio engineering creates a high expectation, and the BandBox aims to meet it by delivering a full-range sound that can handle both a dynamic instrument and a complex backing track simultaneously. The goal is to provide a clear, responsive guitar tone that cuts through, while the underlying track remains rich and detailed. This dual functionality is key, ensuring the unit works just as well as a high-quality Bluetooth speaker for casual listening as it does as a dedicated practice amp.

The JBL BandBox series has started its rollout in Southeast Asian markets, with promotions and availability already noted in the Philippines and Malaysia. A wider international release is expected to follow. While pricing will vary by region, the BandBox Solo appears to be positioned competitively against other popular smart amps on the market. The Trio, with its expanded inputs and group-oriented features, will naturally sit at a higher price point, offering a unique proposition as an all-in-one portable rehearsal hub.

The post JBL’s AI Wireless Speakers Can Remove Vocals, Guitars, or Drums From Any Song While You’re Jamming first appeared on Yanko Design.

TWS Earbuds With Built-In Cameras Put ChatGPT’s AI Capabilities In Your Ears

Everyone is racing to build the next great AI gadget. Some companies are betting on smartglasses, others on pins and pocket companions. All of them promise an assistant that can see, hear, and understand the world around you. Very few ask a simpler question. What if the smartest AI hardware is just a better pair of earbuds?

This concept imagines TWS earbuds with a twist. Each bud carries an extra stem with a built-in camera, positioned close to your natural line of sight. Paired with ChatGPT, those lenses become a constant visual feed for an assistant that lives in your ears. It can read menus, interpret signs, describe scenes, and guide you through a city without a screen. The form factor stays familiar, the capabilities feel new. If OpenAI wants a hardware foothold, this is the kind of product that could make AI feel less like a demo and more like a daily habit. Here’s why a camera in your ear might beat a camera on your face.

Designer: Emil Lukas

The industrial design has a sort of sci-fi inhaler vibe that I weirdly like. The lens sits at the end of the stem like a tiny action cam, surrounded by a ring that doubles as a visual accent. It looks deliberate rather than tacked on, which matters when you are literally hanging optics off your head. The colored shells and translucent tips keep it playful enough that it still reads as audio gear first, camera second.

The cutaway render looks genuinely fascinating. You can see a proper lens stack, a sensor, and a compact board that would likely host an ISP and a Bluetooth SoC. That is a lot of silicon inside something that still has to fit a driver, battery, microphones, and antennas. Realistically, any heavy lifting for vision and language goes straight to the phone and then to the cloud. On-device compute at that scale would murder both battery and comfort.

All that visual data has to be processed somewhere, and it is not happening inside the earbud. On-device processing for GPT-4 level vision would turn your ear canal into a hotplate. This means the buds are basically streaming video to your phone for the heavy lifting. That introduces latency. A 200 millisecond delay is one thing; a two second lag is another. People tolerate waiting for a chatbot response at their desk. They will absolutely not tolerate that delay when they ask their “AI eyes” a simple question like “which gate am I at?”
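To put rough numbers on that worry, here is an illustrative round-trip budget for a single “which gate am I at?” query. Every figure below is an assumption made for the sake of argument, not a measurement of this concept or any shipping product.

```python
# Illustrative latency budget for one cloud-backed visual query.
# All numbers are assumptions, not measurements.
budget_ms = {
    "capture and encode a frame on the bud": 50,
    "Bluetooth hop to the phone": 100,
    "upload over mobile data": 300,
    "cloud vision + language inference": 1200,
    "response back + text-to-speech": 350,
}

total = sum(budget_ms.values())
print(f"estimated round trip: {total} ms")  # ~2000 ms
```

Even with fairly generous assumptions, the budget lands much closer to the two-second end of the scale than the 200-millisecond one.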

Then there is the battery life, which is the elephant in the room. Standard TWS buds manage around five to seven hours of audio playback. Adding a camera, an image signal processor, and a constant radio transmission for video will absolutely demolish that runtime. Camera-equipped wearables like the Ray-Ban Meta glasses get about four hours of mixed use, and those have significantly more volume to pack in batteries. These concept buds look bulky, but they are still tiny compared to a pair of frames.

The practical result is that these would not be all-day companions in their current form. You are likely looking at two or three hours of real-world use before they are completely dead, and that is being generous. This works for specific, short-term tasks, like navigating a museum or getting through an airport. It completely breaks the established user behavior of having earbuds that last through a full workday of calls and music. The utility would have to be incredibly high to justify that kind of battery trade-off.
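The back-of-the-envelope math behind that estimate is simple. The figures below are illustrative assumptions rather than published specs, but they show how quickly a tiny earbud cell gets eaten once you add optics and a constant video stream.

```python
# Rough battery budget for a camera-equipped earbud.
# All figures are illustrative assumptions, not specs for this concept.
capacity_mwh = 60 * 3.7        # assume a ~60 mAh cell at 3.7 V, about 222 mWh per bud

audio_only_mw = 35             # assumed draw for playback plus Bluetooth audio
camera_extra_mw = 65           # assumed camera sensor, ISP, and video-streaming overhead

print(capacity_mwh / audio_only_mw)                       # ~6.3 hours, typical TWS territory
print(capacity_mwh / (audio_only_mw + camera_extra_mw))   # ~2.2 hours with the camera running
```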

From a social perspective, the design is surprisingly clever. Smartglasses failed partly because the forward-facing camera made everyone around you feel like they were being recorded. An earbud camera might just sneak under the radar. People are already accustomed to stems sticking out of ears, so this form factor could easily be mistaken for a quirky design choice rather than a surveillance device. It is less overtly aggressive than a lens pointed from the bridge of your nose, which could lower social friction considerably.

The cynical part of me wonders about the field of view. Ear level is better than chest level, but your ears do not track your gaze. If you are looking down at your phone while walking, those cameras are still pointed forward at the horizon. You would need either a very wide-angle lens, which introduces distortion and eats processing power for correction, or you would need to train yourself to move your whole head like you are wearing a VR headset. Neither is ideal, but both are solvable with enough iteration. What you get in return is an AI that can actually participate in your environment instead of waiting for you to pull out your phone and aim it at something. That shift from reactive to ambient is the entire value proposition, and it only works if the cameras are always in position and always ready.

The post TWS Earbuds With Built-In Cameras Put ChatGPT’s AI Capabilities In Your Ears first appeared on Yanko Design.

Stickerbox: Kids Say an Idea, AI Prints It as a Sticker in Seconds

Smart speakers for kids feel like a gamble most parents would rather skip. The promise is educational content and hands-free help, but the reality often involves screens lighting up at bedtime, algorithms deciding what comes next, and a lingering suspicion that someone is cataloging every question your child shouts into the room. The tension between letting kids explore technology and protecting their attention spans has never felt sharper, and most connected toys lean heavily toward the former without much restraint.

Stickerbox by Hapiko offers a quieter trade. It looks like a bright red cube, measures 3.75 inches on each side, and does one thing when you press its white button. Kids speak an idea out loud, a dragon made of clouds or a broccoli superhero, and the box prints it as a black-and-white sticker within seconds. The interaction feels less like talking to Alexa and more like whispering to a magic printer that happens to understand imagination.

Designer: Hapiko

The design stays deliberately simple. A small screen shows prompts like “press to talk,” while a large white button sits below, easy for small hands to press confidently. Stickers emerge from a slot at the top, fed by thermal paper rolls. The starter bundle includes three BPA-free paper rolls, eight colored pencils, and a wall adapter, turning the cube into a complete creative kit rather than just another gadget waiting for accessory purchases to feel useful.

The magic happens in three beats. A kid presses the button and speaks their prompt, as silly or specific as they want. The box sends audio over Wi-Fi to a generative AI model that turns phrases into line art. Within seconds, a thermal printer traces the image onto sticker paper, and the finished piece emerges from the top, ready to be torn, peeled, and stuck onto notebooks, walls, or comic book pages at home.
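Hapiko has not published its stack, but those three beats map neatly onto off-the-shelf pieces. Here is a hedged sketch of what a minimal version could look like; the model names, the content filter, and the printer IDs are all assumptions for illustration, not what Stickerbox actually runs.

```python
# Sketch of a press-to-print pipeline: transcribe, filter, generate, print.
# Models, filter, and USB IDs are assumptions, not Hapiko's actual implementation.
import base64, io
from openai import OpenAI
from escpos.printer import Usb
from PIL import Image

client = OpenAI()

# 1. The button press records a short clip, which gets transcribed
with open("prompt.wav", "rb") as audio:
    text = client.audio.transcriptions.create(model="whisper-1", file=audio).text

# 2. Block inappropriate requests before they reach the image generator
if client.moderations.create(input=text).results[0].flagged:
    raise ValueError("prompt rejected by content filter")

# 3. Turn the phrase into simple black-and-white line art
result = client.images.generate(
    model="dall-e-3",
    prompt=f"simple black and white line art sticker of {text}",
    response_format="b64_json",
)
sticker = Image.open(io.BytesIO(base64.b64decode(result.data[0].b64_json)))

# 4. Rasterize to a thermal printer (USB vendor/product IDs are placeholders)
printer = Usb(0x0416, 0x5011)
printer.image(sticker.convert("1"))   # python-escpos accepts a PIL image
printer.cut()
```

The real product keeps all of this behind one button and one slot, which is the point: the pipeline is ordinary, and the packaging is what makes it feel like magic.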

What keeps this from feeling like surveillance is the scaffolding Hapiko built around the AI. The microphone only listens when the button is pressed, so there’s no ambient eavesdropping happening in the background. Every prompt runs through filters designed to block inappropriate requests before reaching the image generator. Voice recordings are processed and discarded immediately, not stored for training. The system is kidSAFE COPPA certified, meaning it passed third-party audits for data handling and child privacy standards.

Thermal printing sidesteps ink cartridge mess entirely. Each paper roll holds material for roughly sixty stickers, and refill packs of three cost six dollars. The catch is that Stickerbox only accepts its own branded paper; using generic rolls will damage the mechanism. The bigger design choice is that every sticker is printed in monochrome, which is intentional. It forces kids to pick up pencils and spend time coloring, turning a quick AI trick into a slower, more tactile ritual.
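Running the consumable numbers above gives a rough per-sticker cost (approximate, not an official Hapiko figure):

```python
# Quick per-sticker cost from the figures above (rounded, not an official number)
rolls_per_pack = 3
stickers_per_roll = 60      # "roughly sixty" per roll
pack_price_usd = 6.00

cost = pack_price_usd / (rolls_per_pack * stickers_per_roll)
print(f"about {cost * 100:.1f} cents per sticker")   # ~3.3 cents
```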

Stickerbox gestures toward a version of AI-infused play that feels less anxious. The algorithm works quietly, translating spoken prompts into something kids can hold, cut, and trade, but the most important part happens after the sticker prints. It ends up taped inside homemade comic books, stuck on bedroom doors, or colored during rainy afternoons. The box becomes forgettable infrastructure, which might be the kindest thing you can say about a piece of children’s technology designed for creative independence.

The post Stickerbox: Kids Say an Idea, AI Prints It as a Sticker in Seconds first appeared on Yanko Design.