Antigravity A1 Review: Reimagining What a Drone Feels Like to Fly

PROS:

  • Unique immersive experience with vision goggles

  • 8K 360 capture with post-flight reframing

  • Intuitive one-hand grip controller and automated modes lower the skill barrier

CONS:

  • Several pieces to carry and manage: drone, goggles, and controller

  • First-time setup and learning curve can feel overwhelming

  • Visual observer requirements in places like the U.S. limit solo flying

RATINGS:

  • Aesthetics

  • Ergonomics

  • Performance

  • Sustainability / Repairability

  • Value for Money

EDITOR'S QUOTE:

Antigravity A1 turns flying a drone into a new point of view, and once you are inside it, the experience feels hard to put a price on.

Antigravity is Insta360’s bold experiment in what happens when a 360‑camera company stops thinking only about the camera and starts redesigning the entire act of flying. It is an independent drone brand, incubated by Insta360, built on the same obsession with immersive imaging and playful storytelling, but free to rethink the aircraft, the controls, and the viewing experience as one coherent object. Instead of asking how to strap a 360 camera onto a drone, Antigravity asks how to make the whole system feel like a natural extension of your point of view.

Antigravity A1 is the first expression of that idea. It is a compact 8K 360 drone that arrives as a complete kit, with Vision goggles and a single‑hand Grip controller that you steer with subtle tilts and gestures. You do not fly it by staring at a phone and juggling twin sticks. You put on the goggles, step into a 360‑degree bubble of imagery, and guide the drone by moving your hand in the direction you want to travel. So what is the Antigravity A1 actually like to fly? We tested it to find out.

Designer: Antigravity

Aesthetics

Antigravity A1 presents itself more as a system than a single object. There is the compact drone with its dual cameras, the Vision goggles, and the one‑hand Grip controller. Visually, the aircraft itself is quite understated. Aside from the two opposing lenses and the leg that shields the lower camera on the ground, it looks like a neat, functional quadcopter. The drama is reserved for what the system does, not how the airframe shouts for attention.

The Vision goggles lean into an almost character-like, even bug-like look, especially when you fold up the black antennas on each side that resemble insect feelers. The front shell is white with two large, dark circular eyes, giving the whole front a slightly cartoonish face. In between and just above those eyes sits an inverted triangle-shaped grille with a subtle Antigravity logo, adding a small technical accent without breaking the simplicity. The fabric strap and thick face padding sit behind this front mask. Wearing the goggles does look strange at first, but in an oddly cool way.

The Grip motion controller has a white plastic shell with buttons and a dial that use color and icon cues to hint at their functions. On the back, a black trigger-style pull bar sits where your index finger naturally rests. There are additional buttons on each side. The mix of white body, black accents, and clearly marked controls makes the Grip look approachable rather than intimidating, which suits a controller that is meant to translate simple hand movements into flight.

Overall, the drone, goggles, and controller share a cohesive design language. They all use the same soft white shell, black accents, and gently rounded forms. The whole kit feels like a single, intentional system rather than three unrelated gadgets.

Ergonomics

The Vision goggles are where comfort really matters, and Antigravity has clearly spent time on fit. The goggles weigh 340 grams, yet the padding and strap geometry distribute that weight in a way that avoids obvious pressure points, even during longer sessions. The side that meets your face feels soft and accommodating, so the hardware never feels harsh. Once the 360-degree image appears, the headset fades faster than you might expect, which is exactly what you want from an immersive device. Optional corrective inserts mean many glasses wearers can enjoy a sharp view without wrestling frames under the band, which makes the experience more inclusive and less fussy.

Power for the goggles lives in a separate battery pack that you can wear on a lanyard around your neck. At 175 grams, it is not heavy, but over time, it can feel slightly cumbersome to have it hanging there, especially when you are moving around. Antigravity sells a 1.2 metre (3.9 foot) USB-C to DC power cable that lets you route the battery to a trouser pocket or bag instead, which makes the whole setup feel less dangly and more integrated.

You adjust the head strap with velcro, which works, but it is not perfect. A small buckle or hinge mechanism would make it much easier to put the goggles on or take them off while wearing a hat, without having to readjust the strap length every time. It is a minor detail, yet it shows how close the design already is. You start wishing for refinements, not fixes.

The Grip controller is where Antigravity’s ergonomic thinking really shows. It rests comfortably in one hand, with a form that supports a natural, slightly relaxed grip rather than a tense, clawed hold. For my hand, it is just a tiny bit on the large side, enough to notice but not enough to break the experience. This is very much nitpicking, and it actually underlines how well resolved the controller already is. When you are down to debating a few millimetres of girth, it means the fundamentals of comfort and control are in a very good place.

Performance

My experience with Antigravity A1 actually started at IFA in Berlin in early September. Outside the exhibition halls, I slipped on the Vision goggles while an Antigravity staff member flew the drone. As the A1 lifted and the IFA venue unfolded beneath me in every direction, my legs actually trembled a little, even though I like heights. Being wrapped in a live 360-degree view felt less like watching a screen and more like I was flying. That first taste was magical, which made me both excited and nervous to test the A1 myself later. I had almost crashed a friend’s drone years ago and had not flown since, so my piloting skills were next to nonexistent.

That magic comes with a setup phase that feels more like preparing a small system than turning on a single gadget. The first time you connect the drone, pair the Vision goggles, update firmware, and learn the grip controls, it can feel overwhelming. There are menus on the drone, options in the goggles, and status lights to decode, and they all compete for your attention at once. After a few sessions, it settles into a rhythm, but that initial ramp is something you feel before you ever lift off on your own.

Mobile app – Tutorial

Packing the Antigravity A1 means finding room for the drone, the goggles and their separate battery, and the grip controller, often in a dedicated case or carefully arranged backpack. This nudges the whole experience away from “throw it in your bag just in case” and toward “plan a proper flying session.” The result is that the A1 feels more like a deliberate outing than a casual accessory.

On paper, the A1 looks quite sensible. With the standard battery, it weighs 249 g, staying just under the 250 g threshold that works nicely with regulations in many places, and it offers up to about 24 minutes of flight time in ideal conditions. Pop in the high-capacity battery, and the weight goes over 250 g, but Antigravity quotes up to around 39 minutes in the air. In reality, you get a solid single session per pack and will want spares if you plan to film seriously.

Flight behaviour is also adjustable. There are three flight modes, Cinematic, Normal, and Sport, so you can match how the drone responds to the scene you are flying in. Together with Free Motion and FPV, that gives the A1 enough range to feel relaxed and floaty when you want it, or more direct and energetic when the shot calls for it.

Vision goggles menu

On top of those basics, Antigravity adds automated tools like Sky Genie, Deep Track, and Sky Path. Sky Genie runs preprogrammed patterns that give you smooth, cinematic moves with minimal effort. Deep Track follows a chosen subject automatically, so you can focus more on timing than stick precision. Sky Path lets you record waypoints and have the A1 repeat the route on its own, which is handy for repeated takes or for nervous pilots.

Safety and workflow sit quietly in the background, which is exactly where they should be. Obstacle sensors on the top and bottom help protect the drone when you are close to structures or changes in elevation, and one-click Return to Home acts as a psychological parachute. Knowing you can call the drone back with a single command does a lot to calm the nerves, especially if your last memory of drones involves a near crash.

In the United States, FAA rules treat goggle-only flying as beyond visual line of sight, so you are meant to have a visual observer watching the drone while you are wearing the headset. That nudges the A1 away from solo, spur-of-the-moment flights and toward planned sessions with someone beside you acting as spotter.

On the imaging side, the A1 records up to 8K 360-degree video, with lower resolutions unlocking higher frame rates when you want smoother motion. Footage can be stored on internal memory or a microSD card, and you can offload it either by removing the card or plugging in via USB-C, so it slips neatly into most existing editing habits.

Vision goggle screen recording

The real leap, though, comes from the goggles. They are the thing that truly sets A1 apart from almost every other consumer drone. Instead of glancing down at a phone, you step into an immersive 360-degree view that tracks your head and surrounds your vision. The drone feels less like a gadget in the sky and more like the spot your eyes and body are occupying. A double-tap on the side button flips you into passthrough view, so you can check your surroundings without pulling the headset off, and a tiny outer display mirrors a miniature version of the live feed for people nearby.

That small detail turned out to be important in Bali, where a group of local kids noticed the goggles and the moving image, wandered over, and suddenly found themselves taking turns “flying” above their own neighbourhood. Their gasps, laughter, and stunned silence were as memorable as the footage itself.

Mobile app

The magic continues even after you land. Because the A1 captures everything in 360 degrees, you can decide on your framing after the flight, which feels a bit like getting a second chance at every shot. Antigravity provides both mobile and desktop apps for this, so you can scrub through the sphere, mark angles, and carve out regular flat videos without having to nail every move in real time.

Desktop app

If you have used the Insta360 app, the Antigravity app will feel instantly familiar, with similar timelines, keyframes, and swipe-to-pan gestures. Even if you have not, it is straightforward to learn, helped by clear icons and responsive previews. There is also an AI auto-edit mode that can assemble quick cuts for you, which is handy when you just want something shareable without sinking an evening into manual reframing.
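For the curious, the core of that reframing trick is plain projection math rather than anything proprietary. The sketch below is not Antigravity's pipeline, just a minimal illustration of how a flat, virtual-camera view can be pulled out of an equirectangular 360 frame with NumPy and OpenCV; the file names and the yaw, pitch, and field-of-view values are arbitrary examples.

```python
# Conceptual reframing sketch: pull a flat, virtual-camera view out of one
# equirectangular 360 frame. Not Antigravity's pipeline, just the projection
# math it rests on. Requires numpy and opencv-python; filenames are examples.
import cv2
import numpy as np

def reframe(equi, yaw_deg, pitch_deg, fov_deg=90, out_w=1920, out_h=1080):
    H, W = equi.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # pinhole focal length

    # A ray through every output pixel; the virtual camera looks down +z
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    x, y, z = (u - out_w / 2) / f, (v - out_h / 2) / f, np.ones((out_h, out_w))

    # Tilt the camera (pitch, about x), then pan it (yaw, about y)
    p, q = np.radians(pitch_deg), np.radians(yaw_deg)
    y2, z2 = y * np.cos(p) - z * np.sin(p), y * np.sin(p) + z * np.cos(p)
    x3, z3 = x * np.cos(q) + z2 * np.sin(q), -x * np.sin(q) + z2 * np.cos(q)

    # Ray direction -> longitude/latitude -> source pixel in the 360 frame
    lon = np.arctan2(x3, z3)                       # [-pi, pi]
    lat = np.arctan2(y2, np.hypot(x3, z3))         # [-pi/2, pi/2]
    map_x = ((lon / np.pi) + 1) / 2 * (W - 1)
    map_y = ((lat / (np.pi / 2)) + 1) / 2 * (H - 1)
    return cv2.remap(equi, map_x.astype(np.float32), map_y.astype(np.float32),
                     cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)

frame = cv2.imread("360_frame.jpg")                # one exported 360 still
cv2.imwrite("reframed.jpg", reframe(frame, yaw_deg=40, pitch_deg=-10))
```

Keyframing a reframe in the mobile or desktop app is, in essence, animating those yaw, pitch, and field-of-view parameters over time.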

In the end, A1’s performance is not just about how long it stays in the air or how many modes it offers. Those pieces matter, and they are solid, but what you remember is the feeling of lifting off inside the goggles and the ease with which you can hand that experience to someone else. It still behaves like a well-mannered compact drone on the spec sheet, yet in use it edges closer to a shared flying machine, one that turns a patch of ground into a small, temporary viewing platform in the sky.

Sustainability

Antigravity does not make any big sustainability claims with the A1. There is no mention of recycled materials or lower-impact manufacturing, and the packaging and hardware feel very much in line with a typical consumer drone. This is not a product that sells itself on being green, and the company does not pretend otherwise. 

What you do get is some support for repairing rather than replacing. The A1 ships with spare propellers in the box, which encourages you to swap out damaged blades instead of treating minor knocks as the end of the drone. Antigravity also sells replacement lenses, so a scratched front element does not automatically become a total write-off. It is a small step, but it nudges the A1 slightly toward a longer, more fixable life rather than a purely disposable gadget.

Value

The standard Antigravity A1 bundle starts at $1,599, with Explorer and Infinity bundles stepping up battery count and accessories for longer, more serious flying. It is undeniably an expensive system, especially compared to regular camera drones that only give you a phone view.

At the same time, what you are really paying for is the experience of being inside the flight and reframing your shots after the fact. That sense of presence and flexibility is hard to put a number on, and for me, it nudges the A1 from “costly gadget” toward something closer to a priceless experience machine, if you know you will actually use it.

Verdict

Antigravity A1 is not the simplest drone in terms of equipment. You are managing goggles, a grip controller, multiple batteries, and, in places like the U.S., a visual observer as well. On top of that, the price sits firmly in premium territory. In return, you get a very different kind of flying. At first, setup and piloting can feel overwhelming, but it becomes natural surprisingly quickly, and there are plenty of automated features to help you keep the drone under control and capture cool shots. Combined with 360-degree capture and post-flight reframing in the Antigravity app, it feels less like operating hardware and more like stepping into a movable viewpoint.

If you just want straightforward aerial clips, the A1 is probably more than you need. If you care about immersive perspective and shared experiences, the mix of kit, software, and feeling it delivers starts to justify the cost. It is fussy, ambitious, and occasionally awkward, yet when you are inside that live 360-degree view, it really does reimagine what a drone can feel like to fly.


ChatGPT-Powered Desk Mic Gives Your Existing Laptop Realtime Translation and Agentic Powers

The most interesting AI hardware this year might not be a new screen or headset. It might be a microphone. Powerrider frames that idea very literally. It takes the form factor of a conference mic and refits it as a GPT‑4o terminal, so the same stem on your desk that handles Zoom calls can also translate in real time, summarize a briefing, or draft follow‑up emails while the meeting is still in progress.

What makes it feel clever is how little ceremony it adds. There is no new display to manage, just a few sculpted buttons for voice input, translation, and AI control. Tap, talk, and the response appears on your existing laptop, ready to paste into a chat, a slide deck, or a script. In a single accessory you get cleaner audio for podcasting and live streaming, plus a dedicated channel that turns casual speech into an ongoing conversation with ChatGPT.

Designer: Powerrider

Click Here to Buy Now: $59 $120 (51% off). Hurry, only a few left!

The hardware itself (model M1) weighs 290 grams and stands 107 millimeters tall, machined from aluminum alloy with a 60‑degree adjustable boom so you can talk comfortably without hunching over your keyboard. The capsule is an omni‑directional condenser tuned to pick up voice across a 100 to 15,000 Hz range, with DSP noise reduction baked into the signal chain. It samples at 16‑bit/48kHz, which puts it squarely in the clean‑enough category for content work without venturing into audiophile overkill. USB‑C handles both power and data, plus there is a 3.5mm jack if you want to monitor through headphones. The base houses four physical controls, each programmable through companion software. One button wakes the AI mode, another triggers translation, a third handles dictation, and the fourth is a rotary knob that doubles as a mute toggle and volume dial.

This is where Powerrider stops being a mic and starts being a control surface. You can map those keys to custom GPT‑4o prompts, so tapping one button might fire off “translate the last paragraph into Spanish and make it sound conversational,” while another could trigger “rewrite this email to sound less corporate.” The software supports Windows 7 and up, plus macOS 10.15 or later, which covers most setups that still get security patches. The AI functions pull from a pretty expansive toolkit: text translation, PPT generation, AI drawing, background removal, speech writing, document conversion, image analysis, code generation, reading comprehension, Q&A, writing assistance, table creation, and mind mapping. Some of those feel gimmicky (I have yet to meet anyone who genuinely wants AI‑generated mind maps on demand), but the core translation and drafting tools hit real pain points if you work across languages or spend half your day rewriting the same three types of message.

The hook here is immediacy. Most of us already talk to ChatGPT, but we do it through a browser tab or a pinned app, which means context‑switching, copying text, pasting prompts, and generally breaking flow. Powerrider tries to make that interaction feel more like push‑to‑talk in a game or on a two‑way radio. You hold a key, speak the command, release, and the result lands in your active window or in a floating overlay, depending on how you configure it. That workflow collapses a six‑step process (open ChatGPT, type or paste, wait, copy response, switch back, paste again) into a two‑step one (press, speak). If you live in tools like Notion, Google Docs, or any IDE that supports text injection, the time savings compound quickly. The software also handles screenshot translation, which is genuinely useful if you are reading documentation, design files, or research papers in another language and want inline conversion without manually copying blocks of text into DeepL.
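Powerrider's companion software is closed, so treat the following as a conceptual sketch of that two-step loop rather than its actual code: it assumes the held button has already saved a recording, the button names and canned prompts are invented for illustration, and it uses OpenAI's public Python client for the transcription and GPT-4o calls.

```python
# A rough sketch of the press-speak-paste loop, not Powerrider's actual
# software: it assumes the held button has already saved a recording, and
# the button names and canned prompts here are invented for illustration.
# Uses OpenAI's public Python client (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BUTTON_PROMPTS = {
    "translate": "Translate this into Spanish and make it sound conversational:",
    "email": "Rewrite this email to sound less corporate:",
}

def handle_button(button: str, audio_path: str) -> str:
    # Step 1: speech to text for the audio captured while the key was held
    with open(audio_path, "rb") as f:
        heard = client.audio.transcriptions.create(model="whisper-1", file=f)
    # Step 2: run that button's canned prompt against GPT-4o
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"{BUTTON_PROMPTS[button]}\n\n{heard.text}"}],
    )
    return reply.choices[0].message.content  # ready to paste into any window

print(handle_button("translate", "held_key_recording.wav"))
```

The hardware's whole value proposition is collapsing that function into a single key press instead of a tab switch.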

Because the mic itself is a legitimate audio interface, you can use it in OBS, Zoom, or any DAW that recognizes standard USB microphones. The frequency response is wide enough for vocal clarity but not so hyped that you get harsh sibilance or boomy proximity effects. Think more “podcast interview” than “ASMR whisper track.” The omni pickup pattern means you do not have to aim it perfectly, which is nice if you are someone who gestures while talking or shifts around in your chair. The DSP noise reduction does a decent job of killing keyboard clatter and ambient hum, though it is not going to save you if you are recording next to a mechanical keyboard with clicky blues or a window AC unit. For meeting‑quality audio and streaming voiceover work, it sits comfortably in the same tier as entry‑level USB mics like the Blue Yeti Nano or the HyperX SoloCast, but with the GPT layer on top.

The company behind the Powerrider is positioning this as part of a broader peripheral ecosystem, which is where things get more interesting. They are also offering an AI‑powered keyboard (model K1) and an AI‑powered mouse (model S1), both of which follow the same philosophy: take an essential input device and wire it directly to GPT‑4o so you can invoke AI functions without leaving your workspace. The keyboard is a 98‑key Crater mechanical with RGB backlighting, a volume knob, and three custom macro keys dedicated to AI tasks. It supports both wired USB and wireless 2.4GHz/Bluetooth 5.0 across four channels, and the battery will run for 148 hours of continuous typing with the backlight off, or about 16 hours with the RGB cranked. The mouse is a wireless optical with adjustable DPI up to 4000, seven buttons (including dedicated AI, custom, and search keys), and a two‑hour charge time for what they claim is several days of use. Both peripherals plug into the same software suite as the mic, so you can trigger translation, text generation, or document conversion from any of the three devices depending on which one is closest to your hand.

Powerrider is live on Kickstarter right now with a few weeks left in the campaign, and the pricing is structured around bundles. A single mic starts at $59 for the super early bird tier (limited to 300 units) or $69 for the regular early bird. The full “Powerrider AI One Suite” bundle, which includes one mic, one keyboard, and one mouse, is priced at $269 (down from a claimed $608 MSRP). You can also grab the mic plus keyboard for $169 or the mic plus mouse for $149. Add‑on pricing if you are already backing is $119 for the keyboard, $99 for the mouse, and $59 for an extra mic. Those numbers put the mic roughly on par with mid‑tier USB condensers, but with the AI layer effectively thrown in as the value‑add. Whether that trade‑off makes sense depends entirely on how much friction you currently feel when bouncing between your tools and ChatGPT, and whether you are willing to let a hardware button own part of that workflow instead of a keyboard shortcut or Alfred snippet.

Click Here to Buy Now: $59 $120 (51% off). Hurry, only a few left!


How to Spot Fake AI Products at CES 2026 Before You Buy

Merriam-Webster just named “slop” its word of the year, defining it as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” The choice is blunt, almost mocking, and it captures something that has been building for months: a collective exhaustion with AI hype that promises intelligence but delivers mediocrity. Over the past three months, that exhaustion has started bleeding into Wall Street. Investors, analysts, and even CEOs of AI companies themselves have been openly questioning whether we are living through an AI bubble. OpenAI’s Sam Altman warned in August that investors are “overexcited about AI,” and Google’s Sundar Pichai admitted to “elements of irrationality” in the sector. The tech industry is pouring trillions into AI infrastructure while revenues lag far behind, raising fears of a dot-com-style correction that could rattle the entire economy.

CES 2026 is going to be ground zero for this tension. Every booth will have an “AI-powered” sticker on something, and a lot of those products will be genuine innovations built on real on-device intelligence and agentic workflows. But a lot of them will also be slop: rebranded features, cloud-dependent gimmicks, and shallow marketing plays designed to ride the hype wave before it crashes. If you are walking the show floor or reading coverage from home, knowing how to separate real AI from fake AI is not just a consumer protection issue anymore. It is a survival skill for navigating a market that feeds on confusion and a general lack of awareness around actual Artificial Intelligence.

1. If it goes offline and stops working, it was never really AI

The simplest test for fake AI is also the most reliable: ask what happens when the internet connection drops. Real AI that lives on your device will keep functioning because the processing is happening locally, using dedicated chips and models stored in the gadget itself. Fake AI is just a thin client that calls a cloud API, and the moment your Wi-Fi cuts out, the “intelligence” disappears with it.

Picture a laptop at CES 2026 that claims to have an AI writing assistant. If that assistant can still summarize documents, rewrite paragraphs, and handle live transcription when you are on a plane with no internet, you are looking at real on-device AI. If it gives you an error message the second you disconnect, it is cloud-dependent marketing wrapped in an “AI PC” label. The same logic applies to TVs, smart home devices, robot vacuums, and wearables. Genuine AI products are designed to think locally, with cloud connectivity as an optional boost rather than a lifeline.

The distinction matters because on-device AI is expensive to build. It requires new silicon, tighter integration between hardware and software, and real engineering effort. Companies that invested in that infrastructure will want you to know it works offline because that is their competitive edge. Companies that skipped that step will either avoid the question or bury it in fine print. At CES 2026, press the demo staff on this: disconnect the device from the network and see if the AI features still run. If they do not, you just saved yourself from buying rebranded cloud software in a shiny box.

If your Robot Vacuum has Microsoft Copilot, RUN!

2. If it’s just a chatbot, it isn’t AI… it’s GPT Customer Care

The laziest fake AI move at CES 2026 will be products that open a chat window, let you type questions, and call that an AI feature. A chatbot is not product intelligence. It is a generic language model wrapper that any company can license from OpenAI, Anthropic, or Google in about a week, then slap their logo on top and call it innovation. If the only AI interaction your gadget offers is typing into a text box and getting conversational responses, you are not looking at an AI product. You are looking at customer service automation dressed up as a feature.

Real AI is embedded in how the product works. It is the robot vacuum that maps your home, decides which rooms need more attention, and schedules itself around your routine without you opening an app. It is the laptop that watches what you do, learns your workflow, and starts suggesting shortcuts or automating repetitive tasks before you ask. It is the TV that notices you always pause shows when your smart doorbell rings and starts doing it automatically. None of that requires a chat interface because the intelligence is baked into the behavior of the device itself, not bolted on as a separate conversation layer.

If a company demo at CES 2026 starts with “just ask it anything,” probe deeper. Can it take actions across the system, or does it just answer questions? Does it learn from how you use the product, or is it the same canned responses for everyone? Is the chat interface the only way to interact with the AI, or does the product also make smart decisions in the background without prompting? A chatbot can be useful, but it is table stakes now, not a differentiator. If that is the whole AI story, the company did not build AI into their product. They rented a language model and hoped you would not notice.

3. If the AI only does one narrow thing, it is probably just a renamed preset

Another red flag is when a product’s AI feature is weirdly specific and cannot generalize beyond a single task. A TV that has “AI motion smoothing” but no other intelligent behavior is not running a real AI model; it is running the same interpolation algorithm TVs have had for years, now rebranded with an AI label. A camera that has “AI portrait mode” but cannot recognize anything else is likely just using a basic depth sensor and calling it artificial intelligence. Real AI, especially the kind built into modern chips and operating systems, is designed to generalize across tasks: it can recognize objects, understand context, predict user intent, and coordinate with other devices.

Ask yourself: does this product’s AI learn, adapt, or handle multiple scenarios, or does it just trigger a preset when you press a button? If it is the latter, you are looking at a marketing gimmick. Fake AI products love to hide behind phrases like “AI-enhanced” or “AI-optimized,” which sound impressive but are deliberately vague. Real AI products will tell you exactly what the system is doing: “on-device object recognition,” “local natural language processing,” “agentic task coordination.” Specificity is a sign of substance. Vagueness is a sign of slop.

The other giveaway is whether the AI improves over time. Genuine AI systems get smarter as they process more data and learn from user behavior, often through firmware updates that improve the underlying models. Fake AI products ship with a fixed set of presets and never change. At CES 2026, ask demo reps if the product’s AI will improve after launch, how updates work, and whether the intelligence adapts to individual users. If they cannot give you a clear answer, you are looking at a one-time software trick masquerading as artificial intelligence.

Don’t fall for ‘AI Enhancement’ presets or buttons that don’t do anything related to AI.

4. If the company cannot explain what the AI actually does, walk away

Fake AI thrives on ambiguity. Companies that bolt a chatbot onto a product and call it AI-powered know they do not have a real differentiator, so they lean into buzzwords and avoid specifics. Real AI companies, by contrast, will happily explain what their models do, where the processing happens, and what problems the AI solves that the previous generation could not. If a booth rep at CES 2026 gives you vague non-answers like “it uses machine learning to optimize performance” without defining what gets optimized or how, that is a warning sign.

Push for concrete examples. If a smart home hub claims to have AI coordination, ask: what decisions does it make on its own, and what still requires manual setup? If a wearable says it has AI health coaching, ask: is the analysis happening on the device or in the cloud, and can it work offline while hiking in the wilderness? If a laptop advertises an AI assistant, ask: what can it do without an internet connection, and does it integrate with other apps (agentic) or just sit in a sidebar? Companies with real AI will have detailed, confident answers because they built the system from the ground up. Companies with fake AI will deflect, generalize, or change the subject.

The other test is whether the AI claim matches the price and the hardware. If a $200 gadget promises the same on-device AI capabilities as a $1,500 laptop with a dedicated neural processing unit, somebody is lying. Real AI requires real silicon, and that silicon costs money. Budget products can absolutely have useful AI features, but they will typically offload more work to the cloud or use simpler models. If the pricing does not line up with the technical claims, it is worth being skeptical. At CES 2026, ask what chip is powering the AI, whether it has a dedicated NPU, and how much of the intelligence is local versus cloud-based. If they cannot or will not tell you, that is your cue to move on.

5. Check if the AI plays well with others, or if it lives in a silo

One of the clearest differences between real agentic AI and fake “AI inside” products is interoperability. Genuine AI systems are designed to coordinate with other devices, share context, and act on your behalf across an ecosystem. Fake AI products exist in isolation: they have a chatbot you can talk to, but it does not connect to anything else, and it cannot take actions beyond its own narrow interface. Samsung’s CES 2026 exhibit is explicitly built around AI and interoperability, with appliances, TVs, and smart home products all coordinated by a shared AI layer. That is what real agentic AI looks like: the fridge, washer, vacuum, and thermostat all understand context and can make decisions together without you micromanaging each one. Fake AI, by contrast, gives you five isolated apps with five separate chatbots, none of which talk to each other. If a product at CES 2026 claims to have AI but cannot integrate with the rest of your smart home, car, or workflow, it is not delivering the core promise of agentic systems.

Ask demo reps: does this work with other brands, or only within your ecosystem? Can it trigger actions in other apps or devices, or does it just respond to questions? Does it understand my preferences across multiple products, or does each device start from scratch? Companies that built real AI ecosystems will brag about cross-device coordination because it is hard to pull off and it is the whole point. Companies selling fake AI will either avoid the topic or try to upsell you on buying everything from them, which is a sign they do not have real interoperability.

6. When in doubt, look for the slop

The rise of AI-generated “slop” gives you a shortcut for spotting lazy AI products: if the marketing materials, product images, or demo videos look AI-generated and low-effort, the product itself is probably shallow too. Merriam-Webster defines slop as low-quality digital content produced in quantity by AI, and it has flooded everything from social media to advertising to product launches. Brands that cut corners on their own marketing by using obviously AI-generated visuals are signaling that they also cut corners on the actual product development.

Watch for telltale signs: weird proportions in product photos, uncanny facial expressions in lifestyle shots, text that sounds generic and buzzword-heavy with no real specifics, and claims that are too good to be true with no technical backing. Real AI products are built by companies that care about craft, and that care shows up in how they present the product. Fake AI products are built by companies chasing a trend, and the slop in their marketing is the giveaway. At CES 2026, trust your instincts: if the booth, the video, or the pitch feels hollow and mass-produced, the gadget probably is too.


This $2,899 Desktop AI Computer With RTX 5090M Lets You Cancel Every AI Subscription Forever

Look across the history of consumer tech and a pattern appears. Ownership gives way to services, and services become subscriptions. We went from stacks of DVDs to streaming movies online, from external drives for storing data and backups to cloud drives, from MP3s on a player to Spotify subscriptions, from one-time software licenses to recurring plans. But when AI arrived, it skipped the ownership phase entirely. Intelligence came as a service, priced per month or per million tokens. No ownership, no privacy. Just a $20-a-month fee.

A device like Olares One rearranges that relationship. It compresses a full AI stack into a desktop sized box that behaves less like a website and more like a personal studio. You install models the way you once installed apps. You shape its behavior over time, training it on your documents, your archives, your creative habits. The result is an assistant that feels less rented and more grown, with privacy, latency, and long term cost all tilting back toward the owner.

Designer: Olares

Click Here to Buy Now: $2,899 $3,999 (28% off). Hurry! Only 15/320 units left!

The pitch is straightforward. Take the guts of a $4,000 gaming laptop, strip out the screen and keyboard, put everything in a minimalist chassis that looks like Apple designed a chonky Mac mini, and tune it for sustained performance instead of portability. It measures 320 x 197 x 55mm, weighs 2.15 kg without the PSU, and pulls 330 watts under full load. Inside sits an Intel Core Ultra 9 275HX with 24 cores running up to 5.4 GHz and 36 MB of cache, the same chip you would find in flagship creator laptops this year. The GPU is an NVIDIA GeForce RTX 5090 Mobile with 24 GB of GDDR7 VRAM, 1824 AI TOPS of tensor performance, and a 175W max TGP. Pair that with 96 GB of DDR5 RAM at 5600 MHz and a PCIe 4.0 NVMe SSD, and you have workstation level compute in a box smaller than most soundbars.

Olares OS runs on top of all that hardware, and it is open source, which means you can audit the code, fork it, or wipe it entirely if you want. Out of the box it behaves like a personal cloud with an app store containing over 200 applications ready to deploy with one click. Think Docker and Kubernetes, but without needing to touch a terminal unless you want to. The interface looks clean, almost suspiciously clean, like someone finally asked what would happen if you gave a NAS the polish of an iPhone. You get a unified account system so all your apps share a single login, configurable multi factor authentication, enterprise grade sandboxing for third party apps, and Tailscale integration that lets you access your Olares box securely from anywhere in the world. Your data stays on your hardware, full stop.

I have been tinkering with local LLMs for the past year, and the setup has always been the worst part. You spend hours wrestling with CUDA drivers, Python environments, and obscure GitHub repos just to get a model running, and then you realize you need a different frontend for image generation and another tool for managing multiple models and suddenly you have seven terminal windows open and nothing talks to each other. Olares solves that friction by bundling everything into a coherent ecosystem. Chat agents like Open WebUI and Lobe Chat, general agents like Suna and OWL, AI search with Perplexica and SearXNG, coding assistants like Void, design agents like Denpot, deep research tools like DeerFlow, task automation with n8n and Dify. Local LLM runtimes include Ollama, vLLM, and SGLang. You also get observability tools like Grafana, Prometheus, and Langfuse so you can actually monitor what your models are doing. The philosophy is simple. String together workflows that feel as fluid as using a cloud service, except everything runs on metal you control.
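To make "runs on metal you control" concrete, here is roughly what any of those frontends is doing under the hood when Ollama is the runtime: one HTTP call to a server on localhost. This is Ollama's standard REST API, not Olares-specific code, and it assumes you have already pulled a model.

```python
# What "runs on metal you control" looks like at the lowest level: Ollama
# serves a REST API on localhost, so a chat turn is one HTTP call that never
# leaves the machine. Assumes a model was pulled first, e.g. `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3",
          "prompt": "In one sentence, why does local inference matter?",
          "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```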

Gaming on this thing is a legitimate use case, which feels almost incidental given the AI focus but makes total sense once you look at the hardware. That RTX 5090 Mobile with 24 GB of VRAM and 175 watts of power can handle AAA titles at high settings, and because the machine is designed as a desktop box, you can hook it up to any monitor or TV you want. Olares positions this as a way to turn your Steam library into a personal cloud gaming service. You install your games on the Olares One, then stream them to your phone, tablet, or laptop from anywhere. It is like running your own GeForce Now or Xbox Cloud Gaming, except you own the server and there are no monthly fees eating into your budget. The 2 TB of NVMe storage gives you room for a decent library, and if you need more, the system uses standard M.2 drives, so upgrades are straightforward.

Cooling is borrowed from high-end laptops, with a 2.8mm vapor chamber and a 176-layer copper fin array handling heat dissipation across a massive 310,000 square millimeter surface. Two custom 54-blade fans keep everything moving, and the acoustic tuning is genuinely impressive. At idle, the system sits at 19 dB, which is whisper quiet. Under full GPU and CPU load, it climbs to 38.8 dB, quieter than most gaming desktops and even some laptops. Thermal control keeps things stable at 43.8 degrees Celsius under sustained loads, which means you can run inference on a 70B model or render a Blender scene without the fans turning into jet engines. I have used plenty of small form factor PCs that sound like they are preparing for liftoff the moment you ask them to do anything demanding, so this is a welcome change.

RAGFlow and AnythingLLM handle retrieval augmented generation, which lets you feed your own documents, notes, and files into your AI models so they can answer questions about your specific data. Wise and Files manage your media and documents, all searchable and indexed locally. There is also a “digital secret garden” feature, an AI-powered, local-first reader for articles and research, with third-party integration so you can pull in content from RSS feeds or save articles for later. The configuration hub lets you manage storage, backups, network settings, and app deployments without touching config files, and there is a full Kubernetes console if you want to go deep. The no-CLI Kubernetes interface is a big deal for people who want the power of container orchestration but do not want to memorize kubectl commands. You get centralized control, performance monitoring at a glance, and the ability to spin up or tear down services in seconds.
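Retrieval augmented generation sounds grander than it is. Stripped of the polish RAGFlow and AnythingLLM add, the loop looks something like the sketch below, again leaning only on Ollama's local endpoints; the embedding model, the toy notes, and the single-chunk retrieval are simplifications for illustration.

```python
# Bare-bones retrieval augmented generation, the pattern RAGFlow and
# AnythingLLM industrialize: embed notes locally, find the chunk closest to
# the question, and prepend it to the prompt. Everything stays on localhost;
# the embedding model and toy notes are illustrative choices.
import numpy as np
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str) -> np.ndarray:
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return np.array(r.json()["embedding"])

notes = ["Invoice 118 was paid on March 3.",
         "The Berlin team owns the firmware roadmap.",
         "Renders are archived on the NAS under /projects/2025."]
vectors = np.stack([embed(n) for n in notes])

question = "Who owns the firmware roadmap?"
q = embed(question)
scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
context = notes[int(np.argmax(scores))]            # best-matching note

answer = requests.post(f"{OLLAMA}/api/generate",
                       json={"model": "llama3", "stream": False,
                             "prompt": f"Context: {context}\n\n"
                                       f"Question: {question}"})
print(answer.json()["response"])
```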

Olares makes a blunt economic argument. If you are using Midjourney, Runway, ChatGPT Pro, and Manus for creative work, you are probably spending around $6,456 per year per user. For a five person team, that balloons to $32,280 annually. Olares One costs $2,899 for the hardware (early-bird pricing, $3,999 MSRP), which at MSRP works out to about $22.20 per month per user over three years if you split it across a five person team, and closer to $16 at the early-bird price. Your data stays private, stored locally on your own hardware instead of floating through someone else’s data center. You get a unified hub of over 200 apps with one click installs, so there are no fragmented tools or inconsistent experiences. Performance is fast and reliable, even when you are offline, because everything runs on device. You own the infrastructure, which means unconditional and sovereign control over your tools and data. The rented AI stack leaves you as a tenant with conditional and revocable access.
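Those figures are easy to replay yourself; the only catch is that the $22.20 claim is pinned to the MSRP rather than the discounted price:

```python
# Replaying the comparison with Olares's own figures. Note the $22.20 claim
# only works out at the $3,999 MSRP; the early-bird price lands lower.
subs_per_user_per_year = 6456
team = 5
print(subs_per_user_per_year * team)           # 32280 rented, per year

msrp, early_bird, months = 3999, 2899, 36
print(round(msrp / (months * team), 2))        # 22.22 per user per month
print(round(early_bird / (months * team), 2))  # 16.11 per user per month
```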

Ports include Thunderbolt 5, RJ45 Ethernet at 2.5 Gbps, USB A, and HDMI 2.1, plus Wi-Fi 7 and Bluetooth 5.4 for wireless connectivity. The industrial design leans heavily into the golden ratio aesthetic, with smooth curves and a matte aluminum finish that would not look out of place next to a high end monitor or a piece of studio equipment. It feels like someone took the guts of a $4,000 gaming laptop, stripped out the compromises of portability, and optimized everything for sustained performance and quietness. The result is a machine that can handle creative work, AI experimentation, gaming, and personal cloud duties without breaking a sweat or your eardrums.

Olares One is available now on Kickstarter, with units expected to ship early next year. The base configuration with the RTX 5090 Mobile, Intel Core Ultra 9 275HX, 96 GB RAM, and 2 TB SSD is priced at a discounted $2,899 for early-bird backers (MSRP $3,999). That is still a substantial upfront cost, but when you compare it to the ongoing expense of cloud AI subscriptions and the privacy compromises that come with them, the math starts to make sense. You pay once, and the machine is yours. No throttling, no price hikes, no terms of service updates that quietly change what the company can do with your data. If you have been looking for a way to bring AI home without sacrificing capability or convenience, this is probably the most polished attempt at that idea so far.

Click Here to Buy Now: $2,899 $3,999 (28% off). Hurry! Only 15/320 units left!


This $7,000 Robot Shapeshifts Into 3 Different Machines

Imagine a robot that can transform like a high-tech LEGO set, swapping out legs for arms or wheels depending on what the day throws at it. That’s exactly what LimX Dynamics has cooked up with their latest creation, the Tron 2, and honestly, it’s making me rethink everything I thought I knew about what robots can do.

The Tron 2 isn’t your typical one-trick-pony robot. This thing is basically the Swiss Army knife of the robotics world. Chinese startup LimX Dynamics just unveiled this modular marvel that can morph between three completely different configurations: a dual-armed humanoid torso, a wheeled-leg explorer, or a bipedal walker that can actually climb stairs without making you nervous. And get this: you can switch between these forms with just a screwdriver. No fancy tools, no complicated procedures. Just some strategic unscrewing and you’ve got a whole new robot.

Designer: LimX Dynamics

The company’s demo video starts with something delightfully surreal: just a pair of robotic legs casually strolling along, completely headless and armless. Then, like watching a transformer come to life in real time, those same leg components get repurposed into arms, complete with a head and torso. Suddenly, you’ve got a full humanoid lifting heavy water bottles and showing off its surprisingly impressive strength.

What makes the Tron 2 particularly fascinating is its intelligence layer. This isn’t just a mechanical chameleon. It’s powered by advanced AI and built on what’s called a vision-language-action platform, which essentially means it can see, understand commands, and actually do something useful with that information. The robot comes with a fully open software development kit that plays nice with both ROS1 and ROS2, making it a dream for researchers and developers who want to experiment without fighting proprietary systems.
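LimX has not published the Tron 2's full API surface, but ROS 2 compatibility implies the standard development workflow applies. As a hedged sketch, a minimal rclpy node that streams velocity commands would look like the following; the /cmd_vel topic and Twist message are generic ROS conventions, not confirmed Tron 2 interfaces.

```python
# Minimal ROS 2 node of the kind Tron 2's SDK claims to support. The topic
# name and message type are generic ROS conventions, not confirmed LimX
# interfaces: treat this as a sketch of the development workflow only.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class Teleop(Node):
    def __init__(self):
        super().__init__("tron2_teleop")
        self.pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.tick)  # 10 Hz command loop

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.2   # creep forward at 0.2 m/s
        msg.angular.z = 0.0  # no turning
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = Teleop()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```

For a research lab, that familiarity is the point: the same node could drive the wheeled, bipedal, or humanoid configuration without rewriting the tooling around it.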

Performance-wise, the specs are genuinely impressive. Each of its dual arms features seven degrees of freedom and a 70-centimeter reach, and together the arms can handle up to 10 kilograms of payload. The wheeled configuration offers about four hours of runtime and can haul around 30 kilograms of cargo, while the bipedal mode excels at navigating tricky terrain like staircases that would leave most wheeled robots stuck at the bottom. The demo footage shows Tron 2 doing things that feel almost show-offy: playing table tennis, performing cartwheels, rolling around smoothly on wheels, and conquering staircases with the confidence of someone who’s done it a thousand times. It’s the kind of versatility that makes you wonder why we’ve been so committed to single-purpose robots for so long.

And here’s where things get really interesting. LimX is positioning the Tron 2 as ideal for future Mars missions. Think about it: on Mars, you can’t exactly call a repair truck when something breaks or send a specialized robot for every different task. You need something adaptable, something that can switch roles as mission needs evolve. The modular design means you could potentially swap out damaged components or reconfigure for different tasks without needing an entirely new robot shipped from Earth.

For research labs, the Tron 2 offers something that’s been surprisingly rare: a flexible test bed that can support multiple types of projects without requiring a whole fleet of different robots. Whether you’re studying manipulation, locomotion, or AI integration, you can configure the same platform to suit your specific needs. Perhaps most surprisingly, this technological marvel starts at just 49,800 Chinese yuan, which translates to around $7,000 USD. For context, that’s dramatically cheaper than many specialized robots that can only do a fraction of what the Tron 2 offers. Pre-orders are already open, though LimX hasn’t fully disclosed all the pricing details or specified exactly who their target customers are.

The Tron 2 represents something bigger than just another cool robot demo. It’s pointing toward a future where adaptability matters more than specialization, where one well-designed platform can handle whatever challenges come its way. Whether it ends up exploring Mars or revolutionizing warehouse operations here on Earth, this shape-shifting bot is definitely one to watch.


How AI Will Be Different at CES 2026: On‑Device Processing and Actual Agentic Productivity

Last year, every other product at CES had a chatbot slapped onto it. Your TV could talk. Your fridge could answer trivia. Your laptop had a sidebar that would summarize your emails if you asked nicely. It was novel for about five minutes, then it became background noise. The whole “AI revolution” at CES 2024 and 2025 felt like a tech industry inside joke: everyone knew it was mostly marketing, but nobody wanted to be the one company without an AI sticker on the booth.

CES 2026 is shaping up differently. Coverage ahead of the show is already calling this the year AI stops being a feature you demo and starts being infrastructure you depend on. The shift is twofold: AI is moving from the cloud onto the device itself, and it is evolving from passive assistants that answer questions into agentic systems that take action on your behalf. Intel has confirmed it will introduce Panther Lake CPUs, AMD CEO Lisa Su is headlining the opening keynote with expectations around a Ryzen 7 9850X3D reveal, and Nvidia is rumored to be prepping an RTX 50 “Super” refresh. The silicon wars are heating up precisely because the companies making chips know that on-device AI is the only way this whole category becomes more than hype. If your gadget still depends entirely on a server farm to do anything interesting, it is already obsolete. Here’s what to expect at CES 2026… but more importantly, what to expect from AI in the near future.

Your laptop is finally becoming the thing running the models

Intel, AMD, and Nvidia are all using CES 2026 as a launching pad for next-generation silicon built around AI workloads. Intel has publicly committed to unveiling its Panther Lake CPUs at the show, chips designed with dedicated neural processing units baked in. AMD’s Lisa Su is doing the opening keynote, with strong buzz around a Ryzen 7 9850X3D that would appeal to gamers and creators who want local AI performance without sacrificing frame rates or render times. Nvidia’s press conference is rumored to focus on RTX 50 “Super” cards that push both graphics and AI inference into new territory. The pitch is straightforward: your next laptop or desktop is not a dumb terminal for ChatGPT; it is the machine actually running the models.

What does that look like in practice? Laptops at CES 2026 will be demoing live transcription and translation that happens entirely on the device, no cloud round trip required. You will see systems that can summarize browser tabs, rewrite documents, and handle background removal on video calls without sending a single frame to a server. Coverage is already predicting a big push toward on-device processing specifically to keep your data private and reduce reliance on cloud infrastructure. For gamers, the story is about AI upscaling and frame generation becoming table stakes, with new GPUs sold not just on raw FPS but on how quickly they can run local AI tools for modding, NPC dialogue generation, or streaming overlays. This is the year “AI PC” might finally mean something beyond a sticker.
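You can already feel what that on-device pitch means with open tooling. OpenAI's open-source Whisper, for instance, transcribes and translates entirely on the local machine once its weights are cached; the file name below is a placeholder.

```python
# The offline litmus test in practice: OpenAI's open-source Whisper runs
# entirely on the local machine once its weights are downloaded and cached
# (pip install openai-whisper). Unplug the network and this keeps working.
import whisper

model = whisper.load_model("base")           # small model, fetched once
print(model.transcribe("meeting.wav")["text"])                    # transcribe
print(model.transcribe("meeting.wav", task="translate")["text"])  # to English
```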

Agentic AI is the difference between a chatbot and a butler

Pre-show coverage is leaning heavily on the phrase “agentic AI,” and it is worth understanding what that actually means. Traditional AI assistants answer questions: you ask for the weather, you get the weather. Agentic AI takes goals and executes multi-step workflows to achieve them. Observers expect to see devices at CES 2026 that do not just plan a trip but actually book the flights and reserve the tables, acting on your behalf with minimal supervision. The technical foundation for this is a combination of on-device models that understand context and cloud-based orchestration layers that can touch APIs, but the user experience is what matters: you stop micromanaging and start delegating.

Samsung is bringing its largest CES exhibit to date, merging home appliances, TVs, and smart home products into one massive space with AI and interoperability as the core message. Imagine a fridge, washer, TV, robot vacuum, and phone all coordinated by the same AI layer. The system notices you cooked something smoky, runs the air purifier a bit harder, and pushes a recipe suggestion based on leftovers. Your washer pings the TV when a cycle finishes, and the TV pauses your show at a natural break. None of this requires you to open an app or issue voice commands; the devices are just quietly making decisions based on context. That is the agentic promise, and CES 2026 is where companies will either prove they can deliver it or expose themselves as still stuck in the chatbot era.

Robot vacuums are the first agentic AI success story you can actually buy

CES 2026 is being framed by dedicated floorcare coverage as one of the most important years yet for robot vacuums and AI-powered home cleaning, with multiple brands receiving Innovation Awards and planning major product launches. This category quietly became the testing ground for agentic AI years before most people started using the phrase. Your robot vacuum already maps your home, plans routes, decides when to spot-clean high-traffic areas, schedules deep cleans when you are away, and increasingly maintains itself by emptying dust and washing its own mop pads. It does all of this with minimal cloud dependency; the brains are on the bot.

LG has already won a CES 2026 Innovation Award for a robot vacuum with a built-in station that hides inside an existing cabinet cavity, turning floorcare into an invisible, fully hands-free system. Ecovacs is previewing the Deebot X11 OmniCyclone as a CES 2026 Innovation Awards Honoree and promising its most ambitious lineup to date, pushing into whole-home robotics that go beyond vacuuming. Robotin is demoing the R2, a modular robot that combines autonomous vacuuming with automated carpet washing, moving from daily crumb patrol to actual deep cleaning. These bots are starting to integrate with broader smart home ecosystems, coordinating with your smart lock, thermostat, and calendar to figure out when you are home, when kids are asleep, and when the dog is outside. The robot vacuum category is proof that agentic AI can work in the real world, and CES 2026 is where other product categories are going to try to catch up.

TVs are getting Micro RGB panels and AI brains that learn your taste

LG has teased its first Micro RGB TV ahead of CES 2026, positioning it as the kind of screen that could make OLED owners feel jealous thanks to advantages in brightness, color control, and longevity. Transparent OLED panels are also making appearances in industrial contexts, like concept displays inside construction machinery cabins, hinting at similar tech eventually showing up in living rooms as disappearing TVs or glass partitions that become screens on demand. The hardware story is always important at CES, but the AI layer is where things get interesting for everyday use.

TV makers are layering AI on top of their panels in ways that go beyond simple upscaling. Expect personalized picture and sound profiles that learn your room conditions, content preferences, and viewing habits over time. The pitch is that your TV will automatically switch to low-latency gaming mode when it recognizes you launched a console, dim your smart lights when a movie starts, and adjust color temperature based on ambient light without you touching a remote. Some of this is genuine machine learning happening on-device, and some of it is still marketing spin on basic presets. The challenge for readers at CES 2026 will be figuring out which is which, but the direction is clear: TVs are positioning themselves as smart hubs that coordinate your living room, not just dumb displays waiting for HDMI input.

Gaming gear is wiring itself for AI rendering and 500 Hz dreams

The HDMI Licensing Administrator is using CES 2026 to spotlight advanced HDMI gaming technologies with live demos focused on very high refresh rates and next-gen console and PC connectivity. Early prototypes of the Ultra96 HDMI cable, part of the new HDMI 2.2 specification, will be on display with the promise of higher bandwidth to support extreme refresh rates and resolutions. Picture a rig on the show floor: a 500 Hz gaming monitor, next-gen GPU, HDMI 2.2 cable, running an esports title at absurd frame rates with variable refresh rate and minimal latency. It is the kind of setup that makes Reddit threads explode.

GPUs are increasingly sold not just on raw FPS but on AI capabilities. AI upscaling like DLSS is already table stakes, but local AI is also powering streaming tools for background removal, audio cleanup, live captions, and even dynamic NPC dialogue in future games that require on-device inference rather than server-side processing. Nvidia’s rumored RTX 50 “Super” refresh is expected to double down on this positioning, selling the cards as both graphics and AI accelerators. For gamers and streamers, CES 2026 is where the industry will make the case that your rig needs to be built for AI workloads, not just prettier pixels. The infrastructure layer, cables and monitors included, is catching up to match that ambition.

What CES 2026 really tells us about where AI is going

The shift from cloud-dependent assistants to on-device agents is not just a technical upgrade; it is a fundamental change in how gadgets are designed and sold. When Intel, AMD, and Nvidia are all racing to build chips with dedicated AI accelerators, and when Samsung is reorganizing its entire CES exhibit around AI interoperability, the message is clear: companies are betting that local intelligence and cross-device coordination are the only paths forward. The chatbot era served its purpose as a proof of concept, but CES 2026 is where the industry starts delivering products that can think, act, and coordinate without constant cloud supervision.

What makes this year different from the past two is that the infrastructure is finally in place. The silicon can handle real-time inference. The software frameworks for agentic behavior are maturing. Robot vacuums are proving the model works at scale. TVs and smart home ecosystems are learning how to talk to each other without requiring users to become IT managers. The pieces are connecting, and CES 2026 is the first major event where you can see the whole system starting to work as one layer instead of a collection of isolated features.

The real question is what happens after the demos

Trade shows are designed to impress, and CES 2026 will have no shortage of polished demos where everything works perfectly. The real test comes in the six months after the show, when these products ship and people start using them in messy, real-world conditions. Does your AI PC actually keep your data private when it runs models locally, or does it still phone home for half its features? Does your smart home coordinate smoothly when you add devices from different brands, or does it fall apart the moment something breaks the script? Do robot vacuums handle the chaos of actual homes, or do they only shine in controlled environments?

The companies that win in 2026 and beyond will be the ones that designed their AI systems to handle failure, ambiguity, and the unpredictable messiness of how people actually live. CES 2026 is where you will see the roadmap. The year after is where you will see who actually built the roads. If you are walking the show floor or following the coverage, the most important question is not “what can this do in a demo,” but “what happens when it breaks, goes offline, or encounters something it was not trained for.” That is where the gap between real agentic AI and rebranded presets will become impossible to hide.

Music-reactive LED Christmas tree turns holiday decor into an interactive display

Holiday lighting has long relied on repeated patterns and static effects, but this music-reactive LED Christmas tree brings a new dimension to seasonal decor by turning sound into visual effects. The project pairs a simple wooden frame with off-the-shelf LEDs and an audio sensor to create a festive display that animates in real time with sound. Built around an ESP32 microcontroller running the open-source WLED software, the assembly combines woodworking, basic electronics, and wireless configuration into a project that is both instructive and visually striking.

The core of this DIY is an ESP32-D1 mini microcontroller, chosen for its built-in Wi-Fi, processing capability, and compatibility with WLED, a flexible lighting control platform. WLED runs on the ESP32 and provides a web-based interface for configuring LED lighting effects, colors, and patterns without requiring deep coding knowledge. In this tree, WLED’s audio-reactive mode analyzes sound input and drives the LED effects so that the lights flash, pulse, and change in response to music playing nearby. A small INMP441 digital microphone module is wired to the ESP32 to capture ambient audio, enabling this interaction between the physical decorations and sound.

Designer: DB Making

Structurally, the tree is made from common materials. A wooden frame cut into the triangular silhouette of a Christmas tree serves as the backbone. Addressable WS2812B LED strips are mounted along this frame, arranged to expose each LED through a round opening in a corresponding ping-pong ball acting as the light diffuser. These balls soften and spread the light emitted by each LED, creating a uniform glow rather than pinpoint beams. A 3D-printed jig assists in cutting consistent openings in the balls, which are then glued in orderly rows to complete the tree’s face.

Electronic assembly happens on a small perfboard, where the ESP32, microphone module, power connector, and LED strip connector are soldered together. Wiring the LEDs to follow the correct data flow direction and securing the controller board in a neat enclosure ensures reliable operation. Once built, a 5V DC supply powers the tree, and the ESP32 is connected to a computer or network to install WLED firmware via the official web installer. Within WLED’s setup interface, users enter Wi-Fi credentials, set the total number of LEDs, assign the correct data pin, and enable audio-reactive settings along with microphone parameters.

After configuration, the tree’s lighting can be controlled from a smartphone or computer, allowing owners to adjust brightness, choose effects, or simply enjoy music-responsive visuals. The sound-reactive mode responds to ambient audio captured by the microphone, translating beats and rhythms into dynamic light patterns that bring an interactive element to holiday decorations.
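
For builders who would rather script the tree than tap through the app, WLED also exposes an HTTP JSON API on the local network. Below is a minimal Python sketch against WLED’s /json/state endpoint; the IP address is a placeholder for your own tree, and the effect ID is left as a parameter because the audio-reactive effect numbers vary between firmware builds (the full effect list is available from /json/eff or the WLED UI).

import requests

WLED_IP = "192.168.1.50"  # placeholder: your tree's address on the local network
STATE_URL = f"http://{WLED_IP}/json/state"

def set_tree(on=True, brightness=128, effect_id=0):
    # Push a state update to WLED. "bri" is master brightness (0-255);
    # "fx" selects an effect for segment 0 by its index in the effect list.
    payload = {"on": on, "bri": brightness, "seg": [{"id": 0, "fx": effect_id}]}
    response = requests.post(STATE_URL, json=payload, timeout=5)
    response.raise_for_status()
    return response.json()

# Example: turn the tree on at half brightness with effect 0 ("Solid").
print(set_tree())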

Beyond its immediate festive appeal, the project provides a learning platform for hobbyists seeking hands-on experience with microcontrollers, programmable lighting, and real-time sensor integration. Because it uses off-the-shelf components and open-source software, builders can expand or modify the design by increasing the number of LEDs, experimenting with alternative diffuser materials, or adding networked effects.

Your Unplayable CD Collection Just Got a $2,000 Solution

Remember when we all decided CDs were dead? When we shoved those jewel cases into storage bins and declared ourselves streaming converts, convinced that digital files and algorithm-curated playlists were the future? Here’s the embarrassing part: I have a stack of CDs sitting on my shelf right now with absolutely no way to play them. And I’m not alone. People are still buying CDs, especially in the K-pop world where physical albums are part of the whole experience, complete with photo cards, posters, and elaborate packaging. We’re collecting music we can’t even listen to properly. Pro-Ject Audio’s new CD Box RS2 Tube might actually fix that problem, and honestly, it’s making me want to finally do something about my unplayable collection.

This isn’t some nostalgic throwback designed to capitalize on retro vibes. Pro-Ject built this thing with the kind of serious engineering usually reserved for audiophile turntables. The Austrian company’s latest entry in their top-tier RS2 line is a top-loading CD player with a fully balanced tube output stage, featuring two premium E88CC vacuum tubes that add warmth and fluidity to digital playback. Think of it as the vinyl listening experience but for your CDs. You know that organic, emotionally engaging sound that makes you actually feel the music instead of just hearing it? That’s what these tubes are doing to your digital audio.

Designer: Pro-Ject Audio

What makes this particularly interesting is the SUOS DM-3381 Red Book drive at its core. This isn’t just any CD mechanism thrown into a pretty case. SUOS-HiFi, which used to be StreamUnlimited Optical Storage, was founded by former Philips CD engineers based near Vienna. These are literally some of the people who helped invent CD technology in the first place. The drive uses a BlueTiger CD-88 servo with predictive algorithms that can maintain accurate data retrieval even when your discs are scratched or less than pristine. We’ve all got a few of those CDs that have seen better days, right?

The integrated Texas Instruments PCM1796 DAC is where things get even more interesting. This means the CD Box RS2 Tube can connect directly to any amplifier with analog inputs without needing a separate digital-to-analog converter. The DAC operates in a fully differential configuration and feeds straight into that balanced tube output stage for maximum signal integrity. You get both XLR balanced outputs and single-ended RCA connections, each with its own dedicated output stage, so you can run both simultaneously without any impedance issues. And if you’re the type who already has a favorite external DAC, there are optical and coaxial digital outputs too.

The build quality is exactly what you’d expect from a product in this range. The entire chassis is precision-machined from aluminum, available in either silver or black finishes, and it’s absolutely gorgeous. The top-loading design means you actually get to interact with your music in a tactile way that tapping a screen just can’t match. There’s something satisfying about placing a disc on the magnetic clamp and watching it load. The big LCD display shows track information and CD-text when available, and it comes with a full aluminum remote control that feels substantial in your hand.

Power delivery matters for any high-end audio component, and Pro-Ject addressed this by using an external power supply to keep transformer noise away from the tube circuitry. For those who want to go even further down the rabbit hole, the player is compatible with Pro-Ject’s Power Box RS2 Sources linear power supply upgrade, which can improve soundstage depth and background silence.

What’s really striking about the CD Box RS2 Tube is how it positions physical media not as obsolete technology but as a deliberate choice for people who care about how music sounds and feels. The resurgence of CD collecting, particularly driven by fandoms like K-pop where physical albums are collectible art objects, proves that people still want to own their music. There’s something to be said for building a curated collection that reflects your actual taste rather than what an algorithm thinks you should like. And if you’re going to own CDs, why not finally be able to play them through something that does them justice?

The CD Box RS2 Tube is set to arrive at UK and EU dealers this month, priced at £1,749 or €1,900. US pricing hasn’t been announced yet, but it’s clearly positioned as a premium product for people who take their listening seriously. Maybe it’s time those of us with unplayed CD collections finally gave them the player they deserve.

TCL’s $199 Projector Puts a 120-Inch Screen in Any Room (And Costs Less Than AirPods Pro)

Home cinema has never been this affordable. The TCL Projector C1 brings 120-inch screen entertainment to your living room for just $199, making it cheaper than the AirPods Pro, which sounds wild considering one’s a tiny pair of earbuds and the other’s an entire cinema in your house. This isn’t a stripped-down compromise either. The projector packs Google TV, automatic focus, and a built-in battery into a portable package.

What makes this pricing remarkable is the complete feature set TCL has managed to include. Most projectors at this price point require external speakers, lack smart TV capabilities, or need constant manual adjustments. The C1 combines all these essentials in one device. You can set it up anywhere in your home, cast content from your phone, and enjoy Dolby Audio without buying additional equipment. For the cost of a mid-range streaming device, you’re getting an entire home theater system.

Designer: TCL

TCL just launched their C1 projector in the UK for £249.99, though Americans get an even better deal at $199. I keep staring at that number trying to figure out where the catch is. You can project a 120-inch image for less than the cost of a pair of premium wireless earbuds. A full-size screen that dwarfs even the most absurdly large televisions, available for impulse-purchase money. And this isn’t some limited Black Friday offer either – it’s the MSRP on the box.

Obviously they cut corners somewhere. The projector outputs 230 ISO lumens, which isn’t bright by any stretch. You can still watch movies and shows just fine; the real caveat is that you’ll need near-total darkness – simply drawing one curtain in the afternoon won’t cut it, and watching a game with the lights on may prove less than satisfying – but hey, two hundred bucks. Spend a few more on blackout curtains and you’re good. The LCD panel delivers 1080p natively with 4K support, and you need about 2.5 meters of throw distance to hit that 120-inch maximum.
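
Those two figures let you sanity-check the optics yourself. Assuming a standard 16:9 image, a quick back-of-the-envelope calculation in Python puts the implied throw ratio just under 1:1:

import math

diagonal_in = 120   # TCL's quoted maximum image size
distance_m = 2.5    # quoted throw distance for that size

# Convert the diagonal to meters, then to image width for a 16:9 aspect ratio.
width_m = diagonal_in * 0.0254 * 16 / math.hypot(16, 9)
throw_ratio = distance_m / width_m

print(f"image width: {width_m:.2f} m, throw ratio: {throw_ratio:.2f}:1")
# -> image width: 2.66 m, throw ratio: 0.94:1

In other words, the C1 needs slightly less distance than the image is wide, which makes the 120-inch claim plausible for a medium-sized room.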

Google TV comes baked in, which matters more than it should. Most cheap projectors force you to plug in a Chromecast or Fire Stick, adding another $50 and another remote to lose between your couch cushions. Netflix certification means proper app support instead of janky workarounds or browser-based streaming that buffers at the worst possible moments. Auto-focus and keystone correction handle the setup pain points that make most people abandon projectors after one frustrating evening. I’ve spent twenty minutes adjusting focus wheels on projectors that cost ten times this much, so having it happen automatically feels like cheating.

TCL included a 60 Wh battery, which gets you through a two-hour movie without trailing extension cords across your living room. Weighing 1.8 kilograms means you can actually carry this thing around from your living room to your bedroom. The integrated adjustable stand folds into the body instead of requiring a separate tripod purchase, and you can even rotate the C1 to face your ceiling for in-bed entertainment. HDMI and USB-A ports cover the basics, Wi-Fi 5 handles streaming without constant buffering, and Bluetooth 5.1 lets you pair actual speakers because that 8-watt built-in option with Dolby Audio support exists purely for emergencies. Nobody’s watching Dune on an 8-watt speaker and pretending they’re satisfied.
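
The battery claim is easy to sanity-check too. Taking the quoted figures at face value, a 60 Wh pack lasting a two-hour movie implies roughly 30 W of average system draw, which seems believable for a 230-lumen LED light source plus electronics. The arithmetic, for the skeptical:

battery_wh = 60     # quoted battery capacity
movie_hours = 2     # quoted runtime for one movie

# Average power the whole system can draw and still hit the claim.
print(f"implied average draw: {battery_wh / movie_hours:.0f} W")  # -> 30 W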

Projectors have always occupied this frustrating middle ground where cheap ones are genuinely terrible and good ones cost mortgage payment money. You either bought a $79 pico-projector that barely functioned or dropped $2,000 on something that required a dedicated room and professional calibration. TCL figured out that most people just want to watch movies on a big screen without taking out a loan or earning an engineering degree. The brightness limitations mean this won’t replace your main TV for daytime viewing, but it turns movie nights into actual events instead of just sitting on your couch scrolling through Netflix for forty minutes. Gaming on a 100-inch screen changes how you experience everything from racing games to sprawling RPGs. Your living room becomes the place where people actually want to gather instead of everyone staring at their phones in different corners.

Two hundred dollars removes most of the decision-making anxiety. You can buy this on a whim and if it doesn’t work out, you’re not crying into your pillow about wasted money. Although, considering TCL’s track record, this one might actually work out to be as good as, if not more reliable than, a 50″ smart TV that may cost 4-5x more.

AI-powered headphones for private conversations even in the most crowded places

We’ve come a long way when it comes to the noise isolation used in headphones and earbuds. The Active Noise Cancellation technology in current-generation audio accessories has reached a level where ANC strength adapts to the ambient noise environment. A handful of brands even go the distance and switch on transparency mode automatically when someone is talking to you. That’s a nice touch, but in a crowded environment you’ll still hear the voices of everyone else in the vicinity.

That could change with an innovation that aims to eliminate unwanted voices from a conversation. When you are talking to a friend on the street, you’ll hear only their voice, while every other voice around you is muted. This innovation will not only be useful as a daily driver, it will also assist people with hearing impairments in hearing better. The initial prototype, developed by a group of researchers at the University of Washington, is known as the “proactive hearing assistant,” and it isolates only the conversation partner’s voice. It looks promising.

Designer: University of Washington

The AI-powered headphones do all the filtering automatically, without any manual input, a potent capability that current-gen headphones could hugely benefit from. The speech-isolating technology suppresses voices that don’t match the pattern of a turn-taking conversation: the on-board AI model keeps tabs on timing patterns and filters out anything that doesn’t fit. Applications of this tech need not be limited to audio accessories and hearing aids; it could also be integrated into wearables like smart glasses or VR headsets. The most practical use would be in crowded places where you really have to focus on the person you’re talking to.

According to senior author Shyam Gollakota, “Our insight is that when we’re conversing with a specific group of people, our speech naturally follows a turn-taking rhythm. And we can train AI to predict and track those rhythms using only audio, without the need for implanting electrodes.” The current prototype supports one wearer and up to four other speakers, which is impressive, even more so when you factor in the near lag-free experience. The team is currently testing two models: one runs a “who spoke when” check, identifying who is speaking at any moment and flagging overlap between speakers; the other cleans the raw signal and feeds real-time isolated audio to the user. The latter, so far, has scored well with the 11 participants in the study.
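
To make the turn-taking idea concrete, here is a deliberately naive Python sketch, our illustration rather than the researchers’ model. It assumes some separation front end has already produced frame-level voice-activity arrays for each nearby voice, and it keeps only the sources that speak when the wearer is silent instead of over them:

import numpy as np

def turn_taking_score(wearer_vad, source_vad):
    # Toy partner-likeness score from frame-level voice activity.
    # Conversation partners mostly speak while the wearer is silent
    # (turn-taking); unrelated talkers speak regardless of the wearer.
    def rate(mask):
        return source_vad[mask].mean() if mask.any() else 0.0
    return rate(~wearer_vad) - rate(wearer_vad)

def keep_partners(wearer_vad, sources, threshold=0.25):
    # Keep sources whose timing fits a turn-taking rhythm. The
    # threshold is arbitrary and tuned only for this synthetic demo.
    return [name for name, vad in sources.items()
            if turn_taking_score(wearer_vad, vad) > threshold]

# Synthetic demo: wearer and partner alternate turns; babble is random.
rng = np.random.default_rng(0)
n = 400
wearer = np.zeros(n, bool)
partner = np.zeros(n, bool)
for start in range(0, n, 100):
    wearer[start:start + 40] = True        # wearer talks first...
    partner[start + 45:start + 90] = True  # ...partner replies after a pause
babble = rng.random(n) < 0.5               # background talker, random timing

print(keep_partners(wearer, {"partner": partner, "babble": babble}))
# -> ['partner']

A real system has to do this on raw, overlapping audio in real time, which is exactly why the “who spoke when” and signal-cleaning models are the hard part; the point here is only how far timing alone can go in separating a conversation partner from background chatter.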

For now, the prototype uses basic over-ear headphones loaded with extra microphones, and the team is working on slimming down the hardware. Alongside the research, small chips capable of running these AI models are being developed so the system can eventually fit inside hearing aids or earbuds. So, are we ready for a future where intelligent hearing is part of our daily lives?
