Instagram vs Impact: How Design Awards Separate Digital Noise from Real Value

Yanko Design’s new podcast, Design Mindset, continues to bring fresh perspectives from design leaders around the world. Every week, this series (Powered by KeyShot) explores critical questions shaping the future of design, from recognition and validation to the evolving role of awards in our digital age. Episode 15 tackles a particularly timely subject: whether design awards still hold relevance when every designer has Instagram, Behance, and LinkedIn at their fingertips.

Jova Zec, Vice President of Red Dot Awards, joins host Radhika Seth for a candid discussion about the changing landscape of design recognition. As the second generation leading one of the world’s most prestigious design competitions (founded by his father, Professor Dr. Peter Zec), Jova brings a unique vantage point on how awards have transformed over three decades, from insider validation to global influence. He’s actively reshaping what recognition means in 2025 and beyond, viewing it as a responsibility rather than simply a reward.

Download your Free Trial of KeyShot Here

From Visibility to Validation: What Awards Mean Now

Jova recalls a time when getting recognized meant appearing on TV or in newspapers. For designers especially, having their own platform was nearly impossible. But now, with Instagram profiles and countless social media options, the landscape has completely changed. This shift has fundamentally altered what design awards need to offer the creative community.

The focus has pivoted from providing visibility to providing qualification. Awards have evolved from megaphones to validators, from amplifiers to authenticators. Jova explains that the emphasis now lies on being qualified by Red Dot as somebody who produces work of genuine value. In a world drowning in content, that expert validation proves a designer’s work holds timeless value beyond popularity metrics and digital noise.

The Four Qualities That Separate Impact from Noise

Red Dot evaluates submissions based on four core qualities: functionality, use, responsibility, and seduction. Interestingly, Jova highlights seduction as perhaps the most important. This quality creates the emotional connection that makes consumers genuinely want a product. While functionality and responsibility might seem self-explanatory, seduction is what really drives desire and adoption in the marketplace.

This evaluation approach allows Red Dot to look past short-term viral gimmicks that might rack up likes online. The judges evaluate products on timeless criteria that have remained consistent across the award’s history. Washing machines, for instance, might all look similar to casual observers, but there’s often extraordinary design work happening in the details. Quality never changes; it’s about the experience. If you experience a quality moment with a product, that experience stays the same whether it happened 50 years ago or will happen 100 years from now.

Meta-Categories: Recognizing Invisible Excellence

One of Red Dot’s most significant evolutions has been the introduction of meta-categories. While core principles remain constant, these categories allow Red Dot to highlight specific aspects of design that deserve elevation. The innovative category, for example, recognizes technologically advanced ideas that may lack polish but carry revolutionary potential. Red Dot has also introduced a sustainability meta-category to encourage environmental responsibility.

When Radhika presents Jova with a hypothetical scenario (a sustainable packaging startup with genuinely innovative biodegradable materials that’s technically brilliant but doesn’t photograph beautifully), his response perfectly illustrates this approach. Such a product would win both the innovative award for finding a solution that could revolutionize the industry and the sustainability award for its environmental impact. Winners of these meta-category awards then gain access to a network that includes experts in visual and seductive design, fostering collaboration that can yield products blending sustainable innovation with high aesthetic quality. Leaving such innovation unrecognized is never an option.

Validation Matters at Every Career Stage

The conversation turns personal when discussing how recognition affects designers differently throughout their careers. Jova’s observation is insightful: the importance to the person themselves always stays the same. Whether you’re a design legend or an emerging talent, validation matters deeply.

For established professionals and design legends, winning a Red Dot confirms they’re still performing at the level they believe they are, that they remain in the mindset of the current generation. For young designers trying to establish themselves, awards serve as career kickstarters. Jova shares stories of students who took part in Red Dot, won something, and immediately got employed by major companies wanting their design talent. Beyond career advancement, recognition provides crucial feedback from professionals who aren’t involved in your project and may have never met you before. This validation boosts self-esteem and helps designers affirm they’re on the right path, especially when they’ve just created something great and need confirmation to continue in that direction.

Recognition as Responsibility: Creating a Better World

The overarching theme throughout the conversation is that recognition has evolved significantly in its purpose and meaning. As Jova reflects, he’s watched recognition transform from something designers hoped for to something they expect, from validation to influence, from celebration to obligation. Today, every designer has a platform, every product gets shared instantly, and everyone’s fighting for the same attention. The question isn’t whether awards still matter; it’s whether they’re measuring the right things.

When asked during the rapid fire round what recognition should ultimately create, Jova offers two words: a better world. The biggest misconception designers have about awards? That it’s all a scam. The most overrated aspect of design recognition today? Just designing something that is very popular but lacks usefulness. This episode of Design Mindset crystallizes something important: in an age when anyone can go viral and content floods every feed, expert validation becomes more critical than ever. Awards that maintain rigorous standards and evaluate based on timeless principles fulfill a vital function, steering the design community toward values that matter: quality, responsibility, innovation, and seduction. The future belongs to awards that actively create conditions for great design to flourish.

Design Mindset, Powered by KeyShot, premieres every week with new conversations exploring the minds shaping the future of design. Listen to the full episode with Jova Zec to hear more insights on recognition, Red Dot’s evolution, and what makes design truly timeless.

Download your Free Trial of KeyShot Here

The post Instagram vs Impact: How Design Awards Separate Digital Noise from Real Value first appeared on Yanko Design.

LEGO And Creality Come Together in This Incredibly Detailed Ender-Inspired 3D Printer Model

LEGO and 3D printing occupy similar creative territory, both letting you turn ideas into physical objects through systematic processes. Yet despite this natural kinship, there’s never been an official LEGO model of the specific machine that’s currently democratizing small-scale manufacturing. This fan submission fixes that gap with a recognizably Ender-inspired design that captures both the utilitarian aesthetic and basic kinematic structure of Creality’s popular printer lineup.

Unlike some ambitious LEGO projects (there’s a working LEGO Turing machine out there made from 2,900 bricks), this build doesn’t actually function, but that’s not really the point. Someone unfamiliar with 3D printing could assemble this and understand how Cartesian motion systems work, how the hotend assembly relates to the build plate, and why those vertical lead screws matter for Z-axis stability. For people who already own an Ender or similar machine, it’s more about the novelty and nostalgia of seeing familiar hardware translated into a tabletop collectible to admire and cherish.

Designer: Guris14

Paying homage to the Ender 3 is fitting, since it was literally the first 3D printer for so many people, much like an entire generation having a Nokia as their first phone. Creality sold hundreds of thousands of these things, maybe millions at this point, and the design became the default mental image of what a 3D printer looks like for an entire generation of makers. That boxy aluminum frame, the single Z-axis lead screw on earlier models (this LEGO version appears to reference the dual-screw V2), the Bowden extruder setup with that blue PTFE tube snaking from the frame-mounted motor to the hotend. That characteristic black and silver color scheme with blue accent components has become as much a visual shorthand for “budget 3D printer” as the beige tower was for 90s PCs. Designer Guris14 scaled the model down from the Ender 3 V2’s actual 220x220x250mm build volume to something desk-friendly, but kept the proportions honest enough that you immediately recognize what you’re looking at.

What’s impressive is how the mechanical systems translate into LEGO’s vocabulary without completely abandoning accuracy. The Z-axis uses what appears to be LEGO’s ribbed hose pieces to represent lead screws, with the gantry able to move up and down the vertical supports. The X-axis gantry rides on a black beam that mimics the 2040 aluminum extrusion found on real Enders, while the hotend assembly hangs from a carriage with that signature blue Bowden tube curling back toward the extruder. The build plate sits on a Y-axis assembly with its own lead screw mechanism, and there’s even a LEGO logo on the build plate, like perfectly placed branding!

Flip the model and you’ll find representations of the motherboard and power supply tucked beneath the build plate, exactly where Creality positions them on the actual hardware. There’s that angled LCD screen mount on the front right corner, positioned just like the stock Ender setup. Even the spool holder perched on the top frame gets included, which is the kind of completeness that separates a thoughtful recreation from a surface-level approximation. You could hand this to someone who’s never seen a 3D printer and they’d walk away with a surprisingly accurate mental model of how these machines are structured.

The project currently sits on the LEGO Ideas website, where fans share their own creations and vote for their favorites. Builds lucky enough to hit the 10,000-vote mark move to the review stage, where LEGO actually considers them for production. That’s always been the tricky part with Ideas submissions. You need a concept that’s simultaneously niche enough to excite enthusiasts but broad enough that LEGO thinks they can sell tens of thousands of units through their retail channels. A 3D printer model lives in an interesting space there. The maker community overlap is real and passionate, but you’re also asking LEGO to produce a set celebrating a technology that competes with their own manufacturing process in certain contexts.

Still, LEGO has greenlit plenty of sets that celebrate tools and technology. The Typewriter, the Polaroid camera, the various Technic construction vehicles, all of these acknowledge that people enjoy building detailed models of machines they find interesting or useful. A 3D printer fits that pattern perfectly, especially as these devices become more common in homes and schools. The educational angle writes itself: here’s a hands-on way to understand additive manufacturing without dealing with bed leveling or filament moisture. Whether that’s enough to get LEGO’s product team on board is another question entirely, but stranger things have made it through the Ideas gauntlet. The NASA Apollo Saturn V started as a fan submission. So did the ship in a bottle.


This Weird $12 Clip-On Gamepad Turns Your Smartphone Into a Game Boy Color

Playtiles looks like something that shouldn’t work. A thin piece of plastic with buttons, no electronics inside, sticking to your smartphone screen like a temporary tattoo. Yet this $12 accessory has managed to capture what expensive gaming phones and elaborate clip-on controllers often miss: the pure, uncomplicated joy of pressing actual buttons while playing retro-style games. The device ships with access to a curated library of indie titles that feel lifted straight from the Game Boy Color era.

The design strips away everything modern mobile gaming has become. No account setup, no firmware updates, no charging cables. You place it on your screen where the virtual controls appear, press the buttons, and play. Thousands of micro suction cups hold it in place during gameplay, and when you’re done, it slides back into your wallet next to your credit cards. After months of anticipation since July’s pre-order launch, units are now reaching backers who wanted to rediscover what handheld gaming felt like before touchscreens took over.

Designer: Playtile

The buttons work through capacitive conduction, using your own body’s electrical properties to register a press on the screen beneath. It’s a completely unpowered system, which in a world of constant charging is a breath of fresh air. The entire polycarbonate unit weighs just 0.2 ounces and measures 2.68 by 1.57 inches, making it smaller than a credit card. This isn’t trying to compete with a Backbone or Razer Kishi; those are full-fledged peripherals that turn your phone into a console hybrid. Playtiles is a fundamentally different idea, an accessory so unobtrusive it feels more like a guitar pick than a piece of hardware.

Of course, the hardware is only half the story. The back of every Playtiles has a QR code that launches a browser-based OS, completely sidestepping the app stores. This is an incredibly shrewd move, giving the creators a direct channel to their audience without platform fees or gatekeepers. Early adopters who bought the Season 1 bundle get a new, bite-sized retro game delivered every week for twelve weeks, all built in GB Studio. This transforms a simple controller into a curated content platform. It solves the biggest problem with mobile gaming, which is finding good games amidst a sea of ad-riddled clones. You get a handpicked library that you know is designed perfectly for the D-pad and two-button layout.

You are obviously not going to be playing Genshin Impact on this thing. The two-button constraint is a feature, a deliberate design choice that forces a return to the focused game mechanics of the 8-bit and 16-bit eras. It works on any phone with a screen wider than 68mm, so long as the game lets you reposition the on-screen controls to align with the controller. That’s the key requirement. For $12, it’s an impulse buy that feels like a low-risk experiment in nostalgia. In a market where dedicated handhelds from companies like Anbernic command prices north of $100, the Playtiles carves out its own space by being almost disposable in price yet surprisingly robust in its concept.


Jetbeam E28 Review: The Swiss Army Knife of EDC Flashlights Finally Exists

Most flashlights ask you to choose. Throw or flood. Pocket size or runtime. A simple beam or specialty features. Jetbeam’s E28 walks into the room and suggests you stop choosing altogether. This flat, brick-shaped EDC light packs dual independently controlled white beams (one flood, one throw), a 365 nm UV emitter, a 520 nm green laser, an RGB side strip with nine modes, and a 7,000 mAh power bank into a single 251-gram body. It is the sort of design that makes you wonder whether the engineers were trying to solve real problems or just win a feature-count contest.

Here’s the thing: the spec sheet sounds like overkill until you actually think about the situations where you need more than a basic beam. Checking a hotel room for cleanliness with UV. Using the laser as a presentation pointer by day and a pet toy by night. Mounting the light magnetically under a car hood while the flood beam lights your work and the throw beam spotlights a distant part number. The E28 is betting that enough people want a true multi-tool in flashlight form, and the early reviews suggest Jetbeam might be onto something.

Designer: Jetbeam

Click Here to Buy Now: $87.45 $159.95 (45% off). Hurry, only a few left!

Two 18650 cells sit inside a flat aluminum body measuring 107.6 × 48 × 26.6 mm, delivering 7,000 mAh of total capacity. That translates to 8.3 hours at 500 lumens in flood mode or 13.2 hours at 300 lumens in throw mode, which are the runtimes that actually matter when you cannot swap batteries mid-hike. Moonlight mode allegedly hits 350 hours, though nobody is realistically running a light that dim for two weeks straight. The dual-cell setup adds weight, pushing the E28 to 251 grams with batteries installed, but that heft comes with the benefit of never worrying about your light dying during an evening walk or a weekend camping trip.
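As a rough sanity check on those figures (assuming a nominal 3.6 V cell voltage and continuous output at the quoted level, neither of which is stated on the spec sheet), the implied power draw and luminous efficacy land in a plausible range for a modern LED light:

```python
# Back-of-the-envelope check of the quoted runtimes. Capacity and runtime
# figures come from the spec sheet; the 3.6 V nominal voltage is an assumption.
CAPACITY_MAH = 7000
NOMINAL_V = 3.6

energy_wh = CAPACITY_MAH / 1000 * NOMINAL_V  # ~25.2 Wh on board

for lumens, hours in [(500, 8.3), (300, 13.2)]:
    watts = energy_wh / hours   # average electrical draw over the run
    efficacy = lumens / watts   # implied lumens per watt, emitter plus driver
    print(f"{lumens} lm for {hours} h -> ~{watts:.1f} W, ~{efficacy:.0f} lm/W")
```

Roughly 3 W for 500 lumens works out to around 160-165 lm/W, which is believable for efficient emitters driven well below their maximum, so the quoted runtimes at least pass the arithmetic test.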

Jetbeam gave each beam its own proper optics instead of cramming compromised emitters into a too-small head. The flood side uses a 7070 LED with a wide, shallow reflector, maxing out at 3,300 lumens (briefly, before stepping down to 1,500 then 1,000 as heat builds). It is a wall of light that illuminates everything within 10 meters with zero shadows, exactly what you want for close work or navigating a dark campsite. The throw channel uses a Luminus SFT-42R with a smooth, focused reflector, hitting 2,480 lumens and reaching 365 meters with a 33,375-candela hotspot. That is search-and-rescue level throw from a light you can slip into a jacket pocket. Running both channels simultaneously gives you a beam profile with bright center punch and complete peripheral coverage, which is how dual-beam lights should work but rarely do because most manufacturers cheap out on one emitter or the other.
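Those throw numbers are internally consistent, for what it’s worth. The ANSI FL1 convention defines beam distance as the range at which illuminance falls to 0.25 lux (roughly full-moon brightness), which by the inverse-square law is just twice the square root of peak candela:

```python
import math

# ANSI FL1 beam distance: the range where illuminance drops to 0.25 lux.
# Inverse-square law: lux = candela / d**2, so d = sqrt(candela / 0.25).
def fl1_throw_m(candela: float) -> float:
    return math.sqrt(candela / 0.25)  # equivalently 2 * sqrt(candela)

print(round(fl1_throw_m(33375)))  # E28's quoted 33,375 cd hotspot -> prints 365
```

Plugging in the E28’s 33,375-candela hotspot gives exactly the quoted 365 meters, so the spec sheet is at least quoting the standard metric rather than a marketing number.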

A rotary dial handles mode switching, which immediately sets this apart from the “click seventeen times to find strobe” nonsense that plagues most multi-mode lights. Rotate to flood, throw, dual-beam, UV, laser, or RGB, then tap the side button to turn on or cycle brightness. It takes maybe ten minutes to learn and then becomes completely intuitive. You can operate it one-handed even with gloves because the dial has positive detents and the button is chunky and easy to find by feel. Jetbeam clearly spent time thinking about how people actually use lights in the field instead of just designing a UI that looks good on paper.

The UV emitter sits on one side at 365 nm, which is proper ultraviolet (not the 395 nm purple wash that cheap lights use). This wavelength makes currency security features glow, reveals pet stains on carpets, highlights HVAC leak-detection dye, and generally makes invisible contaminants visible. If you work in automotive, HVAC, or forensics, this is a tool you already carry separately. If you travel frequently and care about hotel cleanliness, same deal. For everyone else, it is a fun party trick that might come in handy twice a year.

The 520 nm green laser sits opposite, useful for presentations, pointing out distant landmarks, or entertaining pets. It is low-powered enough to be safe but bright enough to be visible across a parking lot at night.

The RGB strip runs along the side with nine different modes: solid colors, breathing patterns, meteor effects, rainbow flow. Red light preserves night vision when you are reading maps. Multicolor modes create ambient lighting at camp or act as fill light for photos. Solid white functions as a secondary task light. Some people will use this constantly; others will turn it on once, say “neat,” and forget it exists.

Aerospace-grade aluminum with HA III hard anodizing means the body can take scratches, drops, and general abuse without looking like it fell off a truck. The machining cuts along the flat sides double as heat fins and grip texture, which is functional design instead of just aesthetics. IPX8 waterproofing handles 2 meters of submersion, and the USB-C port hides behind a sealed rubber cover. The magnetic tail holds firm on steel surfaces even when the light is pumping out heat on high mode, making hands-free work actually practical. A removable clip mounts in either direction for cap-brim carry, backpack straps, or belt attachment, and the base plate is compatible with GoPro-style action camera mounts, so you can stick this on bike handlebars, helmets, or quick-release brackets.

The power bank function turns 7,000 mAh of onboard capacity into emergency phone charging via USB-C. You can fully charge most phones at least once, which makes the E28 useful during power outages or long days away from outlets. It is not replacing a dedicated battery bank, but as something that lives in your car or go-bag anyway, having that backup option adds real value. The RGB strip shows battery status for five seconds on power-up, cycling through colors to indicate remaining charge, which is smarter than trying to guess voltage by how bright the beam looks.

Jetbeam ships the E28 with two 3,500 mAh 18650 cells, a USB-C cable, lanyard, mounting clip, hardware, and a hex wrench, so you can use it immediately without buying accessories. Pricing lands at $87.45, with two color options to choose from (a tactical green and a classic grey), which feels reasonable for a light that consolidates a flood beam, throw beam, UV source, laser pointer, and power bank into one 251-gram package. If you already carry multiple single-purpose tools, the E28 is the Swiss Army knife consolidation you did not know you needed. If your lighting needs are simple, a $25 single-beam EDC or even your phone’s flashlight will serve you fine. But for anyone who regularly finds themselves thinking “I wish I had X tool right now,” Jetbeam built exactly that.

Click Here to Buy Now: $87.45 $159.95 (45% off). Hurry, only a few left!


This LEGO Retro TV Build Shows You How Cathode Ray Tubes Actually Worked

Before flat screens and streaming services, television sets were hulking pieces of furniture that commanded respect and curiosity in equal measure. FMDavid’s LEGO Ideas submission celebrates these beloved artifacts with a build that goes far beyond surface-level nostalgia, diving deep into the mechanical heart of what made these cathode ray tube televisions actually work.

The exterior immediately transports viewers back several decades with its mint green housing, classic rabbit ear antenna, and the unmistakable SMPTE color bars displayed on its gently curved screen. Remove the back panel, however, and the true engineering achievement reveals itself. Every major component of a vintage television has been faithfully recreated in brick form, from the deflection coils wrapped around the CRT neck to the colorful wiring snaking between vacuum tubes and capacitors along the chassis floor.

Designer: FMDavid

And that’s what’s so fascinating about this build – the inner guts. Most retro TV builds in LEGO form stop at the cabinet and screen. Slap on some rabbit ears, throw in a color bar pattern, call it a day. FMDavid apparently decided that approach was for amateurs. The real story here happens when you pop off that back panel and discover what amounts to a miniature engineering degree compressed into approximately 200 square studs of space. The cathode ray tube dominates the interior volume exactly as it would in an actual 1960s Zenith or RCA, which tells me this builder actually studied reference material instead of just vibing on childhood memories. Those deflection coils wrapping around the tube neck aren’t decorative. They’re positioned where they’d actually sit in a functioning set, using what appears to be copper-colored flexible elements or possibly custom printed tiles to simulate the electromagnetic coils that would bend electron beams across phosphor screens 15,734 times per second.

This build works as both display piece and educational tool. The SMPTE color bars on screen are a nice touch that any broadcast engineer would immediately recognize. Those bars weren’t just pretty patterns. They were precision test signals containing specific luminance and chrominance values that let technicians calibrate everything from color temperature to sync pulse timing. The curved screen profile captures that subtle convex bulge of real CRT glass, which existed because a flat surface would implode under atmospheric pressure once you evacuated the tube interior to near-vacuum conditions. Physics demanded that curve, and FMDavid respected it.

The exterior styling nails the mid-century aesthetic with that sage green cabinet color and brown wooden legs angled outward in classic Danish modern furniture tradition. Those aren’t just legs, they’re cultural signifiers of an era when televisions were statement furniture pieces that families planned their living rooms around. The two control knobs on the right panel would’ve been your channel selector and volume control, back when changing channels meant physically walking across the room and turning a mechanical detent switch through twelve discrete positions. No endless scrolling through 500 cable channels, just ABC, NBC, CBS, and maybe PBS if you were lucky.

The component density here feels right for a television set from the tube era without overwhelming the interior space. Real TV sets from the 1960s packed dozens of components into their cabinets, handling everything from IF amplification to horizontal output to audio processing. FMDavid’s arranged the internal elements so you can actually see the relationship between the major systems. The vacuum tubes evoke the old-timey technology, and the transformers with their ribbed heat sinks sit where you’d expect them, probably using modified tile or plate stacks to create those distinctive cooling fins that kept components from cooking themselves to death during long viewing sessions. The cylinders at the bottom represent capacitors, which in real sets would filter high-voltage DC and store energy for the horizontal deflection circuit. A capacitor failure in a vintage TV could cost you either your picture width or your vertical hold, sending the image rolling endlessly up the screen. Heck, there’s even an RCA output cluster on the back, with the yellow jack presumably for composite video and the white and red for the left and right audio channels.

The build currently sits at 1,136 supporters on LEGO Ideas, which means it needs another 8,864 votes to hit the 10,000 threshold for official review. That’s how the Ideas platform works. You need 10,000 people to vote for your concept within a limited timeframe, then LEGO’s internal review board evaluates it for commercial viability, piece count economics, licensing considerations, and market fit. FMDavid’s got 418 days remaining to gather those supporters. If you want to see this hit production shelves, head over to the LEGO Ideas website, create a free account if you haven’t already, and cast your vote. No money required, just a few clicks to tell LEGO this deserves manufacturing consideration alongside other fan-designed sets.


GameSir’s $79 MFi-compatible Controller Lets You Play PC & Xbox titles on an iPhone or iPad Mini

Backbone has enjoyed relatively comfortable dominance in the iPhone controller market, but GameSir just made things considerably less comfortable. The GameSir G8 Plus MFi arrives as the company’s first MFi-certified product, bringing proven gaming hardware expertise to Apple’s ecosystem at an aggressive $79.99 price point. This puts GameSir $20 below the established market leader while matching many of its core features. The competitive landscape matters here because Backbone now faces much stronger competition from companies like GameSir, Gamevice, and Razer, making its premium positioning harder to justify. GameSir counters Backbone’s sleek design and app integration with Hall Effect technology, customizable faceplates, and dual back buttons. The G8 Plus MFi also supports both iOS and compact Android devices, offering flexibility that pure iPhone-focused controllers cannot match.

GameSir finally secured MFi certification, which means reliable performance and stable connectivity across iOS devices without the usual third-party controller jank. The company built its reputation on solid hardware, particularly with controllers like the standard G8 Plus that launched earlier this year with Bluetooth and battery support. This MFi version strips out both the battery and wireless connectivity to meet Apple’s specifications and hit that $79.99 price point. You’re getting a wired-only experience through a movable USB-C port, but the tradeoff includes pass-through charging so your phone doesn’t die mid-session. The telescopic design stretches to accommodate devices up to 215mm, which covers everything from standard iPhones to the iPad Mini, giving you way more versatility than you’d expect from a phone controller.

Designer: GameSir

Click Here to Buy Now

Hall Effect sensors in both the thumbsticks and analog triggers eliminate stick drift, which remains a persistent problem even in premium controllers. The mechanical D-pad provides tactile feedback that membrane alternatives can’t match, though the ABXY buttons use membrane technology to keep costs reasonable. Two programmable back buttons sit on laser-engraved grips, and the entire controller works with the GameSir app for customization. The detachable magnetic faceplate lets you swap thumbstick positions and rearrange the ABXY layout, something Backbone doesn’t offer at any price point. There’s also a 3.5mm audio jack for wired headphones, which matters more than you’d think when Bluetooth audio introduces latency in competitive games. GameSir clearly spent their engineering budget on components that affect gameplay rather than feature bloat.

No gyroscope means games that rely on motion controls won’t work properly, which eliminates a chunk of the iOS gaming library. The wired-only design lacks the flexibility of Backbone’s newer Pro model with its 40-hour battery and Bluetooth connectivity. GameSir’s app exists but doesn’t approach the polish or social features of Backbone’s ecosystem, which has become a genuine differentiator for the brand. Backbone built a game launcher, social platform, and recording hub that transforms the controller from a peripheral into a gaming experience. GameSir offers button remapping and firmware updates, which covers the basics but won’t replace your need for separate apps. You can tell where each company decided to compete and where they chose to concede ground.

The calculation for buyers comes down to whether Backbone’s ecosystem and brand cachet justify a 25% premium over GameSir’s hardware-focused approach. If you care about launching games from a unified interface, sharing clips with friends, or using your controller as a social hub, Backbone remains the obvious choice despite the higher cost. But if you want Hall Effect reliability, physical customization options, and the ability to use the same controller with both your iPhone and a compact Android tablet without switching devices, GameSir built exactly that product. The G8 Plus MFi proves you can compete with an established market leader by focusing on what actually matters to a specific segment of buyers. Backbone set the standard for mobile controllers on iOS, and now someone finally showed up with enough credibility to make the comparison worthwhile rather than embarrassing.

Click Here to Buy Now

The post GameSir’s $79 MFi-compatible Controller Lets You Play PC & Xbox titles on an iPhone or iPad Mini first appeared on Yanko Design.

ChatGPT-Powered Desk Mic gives your Existing Laptop Realtime Translation and Agentic Powers

The most interesting AI hardware this year might not be a new screen or headset. It might be a microphone. Powerrider frames that idea very literally. It takes the form factor of a conference mic and refits it as a GPT‑4o terminal, so the same stem on your desk that handles Zoom calls can also translate in real time, summarize a briefing, or draft follow‑up emails while the meeting is still in progress.

What makes it feel clever is how little ceremony it adds. There is no new display to manage, just a few sculpted buttons for voice input, translation, and AI control. Tap, talk, and the response appears on your existing laptop, ready to paste into a chat, a slide deck, or a script. In a single accessory you get cleaner audio for podcasting and live streaming, plus a dedicated channel that turns casual speech into an ongoing conversation with ChatGPT.

Designer: Powerrider

Click Here to Buy Now: $59 $120 (56% off). Hurry, only a few left!

The hardware itself (model M1) weighs 290 grams and stands 107 millimeters tall, machined from aluminum alloy with a 60‑degree adjustable boom so you can talk comfortably without hunching over your keyboard. The capsule is an omni‑directional condenser tuned to pick up voice across a 100 to 15,000 Hz range, with DSP noise reduction baked into the signal chain. It samples at 16‑bit/48kHz, which puts it squarely in the clean‑enough category for content work without venturing into audiophile overkill. USB‑C handles both power and data, plus there is a 3.5mm jack if you want to monitor through headphones. The base houses four physical buttons, each programmable through companion software. One button wakes the AI mode, another triggers translation, a third handles dictation, and the fourth is a rotary knob that doubles as a mute toggle and volume dial.

This is where Powerrider stops being a mic and starts being a control surface. You can map those keys to custom GPT‑4o prompts, so tapping one button might fire off “translate the last paragraph into Spanish and make it sound conversational,” while another could trigger “rewrite this email to sound less corporate.” The software supports Windows 7 and up, plus macOS 10.15 or later, which covers most setups that still get security patches. The AI functions pull from a pretty expansive toolkit: text translation, PPT generation, AI drawing, background removal, speech writing, document conversion, image analysis, code generation, reading comprehension, Q&A, writing assistance, table creation, and mind mapping. Some of those feel gimmicky (I have yet to meet anyone who genuinely wants AI‑generated mind maps on demand), but the core translation and drafting tools hit real pain points if you work across languages or spend half your day rewriting the same three types of message.
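Powerrider’s companion software is closed, so the exact mechanics are not public, but the concept is easy to sketch. Below is a purely hypothetical illustration of a button-to-prompt mapping: every name in it (`BUTTON_PROMPTS`, `build_request`, the key IDs) is invented for illustration, not Powerrider’s actual API, and the payload shape simply follows the common chat-completion convention:

```python
# Hypothetical sketch of a button-to-prompt mapping like the one the
# companion software exposes. Names and structure are illustrative only.
BUTTON_PROMPTS = {
    "key_1": "Translate the last paragraph into Spanish and make it sound conversational.",
    "key_2": "Rewrite this email to sound less corporate.",
    "key_3": "Transcribe my speech verbatim.",
}

def build_request(button_id: str, captured_text: str) -> dict:
    """Combine a button's stored prompt with freshly captured speech-to-text
    into a single chat-style request payload."""
    prompt = BUTTON_PROMPTS.get(button_id)
    if prompt is None:
        raise KeyError(f"no prompt mapped to {button_id}")
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": prompt},
            {"role": "user", "content": captured_text},
        ],
    }
```

The appeal of hardware keys is exactly this indirection: the button owns the instruction, so you only ever supply the content.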

The hook here is immediacy. Most of us already talk to ChatGPT, but we do it through a browser tab or a pinned app, which means context‑switching, copying text, pasting prompts, and generally breaking flow. Powerrider tries to make that interaction feel more like push‑to‑talk in a game or on a two‑way radio. You hold a key, speak the command, release, and the result lands in your active window or in a floating overlay, depending on how you configure it. That workflow collapses a six‑step process (open ChatGPT, type or paste, wait, copy response, switch back, paste again) into a two‑step one (press, speak). If you live in tools like Notion, Google Docs, or any IDE that supports text injection, the time savings compound quickly. The software also handles screenshot translation, which is genuinely useful if you are reading documentation, design files, or research papers in another language and want inline conversion without manually copying blocks of text into DeepL.

Because the mic itself is a legitimate audio interface, you can use it in OBS, Zoom, or any DAW that recognizes standard USB microphones. The frequency response is wide enough for vocal clarity but not so hyped that you get harsh sibilance or boomy proximity effects. Think more “podcast interview” than “ASMR whisper track.” The omni pickup pattern means you do not have to aim it perfectly, which is nice if you are someone who gestures while talking or shifts around in your chair. The DSP noise reduction does a decent job of killing keyboard clatter and ambient hum, though it is not going to save you if you are recording next to a mechanical keyboard with clicky blues or a window AC unit. For meeting‑quality audio and streaming voiceover work, it sits comfortably in the same tier as entry‑level USB mics like the Blue Yeti Nano or the HyperX SoloCast, but with the GPT layer on top.

The company behind the Powerrider is positioning this as part of a broader peripheral ecosystem, which is where things get more interesting. They are also offering an AI‑powered keyboard (model K1) and an AI‑powered mouse (model S1), both of which follow the same philosophy: take an essential input device and wire it directly to GPT‑4o so you can invoke AI functions without leaving your workspace. The keyboard is a 98‑key Crater mechanical with RGB backlighting, a volume knob, and three custom macro keys dedicated to AI tasks. It supports both wired USB and wireless 2.4GHz/Bluetooth 5.0 across four channels, and the battery will run for 148 hours of continuous typing with the backlight off, or about 16 hours with the RGB cranked. The mouse is a wireless optical with adjustable DPI up to 4000, seven buttons (including dedicated AI, custom, and search keys), and a two‑hour charge time for what they claim is several days of use. Both peripherals plug into the same software suite as the mic, so you can trigger translation, text generation, or document conversion from any of the three devices depending on which one is closest to your hand.

Powerrider is live on Kickstarter right now with a few weeks left in the campaign, and the pricing is structured around bundles. A single mic starts at $59 for the super early bird tier (limited to 300 units) or $69 for the regular early bird. The full “Powerrider AI One Suite” bundle, which includes one mic, one keyboard, and one mouse, is priced at $269 (down from a claimed $608 MSRP). You can also grab the mic plus keyboard for $169 or the mic plus mouse for $149. Add‑on pricing if you are already backing is $119 for the keyboard, $99 for the mouse, and $59 for an extra mic. Those numbers put the mic roughly on par with mid‑tier USB condensers, but with the AI layer effectively thrown in as the value‑add. Whether that trade‑off makes sense depends entirely on how much friction you currently feel when bouncing between your tools and ChatGPT, and whether you are willing to let a hardware button own part of that workflow instead of a keyboard shortcut or Alfred snippet.
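A quick sanity check on the bundle math, using only the campaign prices quoted above, shows the bundles do undercut piecemeal buying, though not by much:

```python
# Campaign prices quoted above (USD)
mic_super_early = 59
addon = {"keyboard": 119, "mouse": 99, "mic": 59}
bundle = {"mic+keyboard": 169, "mic+mouse": 149, "suite": 269}

# Buying piecemeal: one super-early-bird mic plus add-on peripherals
piecemeal_kb = mic_super_early + addon["keyboard"]                      # 178
piecemeal_suite = mic_super_early + addon["keyboard"] + addon["mouse"]  # 277

savings_kb = piecemeal_kb - bundle["mic+keyboard"]  # 9
savings_suite = piecemeal_suite - bundle["suite"]   # 8
```

In other words, the bundles save $8 to $9 over the add-on route, so the real decision is how many of the three devices you actually want, not which tier to grab.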

Click Here to Buy Now: $59 $120 (56% off). Hurry, only a few left!

The post ChatGPT-Powered Desk Mic gives your Existing Laptop Realtime Translation and Agentic Powers first appeared on Yanko Design.

How to Spot Fake AI Products at CES 2026 Before You Buy

Merriam-Webster just named “slop” its word of the year, defining it as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” The choice is blunt, almost mocking, and it captures something that has been building for months: a collective exhaustion with AI hype that promises intelligence but delivers mediocrity. Over the past three months, that exhaustion has started bleeding into Wall Street. Investors, analysts, and even CEOs of AI companies themselves have been openly questioning whether we are living through an AI bubble. OpenAI’s Sam Altman warned in August that investors are “overexcited about AI,” and Google’s Sundar Pichai admitted to “elements of irrationality” in the sector. The tech industry is pouring trillions into AI infrastructure while revenues lag far behind, raising fears of a dot-com-style correction that could rattle the entire economy.

CES 2026 is going to be ground zero for this tension. Every booth will have an “AI-powered” sticker on something, and a lot of those products will be genuine innovations built on real on-device intelligence and agentic workflows. But a lot of them will also be slop: rebranded features, cloud-dependent gimmicks, and shallow marketing plays designed to ride the hype wave before it crashes. If you are walking the show floor or reading coverage from home, knowing how to separate real AI from fake AI is not just a consumer protection issue anymore. It is a survival skill for navigating a market that feeds on confusion and a general lack of awareness around actual Artificial Intelligence.

1. If it goes offline and stops working, it was never really AI

The simplest test for fake AI is also the most reliable: ask what happens when the internet connection drops. Real AI that lives on your device will keep functioning because the processing is happening locally, using dedicated chips and models stored in the gadget itself. Fake AI is just a thin client that calls a cloud API, and the moment your Wi-Fi cuts out, the “intelligence” disappears with it.

Picture a laptop at CES 2026 that claims to have an AI writing assistant. If that assistant can still summarize documents, rewrite paragraphs, and handle live transcription when you are on a plane with no internet, you are looking at real on-device AI. If it gives you an error message the second you disconnect, it is cloud-dependent marketing wrapped in an “AI PC” label. The same logic applies to TVs, smart home devices, robot vacuums, and wearables. Genuine AI products are designed to think locally, with cloud connectivity as an optional boost rather than a lifeline.

The distinction matters because on-device AI is expensive to build. It requires new silicon, tighter integration between hardware and software, and real engineering effort. Companies that invested in that infrastructure will want you to know it works offline because that is their competitive edge. Companies that skipped that step will either avoid the question or bury it in fine print. At CES 2026, press the demo staff on this: disconnect the device from the network and see if the AI features still run. If they do not, you just saved yourself from buying rebranded cloud software in a shiny box.

If your Robot Vacuum has Microsoft Copilot, RUN!

2. If it’s just a chatbot, it isn’t AI… it’s GPT Customer Care

The laziest fake AI move at CES 2026 will be products that open a chat window, let you type questions, and call that an AI feature. A chatbot is not product intelligence. It is a generic language model wrapper that any company can license from OpenAI, Anthropic, or Google in about a week, then slap their logo on top and call it innovation. If the only AI interaction your gadget offers is typing into a text box and getting conversational responses, you are not looking at an AI product. You are looking at customer service automation dressed up as a feature.

Real AI is embedded in how the product works. It is the robot vacuum that maps your home, decides which rooms need more attention, and schedules itself around your routine without you opening an app. It is the laptop that watches what you do, learns your workflow, and starts suggesting shortcuts or automating repetitive tasks before you ask. It is the TV that notices you always pause shows when your smart doorbell rings and starts doing it automatically. None of that requires a chat interface because the intelligence is baked into the behavior of the device itself, not bolted on as a separate conversation layer.

If a company demo at CES 2026 starts with “just ask it anything,” probe deeper. Can it take actions across the system, or does it just answer questions? Does it learn from how you use the product, or is it the same canned responses for everyone? Is the chat interface the only way to interact with the AI, or does the product also make smart decisions in the background without prompting? A chatbot can be useful, but it is table stakes now, not a differentiator. If that is the whole AI story, the company did not build AI into their product. They rented a language model and hoped you would not notice.

3. If the AI only does one narrow thing, it is probably just a renamed preset

Another red flag is when a product’s AI feature is weirdly specific and cannot generalize beyond a single task. A TV that has “AI motion smoothing” but no other intelligent behavior is not running a real AI model; it is running the same interpolation algorithm TVs have had for years, now rebranded with an AI label. A camera that has “AI portrait mode” but cannot recognize anything else is likely just using a basic depth sensor and calling it artificial intelligence. Real AI, especially the kind built into modern chips and operating systems, is designed to generalize across tasks: it can recognize objects, understand context, predict user intent, and coordinate with other devices.

Ask yourself: does this product’s AI learn, adapt, or handle multiple scenarios, or does it just trigger a preset when you press a button? If it is the latter, you are looking at a marketing gimmick. Fake AI products love to hide behind phrases like “AI-enhanced” or “AI-optimized,” which sound impressive but are deliberately vague. Real AI products will tell you exactly what the system is doing: “on-device object recognition,” “local natural language processing,” “agentic task coordination.” Specificity is a sign of substance. Vagueness is a sign of slop.

The other giveaway is whether the AI improves over time. Genuine AI systems get smarter as they process more data and learn from user behavior, often through firmware updates that improve the underlying models. Fake AI products ship with a fixed set of presets and never change. At CES 2026, ask demo reps if the product’s AI will improve after launch, how updates work, and whether the intelligence adapts to individual users. If they cannot give you a clear answer, you are looking at a one-time software trick masquerading as artificial intelligence.

Don’t fall for ‘AI Enhancement’ presets or buttons that don’t do anything related to AI.

4. If the company cannot explain what the AI actually does, walk away

Fake AI thrives on ambiguity. Companies that bolt a chatbot onto a product and call it AI-powered know they do not have a real differentiator, so they lean into buzzwords and avoid specifics. Real AI companies, by contrast, will happily explain what their models do, where the processing happens, and what problems the AI solves that the previous generation could not. If a booth rep at CES 2026 gives you vague non-answers like “it uses machine learning to optimize performance” without defining what gets optimized or how, that is a warning sign.

Push for concrete examples. If a smart home hub claims to have AI coordination, ask: what decisions does it make on its own, and what still requires manual setup? If a wearable says it has AI health coaching, ask: is the analysis happening on the device or in the cloud, and can it work offline while hiking in the wilderness? If a laptop advertises an AI assistant, ask: what can it do without an internet connection, and does it integrate with other apps (agentic) or just sit in a sidebar? Companies with real AI will have detailed, confident answers because they built the system from the ground up. Companies with fake AI will deflect, generalize, or change the subject.

The other test is whether the AI claim matches the price and the hardware. If a $200 gadget promises the same on-device AI capabilities as a $1,500 laptop with a dedicated neural processing unit, somebody is lying. Real AI requires real silicon, and that silicon costs money. Budget products can absolutely have useful AI features, but they will typically offload more work to the cloud or use simpler models. If the pricing does not line up with the technical claims, it is worth being skeptical. At CES 2026, ask what chip is powering the AI, whether it has a dedicated NPU, and how much of the intelligence is local versus cloud-based. If they cannot or will not tell you, that is your cue to move on.

5. Check if the AI plays well with others, or if it lives in a silo

One of the clearest differences between real agentic AI and fake “AI inside” products is interoperability. Genuine AI systems are designed to coordinate with other devices, share context, and act on your behalf across an ecosystem. Fake AI products exist in isolation: they have a chatbot you can talk to, but it does not connect to anything else, and it cannot take actions beyond its own narrow interface. Samsung’s CES 2026 exhibit is explicitly built around AI and interoperability, with appliances, TVs, and smart home products all coordinated by a shared AI layer. That is what real agentic AI looks like: the fridge, washer, vacuum, and thermostat all understand context and can make decisions together without you micromanaging each one. Fake AI, by contrast, gives you five isolated apps with five separate chatbots, none of which talk to each other. If a product at CES 2026 claims to have AI but cannot integrate with the rest of your smart home, car, or workflow, it is not delivering the core promise of agentic systems.

Ask demo reps: does this work with other brands, or only within your ecosystem? Can it trigger actions in other apps or devices, or does it just respond to questions? Does it understand my preferences across multiple products, or does each device start from scratch? Companies that built real AI ecosystems will brag about cross-device coordination because it is hard to pull off and it is the whole point. Companies selling fake AI will either avoid the topic or try to upsell you on buying everything from them, which is a sign they do not have real interoperability.

6. When in doubt, look for the slop

The rise of AI-generated “slop” gives you a shortcut for spotting lazy AI products: if the marketing materials, product images, or demo videos look AI-generated and low-effort, the product itself is probably shallow too. Merriam-Webster defines slop as low-quality digital content produced in quantity by AI, and it has flooded everything from social media to advertising to product launches. Brands that cut corners on their own marketing by using obviously AI-generated visuals are signaling that they also cut corners on the actual product development.

Watch for telltale signs: weird proportions in product photos, uncanny facial expressions in lifestyle shots, text that sounds generic and buzzword-heavy with no real specifics, and claims that are too good to be true with no technical backing. Real AI products are built by companies that care about craft, and that care shows up in how they present the product. Fake AI products are built by companies chasing a trend, and the slop in their marketing is the giveaway. At CES 2026, trust your instincts: if the booth, the video, or the pitch feels hollow and mass-produced, the gadget probably is too.

The post How to Spot Fake AI Products at CES 2026 Before You Buy first appeared on Yanko Design.

This $2,899 Desktop AI Computer With RTX 5090M Lets You Cancel Every AI Subscription Forever

Look across the history of consumer tech and a pattern appears. Ownership gives way to services, and services become subscriptions. We went from stacks of DVDs to streaming movies online, from external drives for storing data and backups to cloud drives, from MP3s on a player to Spotify subscriptions, from one-time software licenses to recurring plans. But when AI arrived, it skipped the ownership phase entirely. Intelligence came as a service, priced per month or per million tokens. No ownership, no privacy. Just a $20-a-month fee.

A device like Olares One rearranges that relationship. It compresses a full AI stack into a desktop-sized box that behaves less like a website and more like a personal studio. You install models the way you once installed apps. You shape its behavior over time, training it on your documents, your archives, your creative habits. The result is an assistant that feels less rented and more grown, with privacy, latency, and long-term cost all tilting back toward the owner.

Designer: Olares

Click Here to Buy Now: $2,899 $3,999 (28% off) Hurry! Only 15/320 units left!

The pitch is straightforward. Take the guts of a $4,000 gaming laptop, strip out the screen and keyboard, put everything in a minimalist chassis that looks like Apple designed a chonky Mac mini, and tune it for sustained performance instead of portability. It measures 320 x 197 x 55mm, weighs 2.15 kg without the PSU, and the whole package pulls 330 watts under full load. Inside sits an Intel Core Ultra 9 275HX with 24 cores running up to 5.4 GHz and 36 MB of cache, the same chip you would find in flagship creator laptops this year. The GPU is an NVIDIA GeForce RTX 5090 Mobile with 24 GB of GDDR7 VRAM, 1824 AI TOPS of tensor performance, and a 175W max TGP. Pair that with 96 GB of DDR5 RAM at 5600 MHz and a PCIe 4.0 NVMe SSD, and you have workstation-level compute in a box smaller than most soundbars.

Olares OS runs on top of all that hardware, and it is open source, which means you can audit the code, fork it, or wipe it entirely if you want. Out of the box it behaves like a personal cloud with an app store containing over 200 applications ready to deploy with one click. Think Docker and Kubernetes, but without needing to touch a terminal unless you want to. The interface looks clean, almost suspiciously clean, like someone finally asked what would happen if you gave a NAS the polish of an iPhone. You get a unified account system so all your apps share a single login, configurable multi-factor authentication, enterprise-grade sandboxing for third-party apps, and Tailscale integration that lets you access your Olares box securely from anywhere in the world. Your data stays on your hardware, full stop.

I have been tinkering with local LLMs for the past year, and the setup has always been the worst part. You spend hours wrestling with CUDA drivers, Python environments, and obscure GitHub repos just to get a model running; then you realize you need a different frontend for image generation and another tool for managing multiple models, and suddenly you have seven terminal windows open and nothing talks to anything else. Olares solves that friction by bundling everything into a coherent ecosystem. Chat agents like Open WebUI and Lobe Chat, general agents like Suna and OWL, AI search with Perplexica and SearXNG, coding assistants like Void, design agents like Denpot, deep research tools like DeerFlow, task automation with n8n and Dify. Local LLM runtimes include Ollama, vLLM, and SGLang. You also get observability tools like Grafana, Prometheus, and Langfuse so you can actually monitor what your models are doing. The philosophy is simple: string together workflows that feel as fluid as using a cloud service, except everything runs on metal you control.
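For a sense of what “a model running” buys you: once a server like Ollama is up, talking to it is a single HTTP call against its default local endpoint. A minimal sketch (the model name is a placeholder for whatever you have pulled):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Systems like Olares wrap exactly this kind of local endpoint behind their app frontends, which is why the chat agents, search tools, and automation workflows can all share the same models.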

Gaming on this thing is a legitimate use case, which feels almost incidental given the AI focus but makes total sense once you look at the hardware. That RTX 5090 Mobile with 24 GB of VRAM and 175 watts of power can handle AAA titles at high settings, and because the machine is designed as a desktop box, you can hook it up to any monitor or TV you want. Olares positions this as a way to turn your Steam library into a personal cloud gaming service. You install your games on the Olares One, then stream them to your phone, tablet, or laptop from anywhere. It is like running your own GeForce Now or Xbox Cloud Gaming, except you own the server and there are no monthly fees eating into your budget. The 2 TB of NVMe storage gives you room for a decent library, and if you need more, the system uses standard M.2 drives, so upgrades are straightforward.

Cooling is borrowed from high-end laptops, with a 2.8mm vapor chamber and a 176-layer copper fin array handling heat dissipation across a massive 310,000 square millimeter surface. Two custom 54-blade fans keep everything moving, and the acoustic tuning is genuinely impressive. At idle, the system sits at 19 dB, which is whisper quiet. Under full GPU and CPU load, it climbs to 38.8 dB, quieter than most gaming desktops and even some laptops. Thermal control keeps things stable at 43.8 degrees Celsius under sustained loads, which means you can run inference on a 70B model or render a Blender scene without the fans turning into jet engines. I have used plenty of small form factor PCs that sound like they are preparing for liftoff the moment you ask them to do anything demanding, so this is a welcome change.

RAGFlow and AnythingLLM handle retrieval-augmented generation, which lets you feed your own documents, notes, and files into your AI models so they can answer questions about your specific data. Wise and Files manage your media and documents, all searchable and indexed locally. A “digital secret garden” feature provides an AI-powered, local-first reader for articles and research, with third-party integration so you can pull in content from RSS feeds or save articles for later. The configuration hub lets you manage storage, backups, network settings, and app deployments without touching config files, and there is a full Kubernetes console if you want to go deep. The no-CLI Kubernetes interface is a big deal for people who want the power of container orchestration but do not want to memorize kubectl commands. You get centralized control, performance monitoring at a glance, and the ability to spin up or tear down services in seconds.
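As a rough illustration of what retrieval-augmented generation does, here is the idea in miniature. Real tools like RAGFlow and AnythingLLM rank documents with vector embeddings; this toy keyword-overlap score just makes the retrieve-then-prompt shape visible:

```python
# Toy RAG illustration: retrieve the most relevant local document,
# then prepend it to the model prompt as context.
def score(query: str, doc: str) -> int:
    """Count how many query words appear in the document."""
    doc_words = set(doc.lower().split())
    return sum(w in doc_words for w in query.lower().split())

def retrieve(query: str, docs: dict[str, str]) -> str:
    """Return the name of the best-matching document."""
    return max(docs, key=lambda name: score(query, docs[name]))

def build_prompt(query: str, docs: dict[str, str]) -> str:
    """Stuff the winning document into the prompt so the model can
    answer from your data instead of its training set."""
    best = retrieve(query, docs)
    return f"Context:\n{docs[best]}\n\nQuestion: {query}"
```

The point of running this locally is that the documents never leave your hardware; only the assembled prompt ever reaches a model, and on a box like this the model is local too.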

Olares makes a blunt economic argument. If you are using Midjourney, Runway, ChatGPT Pro, and Manus for creative work, you are probably spending around $6,456 per year per user. For a five-person team, that balloons to $32,280 annually. Olares One costs $2,899 for the hardware (early-bird pricing), which breaks down to about $16 per month per user over three years if you split it across a five-person team. Your data stays private, stored locally on your own hardware instead of floating through someone else’s data center. You get a unified hub of over 200 apps with one-click installs, so there are no fragmented tools or inconsistent experiences. Performance is fast and reliable, even when you are offline, because everything runs on device. You own the infrastructure, which means unconditional and sovereign control over your tools and data. The rented AI stack leaves you as a tenant with conditional and revocable access.
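The arithmetic behind that comparison, using the figures quoted in the paragraph, also shows how fast the hardware pays for itself against a fully subscribed five-person team:

```python
# Figures quoted above (USD)
subs_per_user_year = 6456  # Midjourney + Runway + ChatGPT Pro + Manus
hardware = 2899            # Olares One early-bird price
users, months = 5, 36      # five-person team, three-year horizon

owned_per_user_month = hardware / (users * months)   # ~16.1 per user per month
rented_per_user_month = subs_per_user_year / 12      # 538.0 per user per month
breakeven_months = hardware / (users * subs_per_user_year / 12)  # ~1.1 months
```

On those numbers the box costs less than the team’s subscriptions for a single month, though the comparison assumes the bundled local models actually substitute for all four services, which is the real question.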

Ports include Thunderbolt 5, RJ45 Ethernet at 2.5 Gbps, USB-A, and HDMI 2.1, plus Wi-Fi 7 and Bluetooth 5.4 for wireless connectivity. The industrial design leans heavily into the golden ratio aesthetic, with smooth curves and a matte aluminum finish that would not look out of place next to a high-end monitor or a piece of studio equipment. It feels like someone took the guts of a $4,000 gaming laptop, stripped out the compromises of portability, and optimized everything for sustained performance and quietness. The result is a machine that can handle creative work, AI experimentation, gaming, and personal cloud duties without breaking a sweat or your eardrums.

Olares One is available now on Kickstarter, with units expected to ship early next year. The base configuration with the RTX 5090 Mobile, Intel Core Ultra 9 275HX, 96 GB RAM, and 2 TB SSD is priced at a discounted $2,899 for early-bird backers (MSRP $3,999). That is still a substantial upfront cost, but when you compare it to the ongoing expense of cloud AI subscriptions and the privacy compromises that come with them, the math starts to make sense. You pay once, and the machine is yours. No throttling, no price hikes, no terms of service updates that quietly change what the company can do with your data. If you have been looking for a way to bring AI home without sacrificing capability or convenience, this is probably the most polished attempt at that idea so far.

Click Here to Buy Now: $2,899 $3,999 (28% off) Hurry! Only 15/320 units left!

The post This $2,899 Desktop AI Computer With RTX 5090M Lets You Cancel Every AI Subscription Forever first appeared on Yanko Design.

How Coca Cola’s Benny Lee Is Redefining Industrial Design as Storytelling, Not Just “Making Products”

Design Mindset steps into episode 16 with a clear purpose: to understand how industrial designers are navigating a world where tools, platforms, and expectations keep shifting under their feet. Yanko Design’s weekly podcast, Design Mindset, powered by KeyShot, is less about design celebrity and more about design thinking, unpacking how decisions get made, how stories are built around products, and how technology is reshaping the craft from the inside out. Each week, a new episode premieres with designers who are actively pushing workflows, visuals, and experiences into new territory.

This episode features Benny Lee, Senior Design Manager of Technology and Strategic Partnerships at The Coca-Cola Company, and a practitioner who moves comfortably between mass production, digital ecosystems, and even film props. Trained as an industrial designer, Benny started at Coke in a traditional ID role while also leading visualization, bringing advanced 3D rendering into a company that was still heavily reliant on Photoshop and 2D assets. He now sits at the intersection of heritage and innovation, helping a 140-year-old brand adopt real-time visualization, AI, and new storytelling platforms without losing what makes Coca-Cola recognizable everywhere.

Download your Free Trial of KeyShot Here

Storytelling as the real job of industrial design

Benny treats industrial design as a storytelling discipline first and a styling discipline second. His training spans sketching, 3D modeling, rendering, and prototyping, but he frames each of these as a narrative tool rather than a technical checkpoint. Sketches, CAD, and renders exist to show what a product does, how it behaves, and how it should feel to use, not just how it looks on a white background.

Inside a large organization, that narrative focus becomes practical very quickly. He puts it plainly in the conversation: “Storytelling as an ID, you know, is important because it’s all about bringing this visual alignment of the actual product when you’re trying to get a buy in to sell in.” The job is to reach a point where the design communicates its intent on its own, without the designer in the room. Call to action areas, material breaks, and even lighting choices in a render become part of that silent story, aligning stakeholders around what the product is supposed to be.

When rendering becomes a thinking tool, not just a final output

When Benny joined Coca-Cola, much of the visualization work sat in a 2D world. Concepts were often built through Photoshop and static compositions, heavily intertwined with graphic design. He talks about the shift he helped drive quite directly: “I find it really quite an honor and a pleasure that I was able to bring 3D renderings into the practice here.” That move to 3D was not just about realism, it was about adding depth to how ideas are explored and communicated.

The key change is that rendering is no longer treated as the last step before a presentation. Tools like KeyShot become part of the exploration loop. Benny uses quick CAD setups and fast render passes to test light, material, and even simple motion, and to storyboard how a product opens, glows, or reacts in context. He describes this as a way to “fail fast, iterate faster,” and he underlines that “we don’t always just use renderings to create pretty visuals and a lot of times we’re using it to build new experience.” Visualization turns into a thinking environment, especially valuable when physical labs and prototypes are slow or limited.

Respecting a 140-year-old brand while pushing it into new arenas

Designing at Coca-Cola means working around a product that barely changes. The formula in the bottle remains constant, so innovation happens in the ecosystem that surrounds it. Packaging systems, retail touchpoints, digital layers, and immersive experiences become the canvas where design can move, while the core product stays familiar.

Benny describes his role with a custodian mindset. He imagines the brand as a skyscraper built over generations, and his work as adding “layers of bricks” rather than ripping out foundations. That perspective shows up in how Coca-Cola experiments with new platforms. The company explores metaverse activations, NFTs, experiential installations, and AI-driven storytelling, not as disconnected stunts but as new ways to retell the same product story for new audiences. The strategy, as he frames it, is to adapt the ecosystem and technology “to retell the product’s story” while staying true to the brand’s core character.

Mass production versus one-off film props

Benny’s portfolio stretches across lifestyle accessories, consumer electronics, and concept work for films like the Avengers. On the surface, the process for these domains begins similarly, with sketching, modeling, and rendering. The divergence appears when the work hits reality. In consumer products, industrial design is tied to mass production, with all the constraints of tooling, factory collaboration, golden samples, logistics, and long term durability.

Film work operates under a different set of pressures. Concept art might start in tools like ZBrush with exaggerated, dramatic forms that look incredible on screen but are not remotely manufacturable in a traditional sense. Benny’s responsibility in those situations is to respect the creative vision while making it buildable. Props do not have to scale to millions of units. They have to survive a shoot and read correctly on camera. If one breaks, it can be rebuilt. That freedom shifts what is possible in form and material, but the throughline is still storytelling, captured in a few seconds of screen time instead of years of daily use.

Adapting to an ever-expanding toolset without losing your core

Throughout the episode, Benny returns to the pace of change in design tools. Skills that were once specialized are now table stakes. Students are graduating with exposure to UI and UX, electronics integration, and AI-enhanced workflows. He notes that “you have to wear so many hats,” and points out that traditional industrial design is becoming a “rare breed” precisely because the field has branched into web, mobile, service, and emerging tech work.

His response is not to chase mastery of every new tool, but to understand what each category can do and to build teams around that understanding. He emphasizes hiring people who are better than you at specific domains and managing the mix of skills rather than guarding personal expertise. In parallel, he argues that adaptability has become the most important trait a designer can carry forward. The designers who thrive will be the ones who stay resilient, keep a story-first mindset, and move fluidly between CAD, KeyShot, AI, and whatever comes next, while still grounding their decisions in how things work in the real world.

Design Mindset, powered by KeyShot, returns every week with conversations like this, tracing the connection between how designers think, the tools they use, and the work they put into the world. This episode with Benny Lee leaves you with a simple, practical challenge: lead with the story, use your tools to think and not just to present, and fail fast so you can iterate faster.

Download your Free Trial of KeyShot Here

The post How Coca Cola’s Benny Lee Is Redefining Industrial Design as Storytelling, Not Just “Making Products” first appeared on Yanko Design.