2026 ROG Zephyrus Duo, ASUS Zenbook DUO: Versatility You Can Use Today

We have seen quite a number of laptops bearing mind-blowing flexible screens that fold or roll, and while they do push the envelope of laptop design, that future is definitely not here yet. Foldables still scratch easily and are expensive, rollables are still at the concept stage, and both rely on technology that is impressive in a demo booth but nerve-wracking when you actually need to get work done and cannot afford downtime or repair bills.

At CES 2026, ASUS and its gaming brand Republic of Gamers are offering two designs for people who need to get stuff done here and now. Although less spectacular than a screen that folds like paper, the ROG Zephyrus Duo 2026 (GX561) and the ASUS Zenbook DUO 2026 (UX8407) promise a more versatile and more reliable experience, using two rigid OLED panels, conventional hinges, and software layouts that treat dual screens as a workflow multiplier instead of a party trick.

Designer: ASUS

Dual Screens, Multiple Possibilities

With a foldable laptop, you get a large screen that folds down to the size of a normal laptop, or a laptop-sized screen that folds down to half its size. A rollable laptop, on the other hand, starts with a normal size and then expands for more real estate. They both try to offer more screen space with a manageable footprint, but it is still a single panel with a limited set of poses. You can fold it like a book or lay it flat, but you cannot flip one half around into a true tent or dual-monitor arrangement, and the panel itself stays soft and fragile under your fingertips.

The dual-screen design sported by the new Zephyrus Duo and Zenbook DUO uses two independent but connected screens, essentially dual monitors joined by a hinge. They are conventional, rigid OLED panels, so there is none of the soft, scratch-prone flexible display material found in foldables. It feels almost like a normal laptop, just one that has a second monitor permanently attached, hinged, and ready to be stood up, laid flat, or folded back into tent mode for sharing across a table.

More importantly, however, this design offers more versatility in terms of how you actually use the machine throughout the day. You can use only a single screen in laptop mode if space is a constraint or if you want to stay focused. You can flip the whole thing into tent mode to share your screen with someone sitting across from you. You can detach the keyboard entirely and stand both panels up as a tiny dual-screen desk, with the keyboard floating wherever your hands are most comfortable. ASUS brings this design to two different kinds of laptops, really just two sides of the same coin, offering the same core idea with the flexibility you can use today.

ROG Zephyrus Duo 2026 (GX561): Not Just a Gaming Laptop

This is not the first Zephyrus Duo, but the first one, launched nearly six years ago, was more of a one-and-a-half-screen laptop. There was a smaller touchscreen right above the keyboard that offered some space for tool palettes and chat windows, but it was still very much a secondary strip. This 2026 redesign, in contrast, is a bold new direction, going full dual-screen with two large OLED panels and a detachable keyboard, a direction no other gaming laptop has dared to take.

It is a true gaming laptop, of course, and the specs show its pedigree: an Intel Core Ultra 9 processor paired with up to an NVIDIA GeForce RTX 5090 Laptop GPU pushing up to 135W of TGP, backed by up to 64GB of LPDDR5X memory and up to 2TB of PCIe Gen5 SSD storage with easy swap access. The 90Wh battery supports fast charging, hitting 50% in 30 minutes.

The main display is ROG Nebula HDR, a 3K OLED panel running at 120Hz with a 0.2 ms response time, 1,100 nits of HDR peak brightness, 100% DCI-P3 coverage, and color accuracy with ΔE below 1, protected by Corning Gorilla Glass DXC. All of that is cooled by ROG’s Intelligent Cooling system, with liquid metal on the CPU, a vapor chamber, graphite sheets, and a 0 dB Ambient Cooling mode for silent operation when you are not rendering or fragging.

At 6.28 lb and just 0.77 inches thin, it is heavy enough to remind you there is serious silicon inside, but still portable enough to live in a backpack. The machine includes Wi-Fi 7, Thunderbolt 4, HDMI 2.1, USB 3.2 Gen 2 Type-A, and an SD card slot, plus a six-speaker system with two tweeters and four woofers running Dolby Atmos, so you can actually enjoy game audio without always reaching for headphones.

Where the ROG Zephyrus Duo 2026 really shines is in versatility, because a laptop that can run AAA games can handle practically anything else as well, including content creation, programming, video editing, and 3D work. Designers and creatives will definitely love the freedom such a design offers, paired with powerful hardware that does not compromise just to fit two screens. You can keep After Effects timelines on one panel while the preview lives on the other, or split code and output, or run a game on the main screen with Discord and guides on the second, all without alt-tabbing or shrinking windows.

ASUS Zenbook DUO 2026 (UX8407): Dual-Screen Goes Lux

The ASUS Zenbook DUO 2026 shaves off some of the gaming hardware to offer a dual-screen laptop that is slimmer, lighter, and a little more stylish. It is no slouch, though, and carries plenty of muscle to handle any productivity task you might throw at it. That also includes content creation, with a bit of light gaming on the side when you want to unwind between meetings or deadlines and do not need RTX power for every session.

The Zenbook DUO 2026 runs a next-gen Intel Core Ultra processor with up to 50 TOPS NPU for AI workloads, paired with Intel Arc integrated graphics, up to 32GB of memory, and up to 2TB of SSD storage. It supports up to 45W TDP with a dual-fan thermal solution, keeping the machine stable during sustained loads without the heavy cooling overhead of a discrete GPU, which helps keep the chassis thin and light.

The main display is an ASUS Lumina Pro OLED with 1,000 nits of peak brightness, and both screens are treated with the same level of care, making them equally usable for productivity, media, and light creative work. What differentiates this next-gen dual-screen design from its predecessor is the new hinge that puts the screens closer together. With thinner bezels, they now sit just 8.28mm apart, a 70% reduction, and they almost look like a single continuous piece.

ASUS has adopted its Ceraluminum material for the Zenbook DUO 2026’s lid, bottom case, and kickstand, making it not only look and feel more luxurious but also a bit more resilient to accidents and daily wear. The Zenbook DUO weighs just 1.65kg and has a 5% smaller footprint than the previous generation, which makes it easier to carry and fit on smaller desks or café tables.

It is packed with ports, including two Thunderbolt 4 connections, HDMI 2.1, USB 3.2 Gen 2 Type-A, and an audio jack, plus six speakers with two front-firing tweeters and four woofers for surprisingly rich audio from a thin chassis. The keyboard connects via magnetic pogo pins or Bluetooth, and the machine supports ASUS Pen 3.0, turning both screens into writable surfaces for notes, sketches, or annotations during video calls or brainstorming sessions.

Like the Zephyrus Duo, the Zenbook DUO 2026 can be used in multiple orientations. Laptop mode with the keyboard on top of the lower screen for traditional clamshell use. Desktop mode with both screens stacked or side-by-side, the detachable keyboard placed separately, and the built-in kickstand propping the whole thing up like a tiny dual-monitor workstation. Tent mode for presentations or sharing content across a table without needing an external display or awkward screen mirroring. The flexibility is the point, and it works without asking you to trust a flexible panel not to crease or scratch under normal use.

Trade-offs and Potential

Dual-screen laptops are not perfect, of course. You need to keep track of a separate keyboard you hope you will not lose, though the same is true of some foldable laptops, and that detachable keyboard is precisely what lets both the Zephyrus Duo and Zenbook DUO behave like tiny dual-monitor desks in tent or desktop modes. These machines are also noticeably heavier than single-screen laptops with equivalent specs, and they will likely be priced firmly in premium territory, though still far below the stratospheric costs of early foldables.

There is also that unavoidable divider between the two screens, though ASUS has gotten it down to 8.28 mm on the Zenbook DUO, and at that point it starts to feel more like a subtle pause than a major interruption. The hinge is still visible, the gap is still there, but it is less about accepting compromise and more about acknowledging that two rigid, high-quality OLED panels with a small gap are more practical than one fragile foldable panel with no gap at all.

Despite those limitations, these designs offer a kind of versatility that neither conventional laptops nor foldable laptops can match. You get to decide how to use the laptop, unrestricted by a single panel or a prescribed set of folds. You can boost your productivity with two screens for timelines and tools, or save space with just one when you are working in a tight spot. You can stand them up for presentations, lay them flat for collaborative work, or use them as a traditional clamshell when muscle memory takes over.

Maybe someday, we will have foldable laptops that can bend both ways, support multiple modes, and will not easily scratch with a fingernail or develop a permanent crease after a few months of daily folding. But if you want to be productive and create content today, the ROG Zephyrus Duo 2026 and ASUS Zenbook DUO 2026 could very well be among the most productive and most versatile laptops of 2026, delivering the dual-screen promise without the fragility, the expense, or the anxiety that comes with carrying a piece of still-experimental tech into the real world.


The Upcoming iPhone Fold feels like a response to Peer Pressure, not Innovation

Image Credits: Techtics

I could be wrong, and I hope I am… but the iPhone Fold seems to be gathering interest, and not for the right reasons. Everyone loves innovation – not everyone adopts it. We saw how the Vision Pro caused an absolute tsunami online before subsiding into the tiny ripple it is now. For what it’s worth, the iPhone Fold feels like déjà vu: impressive tech that Apple took years to perfect, launched to much fanfare, but without a true reason or ecosystem to actually boost user adoption. The Vision Pro is cool, but even after three years, nobody really NEEDS it.

We all knew the iPhone Air was going to be just a stepping stone towards something greater, but the iPhone Air’s sales prove one thing: nobody needed a slim phone, so nobody ended up buying one. Samsung has been making foldables for the better part of a decade, and I still don’t see people overwhelmingly choosing them over regular candybar phones, so my question is simple. What exactly can Apple do to make its iPhone Fold measurably better? And more importantly, does “measurably better” actually translate to sales? Or is this a response to peer pressure without really innovating in a direction that users want?

Joining a Party After the Music Has Faded

The context for Apple’s entry is a market that has already chosen a winner, and it is the conventional smartphone. For all the engineering hours poured into hinges and flexible glass by Samsung, Google, and others, the foldable category remains a rounding error in the grand scheme of things. Global foldable shipments are expected to hover around 20 million units in 2025, with Samsung commanding nearly two-thirds of that volume. This sounds impressive until you place it next to the more than one billion smartphones shipped annually. Foldables are a niche, a high-priced experiment that has had years to capture the public’s imagination and has largely failed to do so. Apple is not just late to this party; it is showing up after the keg is tapped and most of the guests have gone home.

This sets up a strange dynamic. Apple’s usual playbook involves letting a market mature, identifying its core flaws, and then releasing a product so polished and user-focused that it redefines the category. With the iPhone Fold, the company appears to be entering a segment that is not just mature but also stagnant, with little evidence of pent-up consumer demand. The consensus timeline points to a 2026 launch, positioning the device as a hyper-premium “Ultra” or “Fold” model within the iPhone 18 lineup. This framing alone suggests a halo product, something to be admired from afar, rather than the next revolutionary device for the masses. It feels less like a strategic strike and more like an obligation.

Image Credits: Techtics

An Obsession with Perfecting the Crease

The rumored hardware details paint a picture of a device engineered to within an inch of its life. Reports converge on a book-style foldable with a 7.7 to 7.8-inch inner display and a smaller 5.5-inch screen on the outside. The central obsession seems to be the crease, that subtle valley that plagues every other foldable. Apple is reportedly holding out for a near-invisible fold, leaning on a next-generation ultra-thin glass solution from Samsung Display and a complex internal hinge with metal plates to manage stress. The device is also expected to be incredibly thin, perhaps just 4.5 millimeters when open and around 9.6 millimeters when closed, which would make it one of the most slender mobile devices ever made.

These are impressive technical feats, to be sure. A phone that unfolds into a small tablet without a distracting crease is a laudable goal. But it also speaks to a focus on solving problems that only engineers and tech reviewers seem to lose sleep over. To achieve this thinness, compromises are already surfacing, such as the rumored omission of Face ID in favor of a Touch ID sensor on the power button. This is the kind of trade-off that indicates Apple is prioritizing the physical object itself, its thinness and aesthetic perfection, over the established user experience. It is a device built to win spec-sheet comparisons and design awards, while its practical value for the average user remains an open question.

Image Credits: Techtics

A Playbook Written by a Rival

Perhaps the most telling detail in this whole saga is Apple’s reported reliance on its chief rival. Analyst Ming-Chi Kuo and others have indicated that Apple will adopt Samsung Display’s “crease-free display solution” instead of a fully homegrown technology stack. This is a significant departure for a company that prides itself on vertical integration and owning the core technologies that define its products. From custom silicon to camera sensors, Apple’s advantage has always been its ability to design the whole widget. By turning to Samsung for the most critical and defining component of its first foldable, Apple is tacitly admitting that it is playing catch-up in a game whose rules were written by someone else.

This move fundamentally supports the “peer pressure” thesis. It suggests that the urgency to have a foldable in the lineup has overridden the traditional, patient Apple R&D cycle. The company is effectively outsourcing the hardest part of the problem to the very competitor that has defined the category for years. While Apple has been filing patents related to flexible displays since 2014, the decision to launch with a rival’s core technology feels reactionary. It is a move made to fill a perceived gap in its portfolio, ensuring that Samsung does not get to claim the “most futuristic” phone on the market without a fight.

Image Credits: Techtics

The Ghost of the Vision Pro

This entire narrative feels eerily familiar. Just a few years ago, Apple launched the Vision Pro, a product of breathtaking technical achievement that answered a question few people were asking. It was, and is, a marvel of engineering that commands a price tag to match, and its sustained adoption has been modest at best. The iPhone Fold appears to be tracking along the same trajectory: years of secretive development, a focus on solving incredibly difficult hardware challenges, and a final product that will likely be priced into the stratosphere. Leaks suggest a starting price between $1,800 and $2,300, placing it well above even the most expensive iPhone Pro Max.

This pricing strategy pre-selects its audience, limiting it to die-hard enthusiasts and those for whom price is no object. Just like the Vision Pro, the iPhone Fold risks becoming a solution in search of a problem. A crease-free display is a better display, but is it $2,000 better? A thinner phone is nice to hold, but does it fundamentally change what you can do with it? The Vision Pro proved that technical excellence alone does not create a market. Without a compelling, everyday use case that justifies its cost and complexity, the iPhone Fold could easily become another beautiful, expensive piece of technology that is more admired than it is used.

Image Credits: Techtics

A New Class of Halo Product

Ultimately, the iPhone Fold is shaping up to be less of a mainstream product and more of a statement piece. It is Apple’s answer to a question posed by its competitors, a way to plant its flag at the absolute peak of the smartphone market. The goal may not be to sell tens of millions of units in the first year, though some bullish forecasts suggest shipments could reach 13-15 million. It is about defending the brand’s reputation for innovation and ensuring that the title of “most advanced smartphone” does not belong exclusively to an Android device. It is a halo product in the truest sense, designed to make the rest of the iPhone lineup look good by comparison.

The real innovation users crave might be more mundane: longer battery life, more durable screens, and more accessible pricing. The iPhone Fold, with its focus on mechanical novelty and aesthetic perfection, does not seem to address these core desires. Instead, it doubles down on the very trends that have made high-end phones feel increasingly out of reach for many. It is a beautiful, exquisitely engineered response to industry pressure, a device that perfects the foldable form factor. Whether it perfects it for a world that actually wants it remains to be seen.


When the Wolf Is Already Part of You: Yama Moon’s Quiet Character Design

Most illustration that deals with human-animal hybridity treats it as a problem to be solved, a boundary to be crossed or defended, a transformation caught mid-process. Yamazuki Mari refuses this framing entirely. Her figures don’t struggle with their condition. They wear it.

A woman stands before a massive black wolf, their bodies aligned so precisely that the creature reads less as a separate entity and more as an extension of her silhouette. No tension exists between them. No drama of possession or escape. Mari positions the wolf head directly above the woman’s own, along the same vertical axis, creating a visual grammar of doubling rather than confrontation. The relationship feels ceremonial, almost devotional, with the wolf serving as guardian rather than threat.

What makes this work distinctive isn’t the subject matter but the formal clarity she brings to it. Against deep black backgrounds, her figures emerge in pale creams and icy blues, their coloring deliberately muted to let compositional geometry carry the emotional weight.

Surface as Signal

Mari builds atmosphere through material choices that reward close attention. Fine crosshatched textures give her digital work a tactile quality suggesting engraving or woodcut, linking contemporary illustration to centuries of folk art tradition. When color does assert itself, it arrives as intrusion: coral red branches cutting through darkness like warning signals, their sharpness creating tension against the soft gradients of hair and fur.

Her background in graphic design shows in the disciplined relationship between figure and ground. Black backgrounds create ceremonial weight. White backgrounds create clinical clarity. Neither choice is neutral, and the shift between them across her body of work creates a tonal range that color specification alone can’t achieve.

By introducing fine grain and crosshatching into digital illustration, Mari creates a surface quality that resists the smoothness associated with computer-generated imagery. The textures read as handmade even when they’re not, and this material fiction supports the folkloric atmosphere her subjects require. The wolf’s fur carries the same visual density as the woman’s hair, unifying disparate biological forms through shared treatment.

Color operates with similar intentionality. Limited palettes prevent the images from reading as naturalistic while specific hues, coral red against midnight blue, winter gray against bone white, carry cultural associations that enrich the viewing experience without requiring explicit narrative. Red as warning, white as purity or death, black as depth or the unconscious: Mari leverages these associations without being constrained by them.

Stillness as Strategy

Stillness in Mari’s work isn’t absence of movement but rather the deliberate suspension of it.

Her compositions feel frozen at the instant before something happens, and this temporal ambiguity becomes a structural principle. The woman and wolf don’t move because they exist outside of narrative time. They occupy a space where transformation has already occurred and no further change is necessary. Traditional fantasy illustration often relies on dynamism to generate interest, filling the frame with action that guides the eye along predictable paths. Mari inverts this expectation. Her figures hold their positions, and the viewer must do the work of discovering the relationships between elements.

The result is an experience closer to portrait study than narrative illustration, where the reward comes from sustained attention rather than immediate comprehension.

Motion as Rupture

Her second piece abandons stillness entirely, and the contrast illuminates what the first image withholds. A human figure fuses with a lunging wolf in mid-leap, their bodies stretched forward in parallel lines of urgent motion. The wolf’s jaws are open, its eye wide with instinct, and the scene pulses with predatory energy that the first composition suppressed.

Cool grays and winter whites dominate the palette, replacing ceremonial blacks with something closer to raw weather. Sharp white branches frame the movement like cracked ice. The grainy textures that felt archival in the static piece now read as velocity blur. Same technical vocabulary, entirely different emotional results.

What the motion reveals is the cost of transformation. Static imagery presented hybridity as achieved and peaceful. Dynamic imagery shows it as ongoing and violent. These aren’t contradictory statements but rather complementary views of the same condition: the wolf is both guardian and hunter, protector and predator, and the human figure rides that duality rather than resolving it.

Mari doesn’t choose between interpretations because choosing would reduce the complexity her work investigates.

Tenderness Without Sentiment

Her final piece pivots toward something that resembles cuteness but refuses to commit to it. Two small catlike figures stand side by side against a clean white background, their rounded forms and oversized fur collars giving them a plush, doll-like presence. One wears red, the other blue. The color dialogue emphasizes individuality within obvious companionship.

Simplicity here is deceptive. These figures share the hybrid logic of the wolf pieces, with feline ears, tails, and paw-like hands rendering them not quite animal and not quite human. Their expressions are subdued rather than cheerful, and this restraint prevents the image from tipping into pure whimsy.

Mari names them May and Mii, the Nekochi, describing them as inseparable companions ready for playful mischief. The characterization suggests personality and relationship, but the visual treatment maintains distance. She doesn’t animate their mischief or show them in action. Like the woman and the wolf, they simply exist in their hybrid state, present to the viewer without performing.

Grammar of Integration

Across all three works, Mari establishes a consistent visual grammar for depicting hybridity. Human and animal elements don’t compete for dominance within the frame. They occupy the same compositional space with equal formal weight, aligned along shared axes, rendered with equivalent levels of detail.

Neither element reads as metaphor for the other.

The wolf isn’t the woman’s inner nature made visible. The woman isn’t the wolf’s civilized aspect. They exist together as a unified presence that simply happens to contain both forms. This approach distinguishes her work from transformation imagery in the Western fantasy tradition, where hybridity typically signifies conflict, corruption, or metamorphosis in progress. Mari’s hybrids carry no implication of instability. They aren’t becoming something else. They’ve already become, and the images document that completed state with the formal precision of taxonomy rather than the drama of mythology.

Cultural lineage matters here. Japanese visual traditions have long accommodated hybrid beings without requiring them to resolve into single identities. Kitsune, tanuki, and other shapeshifters populate folklore not as monsters to be defeated but as neighbors to be negotiated with. Mari draws on this heritage while filtering it through contemporary illustration sensibilities, producing images that feel simultaneously ancient and digitally native.

What the Hybrids Suggest

Most depictions of human-animal fusion carry anxiety about boundary dissolution, about losing the characteristics that define human identity.

Mari’s figures express no such concern. They wear their hybridity as comfortably as the Nekochi wear their fur collars, as a feature of existence rather than a problem to be solved. This comfort may be the most radical aspect of her visual language. In a design context where character illustration often relies on conflict, transformation, or aspiration to generate viewer engagement, Mari offers figures who’ve arrived at a place of integration and simply occupy it.

The wolf doesn’t need to devour the woman. The woman doesn’t need to tame the wolf. The Nekochi don’t need to choose between their feline and humanoid aspects.

For illustrators working in character design, the approach suggests an alternative to narrative-driven imagery. Not every character needs to be caught mid-journey. Some can simply exist, fully realized, inviting the viewer to spend time in their presence rather than anticipate their next transformation. Mari’s hybrids model this possibility with quiet confidence, their stillness a form of visual authority that movement would only diminish.


How to Spot Fake AI Products at CES 2026 Before You Buy

Merriam-Webster just named “slop” its word of the year, defining it as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” The choice is blunt, almost mocking, and it captures something that has been building for months: a collective exhaustion with AI hype that promises intelligence but delivers mediocrity. Over the past three months, that exhaustion has started bleeding into Wall Street. Investors, analysts, and even CEOs of AI companies themselves have been openly questioning whether we are living through an AI bubble. OpenAI’s Sam Altman warned in August that investors are “overexcited about AI,” and Google’s Sundar Pichai admitted to “elements of irrationality” in the sector. The tech industry is pouring trillions into AI infrastructure while revenues lag far behind, raising fears of a dot-com-style correction that could rattle the entire economy.

CES 2026 is going to be ground zero for this tension. Every booth will have an “AI-powered” sticker on something, and a lot of those products will be genuine innovations built on real on-device intelligence and agentic workflows. But a lot of them will also be slop: rebranded features, cloud-dependent gimmicks, and shallow marketing plays designed to ride the hype wave before it crashes. If you are walking the show floor or reading coverage from home, knowing how to separate real AI from fake AI is not just a consumer protection issue anymore. It is a survival skill for navigating a market that feeds on confusion and a general lack of awareness around actual Artificial Intelligence.

1. If it goes offline and stops working, it was never really AI

The simplest test for fake AI is also the most reliable: ask what happens when the internet connection drops. Real AI that lives on your device will keep functioning because the processing is happening locally, using dedicated chips and models stored in the gadget itself. Fake AI is just a thin client that calls a cloud API, and the moment your Wi-Fi cuts out, the “intelligence” disappears with it.

Picture a laptop at CES 2026 that claims to have an AI writing assistant. If that assistant can still summarize documents, rewrite paragraphs, and handle live transcription when you are on a plane with no internet, you are looking at real on-device AI. If it gives you an error message the second you disconnect, it is cloud-dependent marketing wrapped in an “AI PC” label. The same logic applies to TVs, smart home devices, robot vacuums, and wearables. Genuine AI products are designed to think locally, with cloud connectivity as an optional boost rather than a lifeline.

The distinction matters because on-device AI is expensive to build. It requires new silicon, tighter integration between hardware and software, and real engineering effort. Companies that invested in that infrastructure will want you to know it works offline because that is their competitive edge. Companies that skipped that step will either avoid the question or bury it in fine print. At CES 2026, press the demo staff on this: disconnect the device from the network and see if the AI features still run. If they do not, you just saved yourself from buying rebranded cloud software in a shiny box.
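If you want to picture the difference in concrete terms, here is a minimal, purely illustrative Python sketch of the two architectures. None of it is any vendor’s real API: the cloud endpoint, the local model handle, and both function names are hypothetical stand-ins, assumed only for the sake of the example.

```python
import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/v1/summarize"  # hypothetical cloud API, not a real service


def summarize_via_cloud(text: str) -> str:
    """The 'thin client' pattern: all the intelligence lives on a server.

    The moment the network drops, urlopen raises an error and the
    'AI feature' disappears with it.
    """
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return json.load(response)["summary"]


def summarize_on_device(text: str, local_model) -> str:
    """The on-device pattern: the model weights live on the machine itself.

    local_model is a hypothetical handle to locally stored weights, so this
    call keeps working on a plane with no internet connection at all.
    """
    return local_model.generate(f"Summarize the following:\n{text}")
```

The offline test on the show floor is really just asking which of these two functions the product actually shipped.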

If your Robot Vacuum has Microsoft Copilot, RUN!

2. If it’s just a chatbot, it isn’t AI… it’s GPT Customer Care

The laziest fake AI move at CES 2026 will be products that open a chat window, let you type questions, and call that an AI feature. A chatbot is not product intelligence. It is a generic language model wrapper that any company can license from OpenAI, Anthropic, or Google in about a week, then slap their logo on top and call it innovation. If the only AI interaction your gadget offers is typing into a text box and getting conversational responses, you are not looking at an AI product. You are looking at customer service automation dressed up as a feature.

Real AI is embedded in how the product works. It is the robot vacuum that maps your home, decides which rooms need more attention, and schedules itself around your routine without you opening an app. It is the laptop that watches what you do, learns your workflow, and starts suggesting shortcuts or automating repetitive tasks before you ask. It is the TV that notices you always pause shows when your smart doorbell rings and starts doing it automatically. None of that requires a chat interface because the intelligence is baked into the behavior of the device itself, not bolted on as a separate conversation layer.

If a company demo at CES 2026 starts with “just ask it anything,” probe deeper. Can it take actions across the system, or does it just answer questions? Does it learn from how you use the product, or is it the same canned responses for everyone? Is the chat interface the only way to interact with the AI, or does the product also make smart decisions in the background without prompting? A chatbot can be useful, but it is table stakes now, not a differentiator. If that is the whole AI story, the company did not build AI into their product. They rented a language model and hoped you would not notice.

3. If the AI only does one narrow thing, it is probably just a renamed preset

Another red flag is when a product’s AI feature is weirdly specific and cannot generalize beyond a single task. A TV that has “AI motion smoothing” but no other intelligent behavior is not running a real AI model; it is running the same interpolation algorithm TVs have had for years, now rebranded with an AI label. A camera that has “AI portrait mode” but cannot recognize anything else is likely just using a basic depth sensor and calling it artificial intelligence. Real AI, especially the kind built into modern chips and operating systems, is designed to generalize across tasks: it can recognize objects, understand context, predict user intent, and coordinate with other devices.

Ask yourself: does this product’s AI learn, adapt, or handle multiple scenarios, or does it just trigger a preset when you press a button? If it is the latter, you are looking at a marketing gimmick. Fake AI products love to hide behind phrases like “AI-enhanced” or “AI-optimized,” which sound impressive but are deliberately vague. Real AI products will tell you exactly what the system is doing: “on-device object recognition,” “local natural language processing,” “agentic task coordination.” Specificity is a sign of substance. Vagueness is a sign of slop.

The other giveaway is whether the AI improves over time. Genuine AI systems get smarter as they process more data and learn from user behavior, often through firmware updates that improve the underlying models. Fake AI products ship with a fixed set of presets and never change. At CES 2026, ask demo reps if the product’s AI will improve after launch, how updates work, and whether the intelligence adapts to individual users. If they cannot give you a clear answer, you are looking at a one-time software trick masquerading as artificial intelligence.

Don’t fall for ‘AI Enhancement’ presets or buttons that don’t do anything related to AI.

4. If the company cannot explain what the AI actually does, walk away

Fake AI thrives on ambiguity. Companies that bolt a chatbot onto a product and call it AI-powered know they do not have a real differentiator, so they lean into buzzwords and avoid specifics. Real AI companies, by contrast, will happily explain what their models do, where the processing happens, and what problems the AI solves that the previous generation could not. If a booth rep at CES 2026 gives you vague non-answers like “it uses machine learning to optimize performance” without defining what gets optimized or how, that is a warning sign.

Push for concrete examples. If a smart home hub claims to have AI coordination, ask: what decisions does it make on its own, and what still requires manual setup? If a wearable says it has AI health coaching, ask: is the analysis happening on the device or in the cloud, and can it work offline while hiking in the wilderness? If a laptop advertises an AI assistant, ask: what can it do without an internet connection, and does it integrate with other apps (agentic) or just sit in a sidebar? Companies with real AI will have detailed, confident answers because they built the system from the ground up. Companies with fake AI will deflect, generalize, or change the subject.

The other test is whether the AI claim matches the price and the hardware. If a $200 gadget promises the same on-device AI capabilities as a $1,500 laptop with a dedicated neural processing unit, somebody is lying. Real AI requires real silicon, and that silicon costs money. Budget products can absolutely have useful AI features, but they will typically offload more work to the cloud or use simpler models. If the pricing does not line up with the technical claims, it is worth being skeptical. At CES 2026, ask what chip is powering the AI, whether it has a dedicated NPU, and how much of the intelligence is local versus cloud-based. If they cannot or will not tell you, that is your cue to move on.

5. Check if the AI plays well with others, or if it lives in a silo

One of the clearest differences between real agentic AI and fake “AI inside” products is interoperability. Genuine AI systems are designed to coordinate with other devices, share context, and act on your behalf across an ecosystem. Fake AI products exist in isolation: they have a chatbot you can talk to, but it does not connect to anything else, and it cannot take actions beyond its own narrow interface. Samsung’s CES 2026 exhibit is explicitly built around AI and interoperability, with appliances, TVs, and smart home products all coordinated by a shared AI layer. That is what real agentic AI looks like: the fridge, washer, vacuum, and thermostat all understand context and can make decisions together without you micromanaging each one. Fake AI, by contrast, gives you five isolated apps with five separate chatbots, none of which talk to each other. If a product at CES 2026 claims to have AI but cannot integrate with the rest of your smart home, car, or workflow, it is not delivering the core promise of agentic systems.

Ask demo reps: does this work with other brands, or only within your ecosystem? Can it trigger actions in other apps or devices, or does it just respond to questions? Does it understand my preferences across multiple products, or does each device start from scratch? Companies that built real AI ecosystems will brag about cross-device coordination because it is hard to pull off and it is the whole point. Companies selling fake AI will either avoid the topic or try to upsell you on buying everything from them, which is a sign they do not have real interoperability.

6. When in doubt, look for the slop

The rise of AI-generated “slop” gives you a shortcut for spotting lazy AI products: if the marketing materials, product images, or demo videos look AI-generated and low-effort, the product itself is probably shallow too. Merriam-Webster defines slop as low-quality digital content produced in quantity by AI, and it has flooded everything from social media to advertising to product launches. Brands that cut corners on their own marketing by using obviously AI-generated visuals are signaling that they also cut corners on the actual product development.

Watch for telltale signs: weird proportions in product photos, uncanny facial expressions in lifestyle shots, text that sounds generic and buzzword-heavy with no real specifics, and claims that are too good to be true with no technical backing. Real AI products are built by companies that care about craft, and that care shows up in how they present the product. Fake AI products are built by companies chasing a trend, and the slop in their marketing is the giveaway. At CES 2026, trust your instincts: if the booth, the video, or the pitch feels hollow and mass-produced, the gadget probably is too.


How AI Will Be Different at CES 2026: On‑Device Processing and Actual Agentic Productivity

Last year, every other product at CES had a chatbot slapped onto it. Your TV could talk. Your fridge could answer trivia. Your laptop had a sidebar that would summarize your emails if you asked nicely. It was novel for about five minutes, then it became background noise. The whole “AI revolution” at CES 2024 and 2025 felt like a tech industry inside joke: everyone knew it was mostly marketing, but nobody wanted to be the one company without an AI sticker on the booth.

CES 2026 is shaping up differently. Coverage ahead of the show is already calling this the year AI stops being a feature you demo and starts being infrastructure you depend on. The shift is twofold: AI is moving from the cloud onto the device itself, and it is evolving from passive assistants that answer questions into agentic systems that take action on your behalf. Intel has confirmed it will introduce Panther Lake CPUs, AMD CEO Lisa Su is headlining the opening keynote with expectations around a Ryzen 7 9850X3D reveal, and Nvidia is rumored to be prepping an RTX 50 “Super” refresh. The silicon wars are heating up precisely because the companies making chips know that on-device AI is the only way this whole category becomes more than hype. If your gadget still depends entirely on a server farm to do anything interesting, it is already obsolete. Here’s what to expect at CES 2026… but more importantly, what to expect from AI in the near future.

Your laptop is finally becoming the thing running the models

Intel, AMD, and Nvidia are all using CES 2026 as a launching pad for next-generation silicon built around AI workloads. Intel has publicly committed to unveiling its Panther Lake CPUs at the show, chips designed with dedicated neural processing units baked in. AMD’s Lisa Su is doing the opening keynote, with strong buzz around a Ryzen 7 9850X3D that would appeal to gamers and creators who want local AI performance without sacrificing frame rates or render times. Nvidia’s press conference is rumored to focus on RTX 50 “Super” cards that push both graphics and AI inference into new territory. The pitch is straightforward: your next laptop or desktop is not a dumb terminal for ChatGPT; it is the machine actually running the models.

What does that look like in practice? Laptops at CES 2026 will be demoing live transcription and translation that happens entirely on the device, no cloud round trip required. You will see systems that can summarize browser tabs, rewrite documents, and handle background removal on video calls without sending a single frame to a server. Coverage is already predicting a big push toward on-device processing specifically to keep your data private and reduce reliance on cloud infrastructure. For gamers, the story is about AI upscaling and frame generation becoming table stakes, with new GPUs sold not just on raw FPS but on how quickly they can run local AI tools for modding, NPC dialogue generation, or streaming overlays. This is the year “AI PC” might finally mean something beyond a sticker.

Agentic AI is the difference between a chatbot and a butler

Pre-show coverage is leaning heavily on the phrase “agentic AI,” and it is worth understanding what that actually means. Traditional AI assistants answer questions: you ask for the weather, you get the weather. Agentic AI takes goals and executes multi-step workflows to achieve them. Observers expect to see devices at CES 2026 that do not just plan a trip but actually book the flights and reserve the tables, acting on your behalf with minimal supervision. The technical foundation for this is a combination of on-device models that understand context and cloud-based orchestration layers that can touch APIs, but the user experience is what matters: you stop micromanaging and start delegating.
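To make the distinction concrete, here is a toy Python sketch of the two behaviors. Everything in it is hypothetical and heavily simplified: the planner, the tool names, and the canned results stand in for what would be real booking APIs and a real model deciding the steps.

```python
from typing import Callable


def answer_question(question: str) -> str:
    """Chatbot behavior: one question in, one answer out, no actions taken."""
    return f"Here is some information about: {question}"


# Agentic behavior: a goal is decomposed into steps, and each step calls a tool.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_flights": lambda arg: f"found a flight: {arg}",
    "book_flight": lambda arg: f"booked: {arg}",
    "reserve_table": lambda arg: f"reserved: {arg}",
}


def plan_steps(goal: str) -> list[tuple[str, str]]:
    """Stand-in for a planner model that turns a goal into (tool, argument) pairs."""
    return [
        ("search_flights", "SFO to LAS, Jan 6"),
        ("book_flight", "SFO to LAS, Jan 6"),
        ("reserve_table", "dinner for two, Jan 6"),
    ]


def run_agent(goal: str) -> list[str]:
    """Agent loop: plan the steps, then execute each one with minimal supervision."""
    return [TOOLS[tool](argument) for tool, argument in plan_steps(goal)]


print(answer_question("What flights go to Las Vegas?"))  # an answer, no actions
print(run_agent("Get me to CES and book dinner"))         # actions taken on your behalf
```

The point of the toy example is the shape of the loop, not the details: a chatbot stops at the first function, while an agentic system plans, acts, and only reports back once the work is done.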

Samsung is bringing its largest CES exhibit to date, merging home appliances, TVs, and smart home products into one massive space with AI and interoperability as the core message. Imagine a fridge, washer, TV, robot vacuum, and phone all coordinated by the same AI layer. The system notices you cooked something smoky, runs the air purifier a bit harder, and pushes a recipe suggestion based on leftovers. Your washer pings the TV when a cycle finishes, and the TV pauses your show at a natural break. None of this requires you to open an app or issue voice commands; the devices are just quietly making decisions based on context. That is the agentic promise, and CES 2026 is where companies will either prove they can deliver it or expose themselves as still stuck in the chatbot era.

Robot vacuums are the first agentic AI success story you can actually buy

CES 2026 is being framed by dedicated floorcare coverage as one of the most important years yet for robot vacuums and AI-powered home cleaning, with multiple brands receiving Innovation Awards and planning major product launches. This category quietly became the testing ground for agentic AI years before most people started using the phrase. Your robot vacuum already maps your home, plans routes, decides when to spot-clean high-traffic areas, schedules deep cleans when you are away, and increasingly maintains itself by emptying dust and washing its own mop pads. It does all of this with minimal cloud dependency; the brains are on the bot.

LG has already won a CES 2026 Innovation Award for a robot vacuum with a built-in station that hides inside an existing cabinet cavity, turning floorcare into an invisible, fully hands-free system. Ecovacs is previewing the Deebot X11 OmniCyclone as a CES 2026 Innovation Awards Honoree and promising its most ambitious lineup to date, pushing into whole-home robotics that go beyond vacuuming. Robotin is demoing the R2, a modular robot that combines autonomous vacuuming with automated carpet washing, moving from daily crumb patrol to actual deep cleaning. These bots are starting to integrate with broader smart home ecosystems, coordinating with your smart lock, thermostat, and calendar to figure out when you are home, when kids are asleep, and when the dog is outside. The robot vacuum category is proof that agentic AI can work in the real world, and CES 2026 is where other product categories are going to try to catch up.

TVs are getting Micro RGB panels and AI brains that learn your taste

LG has teased its first Micro RGB TV ahead of CES 2026, positioning it as the kind of screen that could make OLED owners feel jealous thanks to advantages in brightness, color control, and longevity. Transparent OLED panels are also making appearances in industrial contexts, like concept displays inside construction machinery cabins, hinting at similar tech eventually showing up in living rooms as disappearing TVs or glass partitions that become screens on demand. The hardware story is always important at CES, but the AI layer is where things get interesting for everyday use.

TV makers are layering AI on top of their panels in ways that go beyond simple upscaling. Expect personalized picture and sound profiles that learn your room conditions, content preferences, and viewing habits over time. The pitch is that your TV will automatically switch to low-latency gaming mode when it recognizes you launched a console, dim your smart lights when a movie starts, and adjust color temperature based on ambient light without you touching a remote. Some of this is genuine machine learning happening on-device, and some of it is still marketing spin on basic presets. The challenge for readers at CES 2026 will be figuring out which is which, but the direction is clear: TVs are positioning themselves as smart hubs that coordinate your living room, not just dumb displays waiting for HDMI input.

Gaming gear is wiring itself for AI rendering and 500 Hz dreams

HDMI Licensing Administrator is using CES 2026 to spotlight advanced HDMI gaming technologies with live demos focused on very high refresh rates and next-gen console and PC connectivity. Early prototypes of the Ultra96 HDMI cable, part of the new HDMI 2.2 specification, will be on display with the promise of higher bandwidth to support extreme refresh rates and resolutions. Picture a rig on the show floor: a 500 Hz gaming monitor, next-gen GPU, HDMI 2.2 cable, running an esports title at absurd frame rates with variable refresh rate and minimal latency. It is the kind of setup that makes Reddit threads explode.

GPUs are increasingly sold not just on raw FPS but on AI capabilities. AI upscaling like DLSS is already table stakes, but local AI is also powering streaming tools for background removal, audio cleanup, live captions, and even dynamic NPC dialogue in future games that require on-device inference rather than server-side processing. Nvidia’s rumored RTX 50 “Super” refresh is expected to double down on this positioning, selling the cards as both graphics and AI accelerators. For gamers and streamers, CES 2026 is where the industry will make the case that your rig needs to be built for AI workloads, not just prettier pixels. The infrastructure layer, cables and monitors included, is catching up to match that ambition.

What CES 2026 really tells us about where AI is going

The shift from cloud-dependent assistants to on-device agents is not just a technical upgrade; it is a fundamental change in how gadgets are designed and sold. When Intel, AMD, and Nvidia are all racing to build chips with dedicated AI accelerators, and when Samsung is reorganizing its entire CES exhibit around AI interoperability, the message is clear: companies are betting that local intelligence and cross-device coordination are the only paths forward. The chatbot era served its purpose as a proof of concept, but CES 2026 is where the industry starts delivering products that can think, act, and coordinate without constant cloud supervision.

What makes this year different from the past two is that the infrastructure is finally in place. The silicon can handle real-time inference. The software frameworks for agentic behavior are maturing. Robot vacuums are proving the model works at scale. TVs and smart home ecosystems are learning how to talk to each other without requiring users to become IT managers. The pieces are connecting, and CES 2026 is the first major event where you can see the whole system starting to work as one layer instead of a collection of isolated features.

The real question is what happens after the demos

Trade shows are designed to impress, and CES 2026 will have no shortage of polished demos where everything works perfectly. The real test comes in the six months after the show, when these products ship and people start using them in messy, real-world conditions. Does your AI PC actually keep your data private when it runs models locally, or does it still phone home for half its features? Does your smart home coordinate smoothly when you add devices from different brands, or does it fall apart the moment something breaks the script? Do robot vacuums handle the chaos of actual homes, or do they only shine in controlled environments?

The companies that win in 2026 and beyond will be the ones that designed their AI systems to handle failure, ambiguity, and the unpredictable messiness of how people actually live. CES 2026 is where you will see the roadmap. The year after is where you will see who actually built the roads. If you are walking the show floor or following the coverage, the most important question is not “what can this do in a demo,” but “what happens when it breaks, goes offline, or encounters something it was not trained for.” That is where the gap between real agentic AI and rebranded presets will become impossible to hide.


Remember “The Ghiblification”? We Treated Ghibli As Disposable Because That’s How We Treat Everything

First, it was cottagecore, filling our feeds with sourdough starters and rustic linen. Then came the sharp, symmetrical pastels of the Wes Anderson trend, followed by a tidal wave of Barbie pink that painted the internet for a summer. Each aesthetic arrived like a weather front, dominating the landscape completely for a short time before vanishing just as quickly, leaving behind only a faint digital echo. They were cultural costumes, tried on for a season and then relegated to the back of the closet.

Into this cycle stepped Studio Ghibli, its decades of patient, handcrafted animation compressed into a one-click selfie generator. The resulting “Ghibli-fication” of our profiles was not a deep engagement with Hayao Miyazaki’s themes of environmentalism and pacifism; it was simply the next costume off the rack. The speed with which we adopted and then abandoned it reveals a difficult truth. Our treatment of Ghibli was a symptom of a much larger cultural pattern, one where even the most profound art is rendered disposable by the internet’s insatiable appetite for the new.

When everything becomes an aesthetic, nothing remains itself

Platforms thrive on legibility. Content needs to be instantly recognizable, easily categorized, and simple enough to reproduce at scale. This creates enormous pressure to reduce complex cultural artifacts into their most surface-level visual markers. A Wes Anderson film becomes “symmetrical shots in pastel.” A hit song from Raye, one that marked her departure from a label and her pursuit of creative freedom, becomes just a fleeting 20-second TikTok dance about rings on fingers and finding husbands. Ghibli’s intricate storytelling about war, labor, and the natural world gets flattened into “soft colors and big eyes.”

The reduction is not accidental. It is the cost of entry into viral circulation. An aesthetic can only spread if it can be copied quickly, applied broadly, and understood immediately. Nuance, context, and depth are friction. They slow down the sharing, complicate the reproduction, and limit the audience. So they get stripped away, not out of malice, but out of structural necessity. What remains is a shell, a visual shorthand that gestures toward the original without containing any of its substance.

This process turns cultural works into raw material. A film, a book, a philosophical tradition, any of these can be mined for their most photogenic elements and reconfigured into something that fits neatly into a grid post or a TikTok filter. The original becomes less important than the aesthetic it can generate. Once the aesthetic stops performing well in terms of engagement metrics, the entire package gets discarded. The algorithm does not care about preservation or reverence. It cares about what is getting clicks and views today.

The appetite that cannot be satisfied

Social media platforms are built around a fundamental economic problem: they need to hold attention, but attention is finite and easily exhausted. The solution is constant novelty. If users get bored, they leave. If they leave, ad revenue drops. So the feed must always be serving something new, something that feels fresh enough to justify another scroll, another click, another few seconds of eyeball time.

This creates a culture of planned obsolescence for aesthetics. A look can only stay interesting for so long before it becomes familiar, then oversaturated, then tiresome. At that point, it has to be replaced. The cycle repeats endlessly, chewing through visual languages, artistic movements, and cultural traditions at a pace that would have been unthinkable even twenty years ago. What took decades to develop can be extracted, popularized, and discarded in a matter of weeks.

The speed of this churn has consequences. It trains us to engage with culture in a particular way: superficially, briefly, and without much attachment. We learn to skim surfaces rather than dig into depths. We participate in trends not because they resonate with us personally, but because participation itself is the point (the ice bucket challenge boosted ALS awareness for precisely 6 months). Being part of the moment, being visible within the current aesthetic wave, these become more valuable than any lasting connection to the work that aesthetic is borrowed from.

What sticks when the wave recedes

The irony is that while trends are disposable, the works they feed on often are not. Ghibli films continue to be watched, analyzed, and loved by new audiences long after the selfie filters have been forgotten. Wes Anderson’s movies did not become less meaningful because people used his color palettes for Instagram posts. The underlying art survives because it contains something that cannot be reduced to a visual shorthand.

What separates durable culture from disposable trends is substance that exceeds its surface. A Ghibli film rewards attention over time. The more you watch, the more you notice: the way labor is animated with dignity, the long quiet stretches that mirror real life’s pace, the refusal to offer simple moral answers. None of that fits in a filter. None of that can be mass-produced. It requires the viewer to bring time, focus, and openness to complexity.

This is what the trend cycle cannot replicate. It can borrow the look, but it cannot borrow the experience. It can create a momentary association with the aesthetic, but it cannot create the slow, layered engagement that builds lasting attachment. So the original work persists beneath the churn, waiting for the people who want more than a costume, who are looking for something to return to rather than something to discard.

Resisting the rhythm of disposability

Recognizing this pattern is not the same as escaping it. We are all embedded in systems that reward rapid consumption and constant novelty. The feed is designed to keep us moving, to prevent us from lingering too long on any one thing. Resisting that rhythm requires deliberate effort, a conscious choice to slow down when everything around us is accelerating.

That resistance can look small and personal: rewatching a film instead of merely watching a snippet of it on YouTube Shorts, reading longform essays instead of liking someone’s reel about it, spending time with art that does not immediately reveal itself. If anything, the pandemic allowed us to spend days culturing sourdough starter so we could bake our bread. The curfew ended and sourdough became a distant memory… but for those 6 months, we actually indulged in immersion. These acts do not change the structure of the platforms, but they change our relationship to culture. They create space for depth in an environment optimized for surface.

The broader question is whether we can build cultural spaces that do not treat everything as disposable. Platforms will not do this on their own; their incentives run in the opposite direction. But audiences, creators, and critics can push back by valuing longevity over virality, by rewarding substance over aesthetic repackaging, by choosing to engage with work in ways that cannot be reduced to a trend cycle.

Ghibli survived its moment as a disposable aesthetic because it was never fully captured by it. The films remain too slow, too strange, too resistant to easy consumption. They stand as a reminder that some things are built to last, even in an environment designed to make everything temporary. The real work is recognizing that difference and choosing to treat what matters accordingly.

The post Remember “The Ghiblification”? We Treated Ghibli As Disposable Because That’s How We Treat Everything first appeared on Yanko Design.

Remember “The Ghiblification”? We Treated Ghibli As Disposable Because That’s How We Treat Everything

First, it was cottagecore, filling our feeds with sourdough starters and rustic linen. Then came the sharp, symmetrical pastels of the Wes Anderson trend, followed by a tidal wave of Barbie pink that painted the internet for a summer. Each aesthetic arrived like a weather front, dominating the landscape completely for a short time before vanishing just as quickly, leaving behind only a faint digital echo. They were cultural costumes, tried on for a season and then relegated to the back of the closet.

Into this cycle stepped Studio Ghibli, its decades of patient, handcrafted animation compressed into a one-click selfie generator. The resulting “Ghiblification” of our profiles was not a deep engagement with Hayao Miyazaki’s themes of environmentalism and pacifism; it was simply the next costume off the rack. The speed with which we adopted and then abandoned it reveals a difficult truth. Our treatment of Ghibli was a symptom of a much larger cultural pattern, one where even the most profound art is rendered disposable by the internet’s insatiable appetite for the new.

When everything becomes an aesthetic, nothing remains itself

Platforms thrive on legibility. Content needs to be instantly recognizable, easily categorized, and simple enough to reproduce at scale. This creates enormous pressure to reduce complex cultural artifacts into their most surface-level visual markers. A Wes Anderson film becomes “symmetrical shots in pastel.” A hit song from Raye (one that marked her break from her label and her turn toward creative freedom) becomes just a fleeting 20-second TikTok dance about rings on fingers and finding husbands. Ghibli’s intricate storytelling about war, labor, and the natural world gets flattened into “soft colors and big eyes.”

The reduction is not accidental. It is the cost of entry into viral circulation. An aesthetic can only spread if it can be copied quickly, applied broadly, and understood immediately. Nuance, context, and depth are friction. They slow down the sharing, complicate the reproduction, and limit the audience. So they get stripped away, not out of malice, but out of structural necessity. What remains is a shell, a visual shorthand that gestures toward the original without containing any of its substance.

This process turns cultural works into raw material. A film, a book, a philosophical tradition, any of these can be mined for their most photogenic elements and reconfigured into something that fits neatly into a grid post or a TikTok filter. The original becomes less important than the aesthetic it can generate. Once the aesthetic stops performing well in terms of engagement metrics, the entire package gets discarded. The algorithm does not care about preservation or reverence. It cares about what is getting clicks and views today.

The appetite that cannot be satisfied

Social media platforms are built around a fundamental economic problem: they need to hold attention, but attention is finite and easily exhausted. The solution is constant novelty. If users get bored, they leave. If they leave, ad revenue drops. So the feed must always be serving something new, something that feels fresh enough to justify another scroll, another click, another few seconds of eyeball time.

This creates a culture of planned obsolescence for aesthetics. A look can only stay interesting for so long before it becomes familiar, then oversaturated, then tiresome. At that point, it has to be replaced. The cycle repeats endlessly, chewing through visual languages, artistic movements, and cultural traditions at a pace that would have been unthinkable even twenty years ago. What took decades to develop can be extracted, popularized, and discarded in a matter of weeks.

The speed of this churn has consequences. It trains us to engage with culture in a particular way: superficially, briefly, and without much attachment. We learn to skim surfaces rather than dig into depths. We participate in trends not because they resonate with us personally, but because participation itself is the point (the ice bucket challenge spiked ALS awareness for a few months, then faded). Being part of the moment, being visible within the current aesthetic wave, these become more valuable than any lasting connection to the work that aesthetic is borrowed from.

What sticks when the wave recedes

The irony is that while trends are disposable, the works they feed on often are not. Ghibli films continue to be watched, analyzed, and loved by new audiences long after the selfie filters have been forgotten. Wes Anderson’s movies did not become less meaningful because people used his color palettes for Instagram posts. The underlying art survives because it contains something that cannot be reduced to a visual shorthand.

What separates durable culture from disposable trends is substance that exceeds its surface. A Ghibli film rewards attention over time. The more you watch, the more you notice: the way labor is animated with dignity, the long quiet stretches that mirror real life’s pace, the refusal to offer simple moral answers. None of that fits in a filter. None of that can be mass-produced. It requires the viewer to bring time, focus, and openness to complexity.

This is what the trend cycle cannot replicate. It can borrow the look, but it cannot borrow the experience. It can create a momentary association with the aesthetic, but it cannot create the slow, layered engagement that builds lasting attachment. So the original work persists beneath the churn, waiting for the people who want more than a costume, who are looking for something to return to rather than something to discard.

Resisting the rhythm of disposability

Recognizing this pattern is not the same as escaping it. We are all embedded in systems that reward rapid consumption and constant novelty. The feed is designed to keep us moving, to prevent us from lingering too long on any one thing. Resisting that rhythm requires deliberate effort, a conscious choice to slow down when everything around us is accelerating.

That resistance can look small and personal: rewatching a film instead of merely watching a snippet of it on YouTube Shorts, reading longform essays instead of liking someone’s reel about it, spending time with art that does not immediately reveal itself. If anything, the pandemic gave us time to spend days culturing sourdough starter so we could bake our own bread. The lockdowns ended and sourdough became a distant memory… but for those months, we actually indulged in immersion. These acts do not change the structure of the platforms, but they change our relationship to culture. They create space for depth in an environment optimized for surface.

The broader question is whether we can build cultural spaces that do not treat everything as disposable. Platforms will not do this on their own; their incentives run in the opposite direction. But audiences, creators, and critics can push back by valuing longevity over virality, by rewarding substance over aesthetic repackaging, by choosing to engage with work in ways that cannot be reduced to a trend cycle.

Ghibli survived its moment as a disposable aesthetic because it was never fully captured by it. The films remain too slow, too strange, too resistant to easy consumption. They stand as a reminder that some things are built to last, even in an environment designed to make everything temporary. The real work is recognizing that difference and choosing to treat what matters accordingly.


10 Iconic Frank Gehry Buildings That Celebrate The Late “Starchitect’s” Legacy

Frank Gehry’s death feels like a seismic event, even to people who never learned his name but knew “that crazy silver building” in their city. Born in Toronto in 1929 and raised in Los Angeles, he moved through the twentieth century like a restless experiment in motion, turning cardboard models into titanium-clad landmarks and treating cities as full-scale sketchbooks. His passing closes a chapter in which architecture stopped pretending to be purely rational infrastructure and allowed itself to be emotional, unstable, and sometimes gloriously impractical.

What lingers most is not only the spectacle of his work but the shift in attitude it made possible. Gehry treated architecture as a narrative medium, not a neutral backdrop; every warped surface and improbable curve suggested a story about risk, uncertainty, and delight. He pushed software, fabrication, and engineering to their limits long before “parametric design” became a buzzword, yet he remained suspicious of fashion and theory, insisting that buildings should be humane, tactile, and a bit mischievous. The structures he leaves behind do more than house art, music, or offices; they continue to provoke arguments, civic pride, and sometimes outrage, which may be the clearest sign that they are very much alive.

Gehry’s legacy is also institutional and generational. He helped reframe what a “starchitect” could be: not just a brand attached to luxury clients, but a public figure whose work could catalyze urban reinvention, as Bilbao discovered, or reshape how a city thinks about its cultural core, as Los Angeles learned. Dozens of younger architects cite him less for his specific forms than for his license to be disobedient, to treat the brief as a starting point rather than a boundary. In that sense, his death does not simply mark an ending; it underlines how thoroughly his once-radical sensibility has seeped into the mainstream of contemporary design.

As we return to his most iconic works, what becomes clear is how consistent his obsessions were across wildly different contexts. Light, movement, and the choreography of how a body moves through space preoccupied him as much as façades ever did. In his absence, the buildings remain as articulate as any obituary, each one a frozen fragment of his ongoing argument with gravity, convention, and taste. They stand not as monuments in the solemn sense, but as restless objects that still seem to be in the process of becoming something else.

Guggenheim Museum, Bilbao, Spain

A veritable masterpiece, the Guggenheim Museum in Bilbao redefined the very essence of museum architecture. Clad in shimmering titanium, limestone, and glass, its fluid form and undulating surfaces transformed the post-industrial city of Bilbao into a global cultural hub. Beyond its exterior, the museum offers a labyrinth of interconnected spaces, providing a dynamic environment for art display and contemplation, where visitors are constantly reoriented by shifting scales, vistas, and shafts of light.

The so-called “Bilbao Effect” grew out of this building, turning a risky cultural investment into a template for urban reinvention that countless cities tried to emulate, with varying success. The Guggenheim’s success lies not just in its photogenic skin, but in the way it engages the river, the bridges, and the city’s once-neglected waterfront, stitching art into the daily life of Bilbao. Inside, Gehry’s vast gallery volumes proved unexpectedly flexible, accommodating everything from monumental sculpture to delicate installations, and showing that radical form could coexist with curatorial practicality.

Walt Disney Concert Hall, Los Angeles, USA

Situated in Los Angeles’ cultural corridor, the Walt Disney Concert Hall is an architectural symphony in stainless steel. Its sculptural, sail-like exterior rises from the street as if peeled up from the city grid, catching the famously sharp Southern California light and scattering it in soft, shifting reflections. The building’s complex geometry masks a remarkably clear organization, guiding audiences from the plaza and terraces into the heart of the hall through a sequence of compressed entries and soaring atriums.

Inside, the vineyard-style auditorium, wrapped in warm Douglas fir and oak, embodies Gehry’s close collaboration with acoustician Yasuhisa Toyota and the Los Angeles Philharmonic. The space is both intimate and monumental; the orchestra feels almost surrounded by the audience, and the sound is prized for its clarity and warmth. The organ, with its forest of asymmetrical wooden pipes, doubles as sculpture, echoing the exterior’s exuberance. Disney Hall did more than give Los Angeles a world-class concert venue; it anchored the city’s identity as a serious cultural capital and remains one of the rare buildings where musicians, critics, and everyday concertgoers are equally enthusiastic.

Dancing House, Prague, Czech Republic

In the heart of Prague, a city steeped in historic architectural grandeur, the Dancing House emerges as a contemporary icon. Its deconstructed silhouette, often likened to a dancing couple, stands in deliberate contrast to the neighboring Baroque and Gothic facades, signaling Prague’s evolving architectural narrative. The building’s glass “Ginger” leans into the stone “Fred,” creating a sense of motion that feels almost cinematic against the calm rhythm of the riverfront.

Beyond the playful metaphor, the Dancing House operates as a careful negotiation between old and new. Gehry and co-architect Vlado Milunić threaded the building into its tight urban site, respecting existing cornice lines while fracturing the expected symmetry and order. Offices occupy much of the interior, but the rooftop restaurant and terrace open the building to the public, offering panoramic views that reframe the city’s historic skyline. In a place where modern interventions are often contentious, the Dancing House has gradually shifted from scandal to beloved oddity, proving that contemporary architecture can coexist with, and even refresh, a deeply layered urban fabric.

Fondation Louis Vuitton, Paris, France

Gehry’s Fondation Louis Vuitton is a testament to the confluence of art, architecture, and landscape. Resembling a futuristic ship moored in the Bois de Boulogne, its glass “sails” seem to billow in the wind, catching reflections of trees, sky, and water. Set within the historic Jardin d’Acclimatation, the building plays a game of concealment and revelation; from some angles it appears almost transparent, from others it asserts itself as a crystalline object hovering above the park.

Inside, a series of white, box-like galleries are wrapped by the glass sails and linked through terraces, stairways, and bridges, creating a rich sequence of indoor-outdoor experiences. The museum’s program of contemporary art and performance takes advantage of these varied spaces, from intimate rooms to large, flexible volumes. At night, the Fondation becomes a lantern in the forest, a glowing presence that underscores Gehry’s fascination with light as a building material. It also represents a late-career synthesis for him: digital design and fabrication techniques are pushed to the extreme, yet the result feels surprisingly light, almost improvised, rather than technologically overdetermined.

Binoculars Building, Venice, Los Angeles, USA

Characterized by its monumental binocular facade, this office building exemplifies Gehry’s mischievous side. The structure is a hybrid of architecture and sculpture, with the colossal binoculars, originally a work by Claes Oldenburg and Coosje van Bruggen, serving as the principal entrance. Cars and pedestrians pass through the lenses, turning a familiar object into an inhabitable threshold and gently mocking the solemnity usually associated with corporate architecture.

The rest of the building, composed of irregular volumes clad in rough stucco and brick, plays foil to the central object, creating a streetscape that feels more like an assemblage of found pieces than a single, unified block. Over the years, the building has housed creative offices, including tech tenants, and has become a kind of mascot for the neighborhood’s informal, experimental energy. It demonstrates Gehry’s comfort with pop culture and humor, and his willingness to let another artist’s work literally occupy center stage, reinforcing his belief that architecture can be a generous collaborator rather than a jealous frame.

Lou Ruvo Center for Brain Health, Las Vegas, USA

In a city known for its flamboyant spectacles, the Lou Ruvo Center for Brain Health stands out with its cascading stainless steel forms that seem to melt and twist in the desert sun. The building is split into two distinct parts: a relatively rectilinear clinical wing that houses examination and treatment rooms, and a wildly contorted event hall whose warped grid and skewed windows evoke the tangled pathways of the brain. This juxtaposition turns the complex into a physical metaphor for cognitive disorder and the search for clarity within it.

Beyond its sculptural bravado, the center represents an attempt to bring architectural attention and philanthropic energy to the often invisible struggles of neurological disease and dementia. The event space helps fund the medical and research programs, hosting gatherings that place patients’ stories at the center of civic life. For Gehry, who spoke publicly about friends and family affected by these conditions, the project had a personal resonance, and it shows in the building’s emotional charge. It is one of the clearest examples of his belief that dramatic form can serve not just commerce or culture, but also care and advocacy.

Neuer Zollhof, Düsseldorf, Germany

Overlooking Düsseldorf’s MedienHafen, the Neuer Zollhof complex showcases Gehry’s skill at composing buildings as a kind of urban sculpture. The trio of towers, each with its own material identity in white plaster, red brick, and shimmering stainless steel, appears to lean and sway, as if the harbor winds had pushed them out of alignment. Their undulating facades break up reflections of sky and water, adding a kinetic quality to what might otherwise be a static office district.

At the ground level, the buildings carve out irregular courtyards and passages that encourage wandering rather than straight-line commuting. This porousness allows the waterfront to feel more public, less like a sealed-off corporate enclave. Over time, Neuer Zollhof has become a visual shorthand for Düsseldorf’s transformation from industrial port to media and design hub, appearing in tourism imagery and local branding. The ensemble illustrates how Gehry could work at the scale of a neighborhood, not just a single object, using repetition and variation to give a district a distinct identity without lapsing into monotony.

Weisman Art Museum, Minneapolis, USA

The Weisman Art Museum at the University of Minnesota is a compact manifesto of Gehry’s interest in reflective surfaces and fractured forms. From the campus side, the building presents a relatively calm brick facade that aligns with neighboring structures, but facing the Mississippi River it explodes into a cascade of stainless steel planes. These facets catch the Midwestern light in constantly changing patterns, so the museum’s appearance shifts dramatically between bright winter mornings and long summer evenings.

Inside, the galleries are more restrained than the exterior might suggest, with white walls and straightforward geometries that accommodate a diverse collection, including American modernism and Native American art. The contrast between the calm interior and the exuberant shell underscores Gehry’s understanding that museums must serve art first, even when they are iconic objects in their own right. For the university and the city, the Weisman has become a landmark visible from bridges and river paths, a reminder that serious academic institutions can also embrace a bit of visual risk.

Vitra Design Museum, Weil am Rhein, Germany

Situated on the Vitra Campus, the Vitra Design Museum is one of Gehry’s earliest European works and a key piece in his evolution toward the more fluid forms of later years. The small building is composed of intersecting white plastered volumes, pitched roofs, and cylindrical elements, all twisted and stacked in a way that feels both familiar and disorienting. It reads like a collage of fragments from traditional architecture, reassembled into a dynamic, almost cubist object.

The museum’s interiors are intimate and idiosyncratic, with sloping ceilings and unexpected vistas that suit exhibitions on furniture, industrial design, and everyday objects. As part of a campus that later attracted buildings by Zaha Hadid, Tadao Ando, and others, Gehry’s museum helped establish Vitra’s reputation as a patron of experimental architecture. The project also marked one of the first major uses of his now-signature white sculptural volumes in Europe, setting the stage for the more complex geometries of Bilbao and beyond while reminding us that his work has always been as much about composition and light as about metallic skins.

8 Spruce Street (Beekman Tower), New York, USA

Rising above Lower Manhattan’s skyline, 8 Spruce Street, often branded as New York by Gehry, demonstrates his ability to bring a sense of movement to the rigid logic of the skyscraper. Its rippling stainless steel facade wraps a conventional concrete frame, creating the illusion of draped fabric caught in a vertical breeze. As daylight moves across the tower, the folds deepen and flatten, giving the building a constantly shifting presence against the more static grid of neighboring high-rises.

Inside, the residential tower combines rental apartments with amenities that were, at the time of its completion, notably generous for downtown living, including schools and community facilities at the base. The project signaled a shift in Lower Manhattan from a primarily financial district to a more mixed, residential neighborhood, and it showed that expressive architecture did not have to be reserved for cultural institutions or luxury condos. By applying his vocabulary to everyday housing, Gehry suggested that the pleasures of complex form and careful detailing could, at least occasionally, reach beyond elite enclaves and into the fabric of ordinary urban life.


Pantone’s 2026 Color of the Year Finally Admits We’re All Exhausted

Pantone has officially called it: the prevailing mood for 2026 is exhaustion. This marks a sharp departure from recent years, when the annual announcement felt like a conversation happening in a different room. The world was navigating a pandemic hangover and digital burnout, while Pantone was prescribing electric purples for creativity and defiant magentas for bravery. Each choice, while commercially friendly, felt like a wellness influencer telling a tired person to simply manifest more energy.

This year, however, their choice of Cloud Dancer, a soft, billowy white, functions less like a statement and more like a surrender. It is the color of a blank page, an empty inbox, a quiet sky, a white flag, if you will. By choosing a hue defined by its peaceful lack of saturation, Pantone is finally acknowledging the dominant cultural mood – burnout. They are admitting that the most aspirational feeling right now is not vigor or joy, but rest.

Designer: Pantone

To understand why this feels so significant, you have to look at the recent track record. The disconnect between Pantone’s narrative and the world’s reality has been at the core of the critique I made back in 2022, when I called Pantone’s Very Peri an exercise in blind futility. The argument was that Pantone was no longer reading culture but trying to write it, pushing a top-down color prophecy that served its own marketing ecosystem more than it reflected any genuine grassroots sentiment. This critique felt especially potent with the last two selections, Peach Fuzz and Mocha Mousse.

Peach Fuzz, the choice for 2024, was sold with a story of tenderness, community, and tactile comfort. It was a lovely, gentle shade, but it landed in a year defined by rising inflation, geopolitical instability, and a pervasive anxiety about the acceleration of artificial intelligence. The narrative felt like a beautifully packaged lie of omission. Then came Mocha Mousse for 2025, a comforting brown meant to evoke groundedness and stability. It was a safe, aesthetically pleasing choice that aligned perfectly with coffee-shop interior trends, but it felt more like an algorithmic pick from a Pinterest board labeled “cozy” than a meaningful cultural statement. It was a color for a lifestyle, not for a life.

Which brings us to Cloud Dancer. On the surface, choosing white seems like the ultimate cop-out. It is the absence, the default, the non-choice. But Pantone’s justification is, for the first time in a long while, deeply resonant. Leatrice Eiseman, the executive director of the Pantone Color Institute, describes it as a “conscious statement of simplification” meant to provide “release from the distraction of external influences.” Laurie Pressman, the vice president, is even more direct, stating, “We’re looking for respite, looking for relief… we just want to step back.”

This is not the language of aspirational marketing; it is the language of burnout. Pantone is explicitly naming the problem: overstimulation, digital noise, and the overwhelming “cacophony that surrounds us.” Cloud Dancer is positioned as the visual antidote, a quiet space in a world that refuses to shut up. It is a breath of fresh air, a lofty vantage point above the chaos. By framing the color as a tool for focus and a symbol of a much-needed pause, Pantone has shifted from prescribing an emotion to validating one. It feels less like they are telling us how to feel and more like they are saying, “We hear you. You’re tired.”

Of course, we should not mistake this newfound self-awareness for a complete abandonment of the marketing machine. The Color of the Year is, and always will be, a commercial enterprise. But the choice of Cloud Dancer is a savvier, more sophisticated move. Choosing white cleverly sidesteps the pressure to project forced optimism. It aligns perfectly with existing design trends like soft minimalism and quiet luxury, making it an easy sell to brands. Most importantly, it allows Pantone to craft a story about retreat and renewal, a narrative that feels both authentic and highly marketable in a wellness-obsessed culture.

So, is the ‘marketing fluff’ gone? Not entirely. But it has been supplemented with something much more compelling. Instead of a tone-deaf declaration, we have a confession that feels a little more aware of a global sentiment. Cloud Dancer works because it is an admission of defeat. It is a white flag, a symbol of surrender to the relentless pace of modern life. In a world saturated with color, demanding our attention at every turn, the most radical and desired hue might just be the one that asks for nothing. Pantone did not just pick a color for 2026; it picked a feeling, and for the first time in a long time, it feels like our own.


Nothing Phone 3a Lite or CMF Phone 2 Pro? The Choice Is Just Glyph vs. Zoom

Glyph Light, more like Glyph Gaslight… Nothing just dropped its fifth phone this year, the 3a Lite, and the instant I looked at it, I was first shocked… then confused. Shocked because the phone looks exactly like Nothing’s CMF Phone 2 Pro. No, seriously, the camera placement is EXACTLY the same, the chipset is the same, and the battery, the screen, and most of the internals are the same. It took a full minute for my shock to subside before it was replaced by confusion. Why? Why would Nothing introduce a ‘new’ phone into its lineup when it’s selling the exact same phone (for the exact same price) under its sub-brand?

I have no definite answers (we’re waiting for Carl Pei to reveal his underlying strategy), which is why it honestly feels so confusing. Two phones, practically twins (with just a handful of small differences), running the same software on the same hardware for the same price. It goes against Nothing’s entire vision of disrupting the tech space by producing game-changing tech that injects fun back into the category. Tech that builds a design-centric audience. Tech that prides itself on transparency. The fact that the Nothing Phone 3a Lite is just a ‘rebadged’ (and I use that term deliberately) version of the CMF Phone 2 Pro feels like the opposite of transparent.

Designer: Nothing

Nothing Phone 3a Lite (Left) vs CMF Phone 2 Pro (Right)

Here’s where the phones are identical. They both have the same screen – an FHD+ 6.77″ AMOLED running at 120Hz with 300 nits max brightness. They both have the same chip too, a MediaTek Dimensity 7300 Pro with 8 cores. Both phones come with 8GB of RAM and max out at 256GB of storage. The OS is the same too, Nothing OS 3.5 (with a 6-year software update promise)… and even the battery is exactly the same, a 5,000mAh cell with 33W fast charging and 5W reverse wired charging. No wireless charging on either of the models. As far as the cameras go, the placement (if you look below) is the same too. Two of the three lenses in the camera array are the same, a 50MP main and an 8MP ultrawide. The front has a 16MP shooter on both. And both phones pack that Essential button on the side that Nothing began rolling out this year. On paper, it’s as if you were looking at a Xiaomi vs Redmi phone, or a Huawei vs Honor phone. The same build, barring a few minor cosmetic changes.

Nothing Phone 3a Lite (Left) vs CMF Phone 2 Pro (Right)

The changes aren’t drastic, but they’re worth noting. For starters, the third camera is where the CMF Phone 2 Pro and the Nothing Phone 3a Lite differ. While the CMF gizmo packs a nifty 50MP telephoto lens, the 3a Lite swaps that out for a 2MP macro lens. That’s while keeping the price exactly the same, so make of that what you will. Meanwhile, look above and you’ll notice that the flash gets moved just a couple of notches downwards on the 3a Lite, so I’d assume most cases for the Phone 2 Pro will work seamlessly on the 3a Lite if they have a continuous cutout for the camera and the flash. Barring these two features, the design (obviously) is the most noticeable difference. The CMF phone sports a plastic back with the customizable modular design, while the Nothing phone resorts to its thematic transparent rear, with a glass back. The 3a Lite also has the Glyph, although instead of an interface it’s just a tiny little dot in the bottom right corner. The final difference lies in the offerings – the CMF Phone 2 Pro comes in 4 colors and a single spec variant, a 256GB model. The Nothing Phone 3a Lite comes in just Black or White, although you can choose between a 256GB model or a lower 128GB model that’s just €30 cheaper.

So why exactly did Nothing go down this road? All I can do is speculate, but the more I do, the more I’m inclined to believe that this is a diversity play rather than an innovation play. The company wants to cover the market with as many phones as possible across the price spectrum. Currently, the 3a and 3a Pro represent the budget range, but not the sub-$300 category. People who are fans of the transparent phone design wouldn’t want to settle for a CMF phone, even though it’s objectively the better of the two we’re comparing here today. If you told me I had to choose between a glass back with a small blinking LED and a plastic-backed phone that packs a 50MP telephoto camera, the choice wouldn’t be a tough one at all.
