AI-powered Conway’s Arcade not only plays classic games, it invents them in real-time

Arcade machines once thrived as cultural objects as much as entertainment devices, combining bold industrial design and tactile controls to pull people into endless play. Over time, those cabinets became symbols of fixed experiences, each game defined by predictable patterns and tightly scripted outcomes. Conway’s Arcade revisits that familiar physical form but challenges the very idea of what an arcade game is supposed to be, treating computation not as hidden infrastructure but as the driving force behind play itself.

Created by technology agency SpecialGuestX for Google, Conway’s Arcade is a generative gaming installation that transforms classic arcade logic into an evolving, rule-based system. Unveiled at the NeurIPS 2025 conference, the project was designed to communicate complex computational ideas through direct interaction, replacing static gameplay with experiences that emerge in real time.

Designer: SpecialGuestX

Instead of loading pre-existing games, the system generates new gameplay variations inspired by well-known titles such as Space Invaders, Breakout, Flappy Bird, and the Chrome Dino game, recomposing each game’s mechanics through adaptive logic rather than replaying fixed code. The conceptual backbone of Conway’s Arcade is John Conway’s Game of Life, a mathematical model in which a few simple rules governing cells on a grid lead to unexpectedly complex patterns.
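The rules behind that complexity are short enough to write down. As a minimal sketch in Python (illustrative only; the installation’s actual engine is not public), one generation of the Game of Life can be computed from a set of live cells:

```python
# Minimal Conway's Game of Life step on a sparse set of live cells.
# Illustrative sketch only -- not SpecialGuestX's actual engine.

def neighbors(cell):
    """The eight cells surrounding a given (x, y) cell."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Apply Conway's rules: a live cell with 2-3 live neighbors
    survives; a dead cell with exactly 3 live neighbors is born."""
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in live)}

# A horizontal "blinker" oscillates with its vertical counterpart:
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```

Run on a horizontal “blinker,” one step produces its vertical counterpart and a second step restores the original: a tiny example of how fixed rules generate ongoing behavior.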

SpecialGuestX translated this principle into a playable framework where movement, collision, and behavior are determined dynamically rather than scripted in advance. Player input influences how these rules evolve, meaning each session becomes a unique computational outcome rather than a repeatable level sequence. Familiar visual language and controls anchor the experience, while the underlying logic continually reshapes how the game behaves.

This generative approach is powered by adaptive systems that respond to interactions in real time, making arcade gameplay feel intuitive while remaining unpredictable. Players begin to sense patterns and relationships as they play, learning the logic through experimentation rather than instruction. The result is an experience that rewards curiosity, turning gameplay into a form of exploration rather than mastery over fixed mechanics.

The physical design of Conway’s Arcade reinforces this philosophy. The cabinet is constructed entirely from aluminum and designed as a lightweight, modular structure that can be assembled by a single person in under an hour. Fabricated by Barcelona-based workshop 6punyales, the hardware balances durability with portability, making it suitable for exhibitions and travel. Mechanical joysticks, tactile buttons, and red latched switches reference classic arcade interfaces, while clean lines, exposed structure, and a custom typeface give the machine a distinctly contemporary presence.

Visuals follow a restrained 8-bit aesthetic, not as nostalgia for its own sake but as a clear, readable interface for generative behavior. On screen, game elements act like independent agents within a system, making the effects of rule changes visible and understandable. Rather than hiding computation behind spectacle, Conway’s Arcade puts logic on display, using play as the medium for comprehension.

Commissioned by Google and presented to an audience deeply familiar with artificial intelligence and machine learning, Conway’s Arcade succeeds by making abstract ideas accessible. It reframes the arcade cabinet as a tool for communication, showing how simple rules can generate complexity, creativity, and the element of surprise.

The post AI-powered Conway’s Arcade not only plays classic games, it invents them in real-time first appeared on Yanko Design.

AI-powered Razer Motoko headphones with 4K cameras do what smart glasses can

Every year at CES, Razer has some exciting tech on offer. This year is no different, as they’ve come up with headphones that go beyond audio nirvana. Dubbed Project Motoko, the over-ear headphones are the next frontier of wearable AI, since they have eyes. Yes, the concept cans are loaded with a pair of Sony 4K cameras with 12MP resolution, giving you good reason to ditch your smart glasses. Since most of us wear headphones more than smart glasses, this innovation makes complete sense.

AI is the name of the game at this year’s CES, even though we’ve seen machine learning crammed into products where it makes no sense or adds no real value. The Motoko headphones are different, as they build on an accessory we already use a lot. The built-in cameras analyse the world around you, seeing what the user sees in first-person view. They can do pretty much what other smart glasses can, and yes, they play ear-pleasing music when you need to zone out.

Designer: Razer

According to Nick Bourne, Razer’s Global Head of Mobile Console Division, “By partnering with Qualcomm Technologies, we’re building a platform that enhances gameplay while transforming how technology integrates into everyday life. This is the next frontier for immersive experiences.”

Motoko headphones can do translations in real time, beam weather updates, provide navigation input, and a whole lot more. The biggest advantage the Motoko has over most smart glasses is that it can fetch information from multiple AI assistants, including Grok, Gemini, ChatGPT, Perplexity, and Meta. Basic functions, like checking calendar updates and schedules, run on the headphones themselves; for more demanding tasks, you have to pair them with a phone or PC. For the most part, an unsuspecting observer won’t be able to tell the difference between these and a normal pair of headphones.

The headphones are built on the Qualcomm Snapdragon platform, making them AI-powered in a real sense. They can recognize objects, track exercises, or even summarize information. Stereoscopic vision extends the field of view beyond human peripheral vision, and combined with input from the near- and far-field microphones, the headphones detect dialogue, voice commands, and ambient noise. All of this data feeds machine-learning applications that ultimately assist the user in daily tasks, work, and, of course, gaming. Down the line, you could be using them for preparing meals in the kitchen, immersive AI guidance in strategy games, or real-time translation when travelling abroad.

As per Ziad Asghar, SVP and GM of XR at Qualcomm Technologies, they are thrilled to work with Razer to push “AI wearable computing into a new era where intelligence, performance, and immersive experiences converge.” There is no word yet on a probable release timeline for the headphones, but they are definitely exciting tech to experiment with, and it will be interesting to see how deep the AI assistance goes once we can try them hands-on.


Record-setting Pocket Lab shrinks a full AI supercomputer to the size of a power bank

We have come a long way from computers the size of entire rooms to the sleek personal computers that sit comfortably on our desks. The evolution of computing has consistently pushed toward smaller form factors and greater efficiency. The Mac mini, for example, illustrates how compact modern PCs have become. Yet the question persists: how small can a powerful computing device truly be? A recent Guinness World Records certification offers a striking answer.

Tiiny AI, a US-based deep-tech startup, has unveiled the Pocket Lab, officially verified as the “world’s smallest personal AI supercomputer.” This palm-sized device, no larger than a typical power bank, is capable of running large language models (LLMs) with up to 120 billion parameters entirely on-device, without relying on cloud servers or external GPUs.

Designer: Tiiny AI

At its core, the Pocket Lab aims to make advanced artificial intelligence both personal and private. Traditional AI systems often depend on cloud infrastructure, which can raise concerns around data privacy, latency, and carbon emissions associated with large server farms. The Pocket Lab addresses these issues by enabling fully offline AI computation. All processing, data storage, and inference happen locally on the device, reducing dependence on internet connectivity or cloud resources.

Despite its compact size, measuring 14.2 × 8 × 2.53 centimeters and weighing roughly 300 grams, this mini supercomputer delivers noteworthy computing power. The system operates within a typical 65-watt energy envelope, comparable to a conventional desktop PC, yet manages to support extensive AI workloads. The hardware architecture combines a 12-core ARMv9.2 CPU with a custom heterogeneous module that includes a dedicated Neural Processing Unit (NPU), together achieving approximately 190 TOPS (tera-operations per second) of AI compute performance. This configuration is backed by 80 GB of LPDDR5X memory and a 1 TB solid-state drive, allowing large AI models to run efficiently without external accelerators.
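Taking the stated specs at face value, a quick back-of-the-envelope calculation (my arithmetic, not a figure Tiiny AI quotes) gives the device’s compute-per-watt efficiency:

```python
# Rough efficiency estimate from the published specs:
# ~190 TOPS of AI compute within a 65-watt power envelope.
tops = 190                      # tera-operations per second (stated)
watts = 65                      # typical power envelope (stated)
tops_per_watt = tops / watts
print(round(tops_per_watt, 2))  # ~2.92 TOPS per watt
```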

Two key technologies underpin the Pocket Lab’s ability to run large models so efficiently in such a small package. TurboSparse improves inference efficiency through neuron-level sparse activation, reducing computational overhead while preserving model intelligence. PowerInfer, an open-source heterogeneous inference engine with a significant developer following, dynamically distributes workloads across the CPU and NPU, delivering server-grade performance at far lower power and cost than traditional GPU-based solutions.
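The core idea behind neuron-level sparse activation can be shown with a toy example: in a ReLU layer, many neurons output zero anyway, so a predictor that flags the likely-active neurons lets the engine skip the dot products for the rest. This is a deliberately simplified sketch, not the actual TurboSparse or PowerInfer code:

```python
# Toy illustration of neuron-level sparse activation in a ReLU layer.
# Simplified for illustration -- not the TurboSparse algorithm itself.

def dense_relu(W, x):
    """Standard layer: compute every neuron, then apply ReLU."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W]

def sparse_relu(W, x, active):
    """Compute only neurons a predictor marked active; the rest are
    assumed to output 0 (their ReLU would have clipped them anyway)."""
    return [max(0.0, sum(w * xi for w, xi in zip(W[i], x)))
            if i in active else 0.0
            for i in range(len(W))]

W = [[1.0, -2.0], [-3.0, 1.0], [2.0, 0.5]]  # 3 neurons, 2 inputs
x = [1.0, 1.0]
# Dense result: [0.0, 0.0, 2.5] -- only neuron 2 actually fires.
# A perfect predictor marks only that neuron, skipping 2/3 of the work:
print(sparse_relu(W, x, {2}))  # [0.0, 0.0, 2.5]
```

With an accurate activation predictor, the sparse path reproduces the dense output while doing a fraction of the arithmetic, which is what makes this attractive on power-constrained hardware.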

In practical terms, the Pocket Lab supports a wide ecosystem of open-source AI models and tools. Users can deploy popular LLMs such as GPT-OSS, Llama, Qwen, DeepSeek, Mistral, and Phi, alongside agent frameworks and automation tools, all with one-click installation. This broad software compatibility extends the device’s usefulness beyond enthusiasts and hobbyists to developers, researchers, professionals, and students.

By storing all user data and interactions locally with bank-level encryption, the Pocket Lab also emphasizes privacy and long-term personal memory. This feature contrasts with many cloud-based AI services that retain data on remote servers. Tiiny AI plans to showcase the Pocket Lab at CES 2026, but has not yet disclosed full details on pricing or release dates.
