A first look at Metro 2039 shows how its Ukrainian developer turned the darkness up to 11

If the real world isn’t grim enough for you, Ukrainian developer 4A Games has your back: Metro 2039 has been announced and is scheduled to arrive this winter. And based on the developer’s first look at the title, Metro 2039 looks to be an even darker affair than previous titles in the series. A tall order, but the real-world turmoil that has enveloped 4A Games since Russia’s invasion of Ukraine has seemingly become a painful source of inspiration for the developer.

The lengthy cinematic reveal, which also contains a brief bit of gameplay at the end, doesn’t give much of the story away. But it does serve to place you right in the ruined, terrifying world of the Metro series. Metro 2039 is set about 25 years after a nuclear apocalypse wiped out most life on the planet. The series focuses on survivors who live in Moscow’s ruined metro system. 4A says that this time out, the different underground factions have been united by a group known as “the Novoreich,” complete with a new ruler, the Spartan known as Hunter.

Despite Hunter promising “salvation and a new life” for the survivors left on the surface, things aren’t exactly rosy underground. As you might expect, this supposedly “united” society is still a complete disaster, with propaganda, authoritarian rule and violence the hallmark of the regime.

Screenshot from Metro 2039.
4A Games

The Metro series is based on novels by Dmitry Glukhovsky, a Russian author who has lived in exile since publicly denouncing Russia’s invasion of Ukraine. 4A Games says that while this new game isn’t based specifically on one of his works, it collaborated with Glukhovsky on a story for Metro 2039 “shaped by shared values of freedom and truth, and informed by the harsh realities of the world today.”

In statements from the studio, 4A directly acknowledges the conditions that Metro 2039 was created under. “Many developers continue to work from multiple locations, facing daily challenges never anticipated,” the studio says. “Through power outages, reliance on generators, and disruptions from missile and drone attacks, development has continued – driven by resilience, shared support, and a commitment to the work.”

It goes on to state that: “The war has directly shaped the development of Metro 2039, with its story focused acutely on choices, actions, consequences, and the cost of securing a future. While told from a distinctly Ukrainian perspective, Metro 2039 remains an authentic Metro story.” While the Metro series has been unfailingly bleak, it’s not hard to imagine how Russia’s invasion could have influenced the storytelling coming out of a Ukrainian studio with an exiled Russian author on the story team. But the limited bit of the game we’ve seen so far doesn’t make anything too explicit.

Screenshot from Metro 2039's reveal trailer.
4A Games

The trailer shows off the new player-character known as The Stranger, the first voiced protagonist in the series (though we don’t hear him do anything but scream in the preview). The Stranger has apparently been surviving in the above-ground wasteland but is forced to return to the metro. The little bit of gameplay we saw was the standard first-person shooter view of The Stranger heading underground to be immediately ambushed by a pretty horrific monster that he barely escapes from — he’s then dragged to “safety” by a group of survivors who just get the doors to their shelter shut before being overrun by a larger horde. Creepy stuff.

The rest of the preview largely feels like a dream (or nightmare) sequence — but while it’s hard to put together what is going on, there’s no doubt that the detail in the environments and characters is top-notch. Given that the last Metro game, Metro Exodus, was released way back in 2019, it’s fair to say that we’re getting a more graphically impressive rendering of ruined Moscow and the tunnels beneath it.

There’s no exact release date yet, but 4A Games says Metro 2039 will arrive this winter for Xbox Series X/S, PlayStation 5 and PC.

This article originally appeared on Engadget at https://www.engadget.com/gaming/a-first-look-at-metro-2039-shows-how-its-ukrainian-developer-turned-the-darkness-up-to-11-171500713.html?src=rss

OpenAI’s latest Codex update builds the groundwork for its upcoming super app

Last month, following reporting from The Wall Street Journal, OpenAI confirmed it was working on a desktop super app that would combine ChatGPT, its Codex coding agent and Atlas web browser into one cohesive experience. OpenAI is not releasing that application today. Instead, it's pushing out a major update to Codex that significantly expands what that software can do. However, the new release offers a glimpse of what OpenAI hopes to build with its latest effort.  

"We're building the super app out in the open," said Thibault Sottiaux, the head of Codex, during a press briefing held by OpenAI. "This release is about developers. In the future, we will broaden it up to a wider audience." Until then, the latest version of Codex offers developers multi-purpose AI agents that can work across a "larger surface area," while being more proactive. In practice, that translates to a host of new capabilities, starting with computer use. 

The agents inside Codex can interact with other apps on your PC. When prompting one of OpenAI's models, you can name a specific program or let the agent determine the best application for the job. Computer use is available in competing apps like Claude Cowork, but OpenAI believes Codex's edge in that department is the "secret sauce" it built to allow an agent to run an app without bogging down your entire system, so the two of you can work in tandem. At the same time, OpenAI is releasing 111 new plugins for Codex that combine skills, app integrations and Model Context Protocol server connections to give Codex more ways to gather context and use the tools developers depend on for their work.

The company has also added a built-in browser, with a commenting system that allows you to prompt Codex to make tweaks to specific parts of a webpage or web app you're building. In the demo OpenAI showed, one member of the Codex team used this tool to instruct Codex to change the margins on a graph so that the y-axis wasn't cut off. Complementing this is built-in image generation. Codex can use gpt-image-1.5 to create product concepts, mockups, frontend designs and even assets for simple games. It also allows Codex to use screenshots to verify it's on the right track with a user request.

With today's update, OpenAI is also previewing a pair of memory features. The first allows Codex to recall context from previous tasks to inform how it goes about future prompts. According to OpenAI, with time, this will allow Codex to complete requests faster and to a higher standard. The app will also use the context it's gathered to suggest proactive actions. For example, at the start of your day, it might suggest you respond to a comment a coworker left on a Google Doc draft you wrote. 

If you want to try the updated Codex for yourself, OpenAI is starting to roll out the new version to desktop app users who are logged in with their ChatGPT account. Computer use is available to macOS users first, with availability for people in the EU and UK to follow soon. Similarly, Brits and Europeans will need to wait to try the memory features OpenAI has built into Codex.  

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-latest-codex-update-builds-the-groundwork-for-its-upcoming-super-app-170000019.html?src=rss

Google Chrome makes it easier to wrangle different tabs in AI Mode

Love 'em or hate 'em, no modern browser is complete without robust tab support, and it seems the same now goes for Google's AI Mode. Starting today, the company is rolling out an update to users in the US that makes the tool better at understanding and interacting with tabs.

To start, the next time you use AI Mode on Chrome for desktop and click on a link, the chatbot will open a new side-by-side interface that allows you to both browse the new webpage and ask questions of AI Mode. The connection allows the chatbot to maintain the context of the search that brought you to that website in the first place. 

For instance, say you're looking for a new coffee maker to buy for your apartment. After AI Mode finds a handful of different models for you to compare, you can click on one to go to the manufacturer's website and ask additional questions of the chatbot like "how easy is this to clean?" Thanks to the expanded context window, you don't need to refer to the specific name of the model.   

Meanwhile, if you have an existing tab or group of tabs that you'd like AI Mode to factor into a new search, you can do that now too. From the redesigned Plus menu, just click the new option that's there. While you're in the Plus menu, you can also prompt AI Mode to consider other materials, including images and PDFs, alongside any relevant tabs.   

In testing, Google says users found the integration translated to less tab switching, and made it easier to focus. Mike Torres, vice-president of product for Chrome, said the new features represent a broader effort by Google to bring practical AI capabilities to its web browser. Torres added the company would soon bring today's updates to more places around the world.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-chrome-makes-it-easier-to-wrangle-different-tabs-in-ai-mode-170000914.html?src=rss

Intel launches new Core Series 3 chips for mainstream laptops

Intel has unveiled its new Core Series 3 chips, the official title for its Wildcat Lake-codenamed series intended for mainstream and value-oriented laptops. Built using the same Intel 18A process as its Core Ultra Series 3 chips, they’re significantly more powerful than the previous generation and promise "exceptional battery life" and "boosted AI-ready performance."

Intel says the Core Series 3 offers up to 47 percent better single-thread performance and 41 percent better multi-thread performance, as well as 2.8x better GPU AI performance compared to a five-year-old PC. Stacked up against its last-gen Intel Core 7 150U processors, the new mobile chip uses up to 64 percent lower processor power and is capable of 2.7x AI GPU performance. In other words, expect more grunt and improved efficiency.

At the top end of the lineup sits the six-core Intel Core 7 360, which has a P-core max turbo frequency of 4.8GHz and delivers 17 TOPS of NPU performance. This scales down as you move through the other six-core options, and there’s also a five-core Core 3 processor at the entry level with a more modest GPU.

Intel promises all-day battery life, rated at 12.5 hours in the office and 18.5 hours for streaming from Netflix. As for connectivity, there’s support for Wi-Fi 7, Bluetooth 6 and two Thunderbolt 4 ports. The Core Series 3 chips will be making their way into a variety of laptops throughout 2026, including Acer’s Aspire Go 14, 15 and 16, the ASUS Vivobook 14/15/17 and ExpertBook B5 Flip, B3 G2 and P3 G2. The likes of Dell, Samsung and Lenovo will announce their own Core Series 3 devices in the near future.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/intel-launches-new-core-series-3-chips-for-mainstream-laptops-164821846.html?src=rss

vivo X300 Ultra Review: Putting the Camera at the Center of Everything

PROS:


  • Excellent photography performance even without accessories

  • Modular photography ecosystem with extenders, grips, and cages

  • Simple yet stylish design

  • Flagship performance now available globally


CONS:


  • Quite heavy for one-handed use

  • Premium pricing might only appeal to mobile shutterbugs


RATINGS:

AESTHETICS
ERGONOMICS
PERFORMANCE
SUSTAINABILITY / REPAIRABILITY
VALUE FOR MONEY

EDITOR'S QUOTE:

The vivo X300 Ultra is a camera platform that happens to run Android, built for people who shoot with purpose and want their phone to keep up.

The premium smartphone market has gotten very good at producing flagships that look and feel essentially identical. Brighter displays, larger sensors, and faster chips are standard expectations now, and while the results are impressive, they rarely feel purpose-built for a specific kind of user. The phones that genuinely stand out tend to commit to a clear identity and organize everything, from hardware to aesthetics, around it.

The vivo X300 Ultra is making its global debut right now, the first time vivo’s top-tier X Series flagship has launched outside of China. It arrives with a clear, photography-first premise built around the ZEISS Master Lenses Collection, offering professional creators unprecedented creative freedom through pioneering telephoto solutions, three prime-equivalent focal lengths, and a modular telephoto system that turns the phone into something closer to a portable camera platform than a smartphone that happens to have good cameras.

Designer: vivo

Aesthetics

The X300 Ultra doesn’t hide what it’s about. The rear is dominated by a large circular camera module, a bold black disc rimmed in polished metal with ZEISS T* branding at the center. It’s a confident, unapologetic choice that reads as a statement of intent rather than a feature shoehorned into standard smartphone form. The module doesn’t merely support the design; it is the design.

Our review unit is the white colorway, and it’s a particularly considered finish. The back panel has a subtle, almost etched texture beneath the surface, giving it more depth than you’d expect from a white phone. The polished frame and classic split design, inspired by the hues of unprocessed film, create a striking contrast while maintaining a slim, premium presence that doesn’t rely on glossy flash or loud visual accents.

The camera-inspired detailing rewards a closer look. The metal “biscuit-style” camera bump has a knurled texture and engraved lettering along its sidewall, adding a precision-tool quality you feel the moment you hold it. These aren’t details that show up in a spec sheet, but they make a real difference in how the phone feels to own and carry every day.

The front takes a different approach entirely. The 6.82-inch 2.5D flat screen sits behind slim, even bezels with a small centered punch-hole for the 50MP front camera, and the whole face feels clean and uncomplicated. That contrast with the expressive rear works in the phone’s favor, keeping the display experience neutral and focused while the camera side carries all the personality.

Ergonomics

The first thing you notice when picking up the vivo X300 Ultra is the weight. At 237g, the white model is among the heaviest flagship phones currently on the market, and the substantial camera module adds to that presence both physically and psychologically. The Unibody 3D Glass Fiber Design of the Black edition results in a lighter 232g, but regardless of colorway, the flat-sided metal frame distributes the weight well, making the phone feel grounded and deliberate rather than awkwardly front-heavy.

One-handed use is possible, but not the most comfortable for extended periods, which is expected for a device of this size. The flat sides give you a firm grip, and the 8.49mm profile feels impressively slim given the optical hardware packed inside. It’s a noticeable phone in the pocket, though that’s really true of any flagship with serious camera ambitions.

The ergonomics shift noticeably when the telephoto extenders enter the picture. The protective case becomes a functional necessity, as the lens mount system requires it to interface with the accessories. Once a telephoto extender is attached, the modular grip moves from optional to practically essential, providing the stability and comfort that the added length and weight demand.

Performance

At the core lies the Snapdragon 8 Elite Gen 5, paired with vivo’s own Pro Imaging Chip VS1+ and up to 16GB of RAM with up to 1TB of storage. Day-to-day performance is exactly what you’d expect from a 2026 flagship: fast, fluid, and unfazed by demanding tasks. OriginOS 6, based on Android 16, runs smoothly thanks to an Origin Smooth Engine that keeps the interface feeling responsive even after extended sessions.

The display is a 6.82-inch 8T LTPO panel running at 3,168 x 1,440 with a 144Hz adaptive refresh rate. It’s bright enough to review shots comfortably outdoors, with 4,500 nits of local peak brightness and certifications for Dolby Vision, HDR10+, and Netflix HDR. As a viewfinder for the camera system, it performs its job well, delivering accurate colors that reflect what the camera is actually capturing.

Battery life is solid for a phone with this level of imaging ambition. The 6,600mAh BlueVolt Battery supports 100W wired FlashCharge and 40W wireless charging, making it easy to top up quickly between shoots. Bypass charging with smart temperature control also keeps heat in check during longer sessions, which matters when you’re shooting all day.

The camera system is, of course, where the X300 Ultra makes its most interesting argument. Rather than organizing three cameras as “main, ultrawide, and telephoto,” vivo builds them around three prime-equivalent focal lengths, each treated as a dedicated imaging tool. The 35mm ZEISS Documentary Camera, equipped with a 1/1.12-inch Sony LYTIA 901 sensor capable of 200MP direct output, is the natural storytelling lens with a field of view close to the human eye. It’s ideal for portraits, street photography, and everyday moments, particularly in low light, where it delivers sharp, naturally detailed results.

Color Profile: Authentic

Color Profile: Vivid

Portrait Mode

Macro Mode

The 85mm ZEISS Gimbal-Grade APO Telephoto Camera is arguably the most technically ambitious of the three. Its 200MP sensor captures extraordinary detail even at high zoom levels, meeting ZEISS APO standards for optical precision. With 3-degree gimbal-level OIS and 60fps AF tracking in Snapshot mode, it handles fast-moving subjects with a composure that most telephoto cameras on phones can’t manage. Concerts, wildlife, and sports are where this lens makes the clearest case for itself, letting you track and capture decisive moments with confidence.

Telephoto Lens (No Mode)

Telephoto Lens (Pro Sports Mode)

Telephoto Lens (Pro Sports Mode)

Ultra-wide

The 14mm ZEISS Ultra Wide-Angle Camera rounds things out at 50MP, with a large aperture that makes it more capable than the typical ultrawide found on most flagships. It isn’t an afterthought; vivo positions it as a main-camera-grade lens designed for natural landscapes and broader compositional work, and that ambition shows in the results.

Main

Telephoto Camera (No Lens Extender)

The telephoto extenders add another layer to the whole system. The 200mm equivalent vivo ZEISS Telephoto Extender Gen 2 connects to the phone via the case’s lens mount and delivers optical-grade output at a focal length that no internal module can match, all at a more manageable 153g, refined down from 210g in the previous generation. The 400mm equivalent Telephoto Extender Gen 2 Ultra takes things further still, built on a Kepler-inspired optical design with 15 high-transmittance glass elements and support for 200MP optical output. Both extenders support gimbal-grade OIS and up to 60fps AF tracking, and together they extend the X300 Ultra’s imaging range into territory that genuinely blurs the boundary between smartphone and dedicated camera.

200 mm ZEISS Telephoto Extender Gen 2

400 mm ZEISS Telephoto Extender Gen 2 Ultra

Sustainability

The X300 Ultra is built to last, and that conviction shows in the hardware choices. Armor Glass protects the exterior, and the phone carries both IP68 and IP69 dust and water resistance ratings, covering both prolonged submersion and high-pressure water exposure. These are meaningful standards for a device that’s meant to travel and shoot in varied conditions.

The strongest sustainability argument, though, is software longevity. vivo is committing to five years of OS upgrades and seven years of security maintenance, a support window that puts the X300 Ultra ahead of most Android flagships and signals genuine confidence in its long-term relevance. For a phone at this price point, that kind of assurance matters, extending the useful life of the device considerably.

Like most sealed flagship phones, however, the X300 Ultra isn’t particularly repair-friendly, and vivo doesn’t make any specific claims about recycled or sustainable materials in this build. That’s a common gap across the ultra-premium phone category, and the long support window and durable construction go some way toward compensating for it.

Value

The X300 Ultra sits squarely in the ultra-premium flagship tier, and it makes no attempt to be a broadly accessible phone. It’s a specialized, photography-first device with a modular accessory system, three prime-equivalent focal lengths, and a build quality that communicates its ambitions at every turn. Pricing in China starts at CNY 6,999, roughly in line with other high-end imaging flagships globally, though global pricing hasn’t been officially confirmed at the time of this review.

For the right buyer, that price feels well-matched to what the phone actually delivers. Photographers and creators who think in focal lengths, who want to shoot 200MP RAW files on a 35mm lens, track birds or performers at 85mm, and then extend to 200mm or 400mm with an optically serious external lens, will find it harder to justify a more generalist flagship. The X300 Ultra covers a lot of creative ground that most phones simply can’t.

That said, buyers looking for the lightest or simplest ultra-premium smartphone, something to carry easily through a full day without thinking twice about it, may find the X300 Ultra’s weight and accessory ecosystem a bit more demanding than they bargained for. It’s a phone that asks for a certain kind of engagement, and it rewards that engagement handsomely.

Verdict

The vivo X300 Ultra is one of the most coherent camera-first flagships to arrive in years. The design, the optics, the telephoto ecosystem, and the software are all pulling in the same direction, creating a product that knows its audience and delivers on their priorities with real conviction. The 237g weight and accessory dependency aren’t oversights; they’re the cost of a system this capable, and for the right user, that’s a perfectly reasonable trade.

What makes it genuinely memorable, though, isn’t any single spec. It’s the feeling that the whole thing was designed by people who actually think about photography, not just camera marketing. The focal lengths are deliberate, the extenders are optically serious, and the hardware detailing reinforces the idea that this is a tool as much as it is a phone. For anyone who shoots with intent, that kind of commitment is exactly what a flagship should offer.

The post vivo X300 Ultra Review: Putting the Camera at the Center of Everything first appeared on Yanko Design.

Gemini can now draw on your Google data to personalize the images it generates

Your Google Photos library could soon influence the kind of images you can generate with Gemini. After letting users personalize the AI assistant's responses with data from Gmail, Search and YouTube, Google says it's bringing that same "Personal Intelligence" to Nano Banana 2 to make it easier for users to create personalized images with the AI model.

The goal is to have the data affiliated with your Google account — your YouTube history, emails, Google Photos, etc. — provide context to Nano Banana 2 so you don't have to. Rather than prompting Gemini's image generation model with information about you or photos of your belongings, a direction to "create a picture of my desert island essentials" should produce an image that includes the things you care about without any extra context. Similarly, if you use labels in Google Photos to identify people or pets, you can tell Gemini to "create a hand-drawn illustration of mom," and it should be able to use Google Photos' labels to find the right reference photo and create an image of the right person.

A gif of someone generating an image with Gemini using Personal Intelligence.
Google

If Gemini creates images that don't look right, you can still send a follow-up prompt to refine the result, or select a new source image from Google Photos with the "+" button. Google says you can also click the "Sources" button to view what images the AI referenced in the first place, or ask it directly for the attribution and sources used for a specific image.

Personalized user data is one of the unique advantages Google has over companies offering competing AI assistants, so expanding Personal Intelligence to an already popular feature like image generation is a natural way to build on that lead. For now, this more personalized version of Nano Banana 2 is available in the Gemini app for eligible AI Pro and AI Ultra subscribers. Google says the feature will come to Gemini in Chrome and other users "soon."

This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-can-now-draw-on-your-google-data-to-personalize-the-images-it-generates-160000269.html?src=rss

The first real trailer for the Street Fighter movie is filled with crowd-pleasing moments

We finally have a real-deal trailer for the upcoming Street Fighter movie, after a short teaser dropped at The Game Awards last year. This is nearly three minutes of fighting, silly dialogue and, of course, Easter eggs from the games.

To the latter point, there's a scene of Ken beating up a car like in the bonus stages from Street Fighter II and footage of Ryu powering up one of his famous Hadoken fireballs. There's even a cheeky reference to Chun-Li's notoriously large and powerful thighs. This is all helped along by the fact that the actors all look very silly and mostly accurate to the games.

The plot looks to be fairly standard for this type of adaptation. There's a big, important fighting tournament and Chun-Li is recruiting people from around the globe, acting like the franchise's Nick Fury or something. Ken and Ryu are beefing, M. Bison is involved in a criminal conspiracy (big surprise) and everyone else is punching and/or making snarky asides. It looks campy as hell, which is a good thing.

Street Fighter is directed by Kitao Sakurai, who made the film Bad Trip and was heavily involved with The Eric Andre Show. It hits theaters on October 16.

The cast is actually stacked. Noah Centineo and Andrew Koji lead the film as Ken and Ryu, but Jason Momoa is playing Blanka and Curtis '50 Cent' Jackson is portraying Balrog. Other actors involved include David Dastmalchian, Callina Liang, Cody Rhodes and Orville Peck.

This is the third attempt at a live-action Street Fighter adaptation. The 1994 film is famous for Raul Julia's iconic performance as M. Bison and 2009's Street Fighter: The Legend of Chun-Li is famous for being very bad.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/the-first-real-trailer-for-the-street-fighter-movie-is-filled-with-crowd-pleasing-moments-153145868.html?src=rss

Meta isn’t setting its Oversight Board free just yet

The Oversight Board — the policy body Meta created to weigh its most impactful moderation rulings — has seen its role within Mark Zuckerberg's empire come into question due to shifting content policy priorities and dwindling investment. The Oversight Board has taken steps to formalize its long-contemplated desire to work with other companies, but Engadget has learned Meta has thus far declined to move forward with that process. 

Over the last year, board members have become increasingly interested in artificial intelligence policy and how their experience shaping Meta's content rules could translate into advising companies in the generative AI space. That interest has intensified as some AI companies have privately signaled they would be open to working with the board, according to a source familiar with the organization who was not permitted to speak publicly. The board began talks with Meta last fall about the possibility, which would require the company to sign off on changes to the legal documents that govern the board's operations. But Meta officials have not indicated whether the company is willing to make those changes, which would likely require approval from top executives. 

Platformer, which first reported on Meta's budget negotiations with the Oversight Board, noted that the company "has long encouraged the board to seek additional funding sources." So far, no other company has publicly shown interest in working with the group, though the board has had conversations with other firms behind the scenes. 

Oversight Board co-chair Paolo Carozza told Engadget in December that there had been "really preliminary" discussions between the board and AI companies, though he declined to name which ones in particular. "It feels like quite a different moment now, largely because of generative AI, LLMs, chatbots [and] the way that a variety of retail-level users of these technologies are facing a whole new set of challenges and harms that's attracting a lot of scrutiny," he said at the time. 

Meta has readily agreed to amend the board's governing documents in the past — like when the trust that controls the Oversight Board's budget funded a new organization to mediate content moderation disputes in Europe. While Meta executives once promoted the idea of its ostensibly independent Oversight Board working with other social media platforms, the prospect of the group working with a competitor as it pursues AI superintelligence is apparently more complicated. 

Over the last five years, board members have received briefings from officials at Meta about the inner workings of its moderation systems and other non-public details as part of their work with the company. That raises practical questions about how the board would safeguard Meta's proprietary information, as well as larger strategic questions about whether Meta would want its Oversight Board to work with some of the companies it's now fiercely competing with, the source said. It's not clear how invested Meta's current leadership is in ensuring a future for the board. Former president of global affairs Nick Clegg, who was one of the most vocal champions of the board's work, left the company last year.

Meanwhile, other board members have publicly made the case that the group, which consists of free speech and human rights experts from around the world, is well-positioned to guide AI companies grappling with an increasing number of real-world harms. When Anthropic published a "Claude Constitution" earlier this year, the board published a lengthy analysis from member Suzanne Nossel arguing that Claude also needed the kind of "oversight" the board has provided for Meta. She made a similar argument for the wider AI industry in an op-ed in The Guardian last month.

While Nossel denied that she was directly pitching the Oversight Board to Anthropic, she said that AI companies face many of the "same dilemmas" as social media platforms. "When the board was first created, there was the notion that we might work across the industry," she told Engadget. "Now, as the world shifts toward an AI-centric paradigm, we're very interested in what our experience can bring to that conversation." 

Oversight Board members, who naturally have a vested interest in expanding their purview, aren't the only members of the industry who have warned that generative AI platforms are essentially speed-running social media companies' playbook. A former OpenAI researcher recently wrote that "OpenAI Is Making the Mistakes Facebook Made," citing the AI company's moves toward optimizing for engagement and its plans for in-app advertising. The researcher cited Meta's Oversight Board as an example of the kind of independent governance that's needed in the AI industry.

The question of working with other companies has taken on new urgency as the Oversight Board faces the possibility that it will lose its backing from Meta. In a statement, a Meta spokesperson pointed to previous reports that Meta has committed to funding the board through 2028 and said that "nothing has changed." But a source familiar with the board tells Engadget that Meta has so far only handed over half of the smaller tranche of 2028 funds to the board amid ongoing discussions about its future, including whether it will expand its purview beyond Meta. 

There are also very real questions about how the Oversight Board fits into Meta's current strategy around content moderation. Zuckerberg announced last year that Meta was shifting away from most proactive moderation, ending fact-checking in the United States and rolling back hate speech rules. Zuckerberg himself reportedly led the push for these changes following a meeting with then-President-elect Donald Trump. The Oversight Board, which Meta has sometimes asked to advise on major policy changes, was not consulted. The company recently said it plans to reduce the number of human moderators in favor of AI-based systems.

"The Oversight Board is currently engaged in meaningful discussions with Meta regarding its future and the evolution of its model to ensure the organization can address the most urgent emerging challenges in AI governance, standards, and accountability," an Oversight Board spokesperson said in a statement. "At this time, no decisions have been made about the Board’s future, and the organization’s day-to-day work and mandate remain unchanged.”

Critics have long said that the board, which has received more than $280 million from Meta, moves far too slowly. In a little more than five years of operation, the board has published more than 200 decisions about specific moderation issues, which Meta is required to uphold. Those decisions — a tiny fraction of the millions of requests it receives — can take months, though the board can opt to move more quickly. The board has also made hundreds of policy recommendations, which Meta has to respond to but isn't required to implement. The company has agreed to at least some changes in response to 75 percent of recommendations, according to the board. 

For the Oversight Board, working with a company besides Meta would begin to address some of the challenges it now faces. It would boost the group's credibility at a time when Meta seems to be re-evaluating its relationship with the board, and it would open up the possibility of new sources of funding. But the situation underscores another long-simmering tension when it comes to the role of the "independent" oversight organization. Meta has always been in control of how much influence the group can actually have. And it's not clear that the company is ready to let the board, which has spent the last five years learning the minutiae of Meta's content moderation and policy processes, advise the companies it's now competing with.

During its work with Meta, the Oversight Board has weighed in on its rules for AI several times. The board has criticized the company's "manipulated media" policy that governs deepfakes and other content, which led to Meta adopting new rules around AI labeling. In its most recent decision dealing with AI, the board urged Meta to invest in better AI detection tools and to collaborate more closely with other platforms. The company has not yet formally responded to those recommendations. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-isnt-setting-its-oversight-board-free-just-yet-153000172.html?src=rss

Dozie Kanu Just Turned His Life Story Into Tables for Knoll

A table is just a table until it isn’t. That’s the kind of thinking that gets lost in a lot of design conversations, where we spend so much time talking about materiality and silhouette that we forget to ask what an object is actually carrying. The Dozie Kanu Table Collection for Knoll, debuting at Salone del Mobile 2026, makes that question impossible to ignore.

Kanu is an American artist who grew up in Texas with Nigerian immigrant parents. That detail matters enormously here, because it shaped a perspective that doesn’t fit neatly into any one cultural box. He’s spoken openly about that upbringing and about existing in-between. As he puts it: “Growing up in Texas with Nigerian immigrant parents, I was not fully accepted by the Black community… it created a feeling of displacement. And that feeling is everywhere in my practice.” That sense of in-between-ness is exactly what makes his design language so compelling to look at.

Designer: Dozie Kanu for Knoll

The collection itself is three pieces: a console, a coffee table, and a side table. All three are built with taut leather surfaces and rounded steel rod edges, and all three trail floor-length leather tassels that move with a life of their own. The tassels are the first thing to catch your eye, and they’re meant to be. They pull from African drums, from African ceremonial dress, and from the fringed leather jackets of Texas cowboy culture. That last reference might seem like an odd pairing, but that’s kind of the point. Kanu isn’t choosing between his influences. He’s letting them coexist.

Available in two colorways, bronze and a dark grey manganese, the pieces have a quiet formality that makes the tassels even more striking. The restraint of the forms makes the ornamentation feel intentional rather than decorative. You don’t look at these tables and think “maximalism.” You think “precision.” The tassels earn their place because everything else is so considered.

Knoll, for the record, is not a brand that takes collaborations lightly. Their roster has historically included Eero Saarinen and Mies van der Rohe, which means choosing Kanu for this moment says something. It says they’re paying attention to who’s shaping the conversation around contemporary design. Kanu, who has built a practice across sculpture and installation, is exactly the kind of artist who brings a point of view that doesn’t get diluted in the translation to mass production. His own framing of the work says it perfectly: “It’s not screaming ‘identity’ or ‘autobiography.’ But the best thing I can do is make what I know.”

That line is worth sitting with. We’re living through a design moment where cultural narrative has become something of a selling point, and there’s a real risk of it becoming performative. What Kanu is doing feels different. It’s not a press release in object form. It’s more like a very personal shrug that happens to be beautiful. The tassels don’t announce themselves as symbols. They just exist, and they carry the weight of a story without demanding that you read it.

Running alongside the Knoll launch, Kanu also has an installation at ICA Milano in collaboration with the Nicoletta Fiorucci Foundation, featuring a structure built from reinforced cardboard. It’s a reminder that his practice spans a lot of registers, that the tables and the gallery work are part of the same ongoing conversation he’s having with himself. I appreciate that kind of consistency in an artist. You can feel the through-line even when the mediums are completely different.

If I’m being honest about what this collection does to the broader design conversation, I think it’s a useful reminder that furniture doesn’t have to be neutral to be functional. A table can have a perspective. It can come from somewhere very specific without being inaccessible. And when a brand like Knoll gives that kind of work the platform it deserves, the results are worth paying attention to far beyond the walls of Milan Design Week. Dozie Kanu’s tables are at Salone del Mobile 2026. They move when you walk past them. And they’ve got a lot to say.

The post Dozie Kanu Just Turned His Life Story Into Tables for Knoll first appeared on Yanko Design.

Anna’s Archive told to pay Spotify and record labels $322 million over unprecedented music scraping

The open-source library and search engine Anna’s Archive has been ordered to pay Spotify and three of the world’s largest music labels $322 million in damages after it claimed to have scraped the entirety of the streaming platform’s music library.

Spotify, Universal Music Group, Warner Music Group and Sony Music Entertainment sued Anna’s Archive in January for a slightly comical $13 trillion. They alleged Anna's Archive had illegally scraped 86 million songs — a significant chunk of all the music on the planet — and intended to make them available for download via BitTorrent. At the time, Spotify called the scraping a "brazen theft of millions of files containing nearly all of the world’s commercial sound recordings."

In a since-deleted blog post, Anna's Archive stated the scraping was an act of preservation. Still, a New York federal judge sided with the plaintiffs after the archive's anonymous operator failed to respond to the lawsuit.

The court order finding Anna's Archive liable for direct copyright infringement, breach of contract and violation of the Digital Millennium Copyright Act (DMCA) was filed on April 14. A further claim of violation of the Computer Fraud and Abuse Act (CFAA) was dismissed by the judge.

The damages break down as $7.5 million each to Sony and Universal Music and $7.2 million to Warner Music, with the remaining $300 million going to Spotify. The latter figure amounts to $2,500 for each of the 120,000 scraped music files already made available by Anna’s Archive. The remainder of the 86 million files was due to be released to the public at a later date.

The court also ordered Anna’s Archive to "immediately destroy all copies and phonorecords of any work ‘scraped,’ downloaded, copied or otherwise extracted from Spotify," but whether it actually does this, or indeed hands over a penny of the damages, remains to be seen. The bizarre reality of this case is that the person (or people) behind Anna’s Archive remains a mystery.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/annas-archive-told-to-pay-spotify-and-record-labels-322-million-over-unprecedented-music-scraping-151034032.html?src=rss