The Dune: Part Three trailer introduces Robert Pattinson’s villainous new character

It's only been two years since Dune: Part Two took over multiplexes, but we already have a trailer for the third installment. The appropriately named Dune: Part Three is an adaptation of Frank Herbert's 1969 novel Dune Messiah.

Just like the book, the latest film takes place a number of years after Dune: Part Two. "If the first movie was contemplation, a boy exploring a new world, and the second one is a war movie, this one is a thriller," according to The Hollywood Reporter. "It is action-packed and tense. More muscular."

Despite the time jump, most of the principal cast is returning, including Timothée Chalamet, Zendaya and Javier Bardem. Anya Taylor-Joy, who briefly appeared in the second film, is also coming back. The same goes for Jason Momoa, despite his character, Duncan Idaho, dying in the first film. Book readers will likely understand what that means.

The trailer also highlights the antagonist Scytale, as portrayed by Robert Pattinson. He should be a more nuanced villain than Baron Harkonnen, though that's not exactly a high bar.

The release date is coming up fast. Dune: Part Three hits theaters on December 18. That's this year. Villeneuve had intended to take a break after making the second one to focus on a smaller and more personal film, but said that he kept "waking in the middle of the night" with potential images from the third installment.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/the-dune-part-three-trailer-introduces-robert-pattinsons-villainous-new-character-173758393.html?src=rss

Gamers are right to be disgusted by NVIDIA’s DLSS 5

You can sum up the gamer response to NVIDIA's DLSS 5 announcement with the ever-relevant Fallout 4 meme: "Everyone disliked that." Across social media and Reddit last night, I couldn't find anyone who's genuinely positive about the potential for DLSS 5, which uses AI to add "photorealistic" lighting and materials to in-game models and environments. Instead, it's mostly complaints about the feature being another avenue for AI slop. And you know what? I agree.

It's not unusual to see gamers being reflexively angry about new technology on the internet, especially when it's being pitched by NVIDIA as the “biggest breakthrough in computer graphics” since its RTX 20-series GPUs arrived in 2018 with real-time ray tracing. There was plenty of suspicion around DLSS's original AI upscaling model, as well as the "fake" frames generated by later iterations. But the few demos we've seen of DLSS 5 basically look like "yassified" AI filters for popular games.

Leon and Grace from Resident Evil: Requiem have more distinct facial and hair detail, but they look a bit too slick. There are more wrinkles on an old woman in Hogwarts Legacy. And the face, hair and clothing from a Starfield character gain an uncanny sheen.

None of the demos have the immediate impact of the Star Wars real-time ray tracing short ILMxLab produced with NVIDIA seven years ago. That demonstration showed us glorious reflections and lighting effects we'd never seen before in real-time. The DLSS 5 demos, on the other hand, don't look much different from the AI filters that make you look more presentable for Zoom calls. There's no genuine excitement for DLSS 5, just NVIDIA telling us that it's groundbreaking.

There's also plenty of concern about DLSS 5 straying from an artist's original intent, as well as a potential homogenization of game visuals if every developer starts using the feature. NVIDIA claims developers will have "detailed controls for intensity, color grading and masking," which will help DLSS 5 stay in line with a game's aesthetic. But we don't have any direct developer experience with the feature yet — some artists may want far more control than NVIDIA wants to give.

The difference between DLSS 5 and earlier versions of NVIDIA's upscaling is like the difference between generative AI and more traditional machine learning models. NVIDIA relied on the latter to make low-resolution textures and models appear sharper, and later to insert generated frames to smooth out gameplay and raise your fps count. As Wirecutter and former Polygon editor Arthur Gies points out, you could argue those features were in service of delivering what developers originally intended. But DLSS 5's neural model applies its concept of "photorealism" on top of what games are rendering: it's like watching a Pixar movie that let OpenAI's Sora do a final visual pass.

Part of the negative response towards DLSS 5 may stem from a widespread anti-generative-AI sentiment, but that doesn’t invalidate the criticisms. As with AI-generated text, images and video, there’s a dehumanizing aspect to DLSS 5. It can erase the work of human artists (despite how much control NVIDIA claims they have), and it also feels like a calculated attempt to appeal to gamers who just want shinier graphics. NVIDIA showed off how generative AI could be used to create dialog and voices for NPCs last year at CES, but that was also widely disliked (and I called it a genuine nightmare).

Of course, I can’t fully judge DLSS 5 until I see it in action beyond a short demo. But I think the visceral disgust is an important indicator that many gamers aren’t on board with the AI-powered future NVIDIA is trying to sell us. And chasing “photorealism” may be a bit of a fool’s errand. It may be appropriate for some games, but as Nintendo and indie PC devs have shown, you can also make some of the best games of all time without striving for realism. Tears of the Kingdom could use a better framerate and higher-resolution textures, but it certainly doesn’t need DLSS 5.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/gamers-are-right-to-be-disgusted-by-nvidias-dlss-5-151105593.html?src=rss

NVIDIA claims DLSS 5 will deliver ‘photoreal’ image quality with AI this fall

Just months after announcing DLSS 4.5 at CES, NVIDIA has unveiled its next major upscaling technology, DLSS 5. The company is doubling down on AI for this next iteration, claiming DLSS 5 “infuses pixels with photoreal lighting and materials” using a real-time neural rendering model when it arrives this fall.

So what does this mean in practice? In an on-stage demo at NVIDIA’s GTC 2026 keynote, CEO Jensen Huang showed off the technology with Resident Evil: Requiem, Hogwarts Legacy and Starfield. DLSS 5 adds a noticeable amount of detail to characters’ hair and skin, but the comparison also appears to have been made against those games with DLSS features turned off entirely. It’s unclear how much of a difference it makes compared to DLSS 4.5 with path tracing and all of its features enabled.

“DLSS 5 takes a game’s color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame,” NVIDIA said in a blog post. The company also notes that the technology runs in real time, and it works at up to 4K.

Huang showed off DLSS 5 while running a system with two RTX 5090 GPUs. Eventually, it will be able to run on a single video card (though I’d imagine it would have to be almost as powerful as two 5090s). Huang also painted DLSS 5 as a step towards offering Hollywood-like quality for real-time rendering, without the GPU horsepower studios require. It sounds a bit like a generative AI video model that can be directly controlled by developers, rather than steered through prompts alone.

NVIDIA, never shy about self-aggrandizement, also claims DLSS 5 is the “biggest breakthrough in computer graphics” since real-time ray tracing arrived in 2018. But given that ray tracing itself still hasn’t gone mainstream for many gamers, it’ll be interesting to see if there’s any interest in NVIDIA’s AI-produced pixels.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/nvidia-claims-dlss-5-will-deliver-photoreal-image-quality-with-ai-this-fall-193452088.html?src=rss

Firefly is getting rebooted as an animated series

Firefly aired for just one season in 2002 before Fox canceled it. In the 24 years since, the sci-fi show has skyrocketed in popularity, and now fans are finally getting more. Nathan Fillion has announced that an animated Firefly series is currently in advanced development, Deadline first reported.

Fillion shared the news at AwesomeCon during a live taping of his podcast Once We Were Spacemen with his Firefly co-stars Gina Torres, Morena Baccarin, Summer Glau, Sean Maher, Jewel Staite and Alan Tudyk. Tudyk co-hosts the podcast, in which the duo look back at their careers and interview past coworkers. Each of the actors present at AwesomeCon is expected to voice the animated version of their character.

This isn't one of those "maybe one day" announcements; many steps have already been taken. The animated reboot is under the direction of showrunners Tara Butters (Agent Carter, Reaper) and Marc Guggenheim (DC's Legends of Tomorrow, Arrow) — original creator Joss Whedon is not involved, but has given his blessing. It has early concept art from ShadowMachine, an Oscar- and Emmy-winning animation studio. Fillion is producing the show through Collision33, his production company, along with 20th Television Animation. There's even already a script in place.

According to Fillion, the one thing left is a home for the series. He and his co-stars took to Once We Were Spacemen's Instagram to provide more details and implore Firefly fans to show demand for the reboot.

Firefly took place in 2517, centuries after a universal civil war. It followed a group of people living aboard a transport ship, Serenity, flying through the galaxy. In 2005, the show got a sequel in the form of a movie, Serenity.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/firefly-is-getting-rebooted-as-an-animated-series-120604649.html?src=rss

Warner Bros. dominates Oscars with 11 wins ahead of its acquisition by Paramount

Ahead of its acquisition by Paramount Skydance, Warner Bros. dominated the 2026 Oscars with 11 wins, primarily for Ryan Coogler's Sinners and Paul Thomas Anderson's One Battle After Another. Netflix also put in a strong showing with seven Academy Awards, including two for KPop Demon Hunters and three for Guillermo del Toro's Frankenstein.

All told, dedicated streaming services chalked up eight awards, but were shut out of the major prizes. Frankenstein took the trophies for Best Production Design, Best Costume Design and Best Makeup and Hairstyling, while KPop Demon Hunters took Best Animated Feature and Best Original Song. Netflix also took prizes for All The Empty Rooms (Best Documentary Short Film) and The Singers (Best Live Action Short Film). Apple TV garnered the remaining streaming-service Oscar for F1 (Best Sound).

Warner Bros. dominated the more prestigious awards. The studio took its first Oscar for Best Picture (One Battle After Another) since Argo won in 2012, while also winning Anderson the prizes for Best Director and Best Adapted Screenplay, and giving Sean Penn the Best Supporting Actor Oscar. Sinners, meanwhile, won for Best Cinematography, giving Autumn Durald Arkapaw the first-ever win for a woman and woman of color in that category. Michael B. Jordan took the Best Actor prize for that film, while director Ryan Coogler won for Best Original Screenplay.

Other notable acting prizes were won by Jessie Buckley (Best Actress, Hamnet) and Amy Madigan (Best Supporting Actress, Weapons).

Host Conan O'Brien joked that it was the "first time in a theater" for Netflix CEO Ted Sarandos. It remains to be seen, however, whether Netflix losing out to Paramount Skydance on the Warner Bros. acquisition will be to the film industry's benefit or detriment. One clear loser of late is broadcast television, as the 2026 Oscars will be the third-to-last aired by Walt Disney's ABC, with YouTube set to stream the ceremony exclusively starting in 2029.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/warner-bros-dominates-oscars-with-11-wins-ahead-of-its-acquisition-by-paramount-093916527.html?src=rss

ByteDance has reportedly suspended the global rollout of its new AI video generator

A month after Seedance 2.0's launch in China sparked cease-and-desist letters from Disney and Paramount Skydance over its use of copyrighted materials, its developer ByteDance has reportedly hit pause on the release of the AI video tool in other regions. According to The Information, which spoke to two anonymous sources with knowledge of the matter, ByteDance has suspended Seedance 2.0's global rollout. Engadget has reached out to ByteDance for comment and will update this story if we hear back with more information. 

Seedance 2.0 caught heat from Hollywood studios almost immediately upon its release, after user-generated videos including a viral AI clip of Brad Pitt fighting Tom Cruise sparked concerns that copyrighted works were used in training the model. In February, ByteDance told the BBC that it is "taking steps to strengthen current safeguards as we work to prevent the unauthorised use of intellectual property and likeness by users." It's unclear when exactly ByteDance planned to release the tool more widely. 

This article originally appeared on Engadget at https://www.engadget.com/ai/bytedance-has-reportedly-suspended-the-global-rollout-of-its-new-ai-video-generator-212326112.html?src=rss

Spotify’s new Taste Profile feature lets users fine-tune their algorithm’s recommendations

You're responsible for your own Spotify algorithm now. On stage at SXSW, Spotify's co-CEO, Gustav Söderström, announced the Taste Profile feature, which allows users to personally customize exactly what they want to listen to, whether it's music, audiobooks or podcasts. This AI-powered feature is still in beta, and it will be available to Premium users in New Zealand in the coming weeks.

Judging from the short video demo, Spotify's Taste Profile feature will show you a summary of your listening habits and offer a "Tell us more" prompt at the bottom. With the new prompt, users can inform the AI what they want to see more of, or ask it to get rid of a genre that keeps popping up in their recommendations. Spotify said that Taste Profile will take more ambiguous prompts into consideration, too, like if you're training for a marathon and want upbeat music, or want to listen to news podcasts during your commute to work. Spotify added that Taste Profile is an optional feature, and those who opt out can "leave it and enjoy Spotify as usual."

With Taste Profile, Spotify is continuing its momentum of offering AI features, like the Prompted Playlist feature that was made available last month. Unlike the existing AI Playlist feature, Prompted Playlist lets you put in specific requests to generate a playlist, like only including songs from a specific TV show. Like Taste Profile, the Prompted Playlist feature saw beta testing in New Zealand first, before expanding to US and Canadian users a month later.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/spotifys-new-taste-profile-feature-lets-users-fine-tune-their-algorithms-recommendations-191104626.html?src=rss

This web app lets you ‘channel surf’ YouTube like a ’90s kid watching cable

Many of us remember the halcyon days of being a kid in the ‘90s, spending a weekend afternoon with remote control in hand and a seemingly endless well of stuff to watch on TV. Now you can relive the experience thanks to the appropriately named Channel Surfer web app. It's essentially a YouTube discovery tool that surfaces interesting videos, but presented in a retro homage to the cable channel screen. 

Channel Surfer is the work of developer Steven Irby. He has 40 channels on the app right now, mostly grouping content by theme. There are channels for typical cable fare like news and sports, but also music, movies and a number of more tailored tech subjects like AI, gaming, gadgets and space. 

"I built Channel Surfer because I’m tired of the algorithms and indecision fatigue," he told TechCrunch, which is where we discovered the app. "I miss channel surfing and not having to decide what to watch. I want to just sit and tune into what’s on and not think about what to watch next."

It seems Irby isn't alone: he posted on X that Channel Surfer racked up more than 10,000 views on its first day.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/this-web-app-lets-you-channel-surf-youtube-like-a-90s-kid-watching-cable-220651107.html?src=rss

KPop Demon Hunters is officially getting a sequel

KPop Demon Hunters is getting a sequel, Netflix and Sony have announced. Sony Pictures Animation handed the rights to the film to Netflix in 2021 as part of a larger licensing deal, but neither company could have expected how much of a hit it would ultimately become. Besides being Netflix's "most-watched movie of all time," KPop Demon Hunters is also nominated for Best Animated Feature and Best Original Song at the 98th Academy Awards, and stands a good chance of winning.

Maggie Kang and Chris Appelhans, the directors of the first film, are returning to direct the sequel. The project will be the first in the duo's new "exclusive multiyear writing and directing partnership" with Netflix, which is focused on animation. "I feel immense pride as a Korean filmmaker that the audience wants more from this Korean story and our Korean characters," Kang said in a statement. "There's so much more to this world we have built, and I'm excited to show you. This is only the beginning."

"These characters are like family to us, their world has become our second home," Appelhans said. "We're excited to write their next chapter, challenge them, and watch them evolve — and continue pushing the boundaries of how music, animation, and story can come together."

To put KPop Demon Hunters' popularity into perspective, the film had such a wide reach that Netflix was willing to set aside its aversion to theatrical releases and put it in theaters after it premiered on streaming. KPop Demon Hunters reportedly made over $19 million during its initial two-day theatrical run in August 2025, and Netflix has brought it back to theaters multiple times since then. That's on top of the more than 500 million views the film racked up on Netflix itself. Not making a sequel would essentially be leaving money on the table.

According to Puck, however, the structure of Netflix's deal with Sony means it will likely be the only company profiting directly from a KPop Demon Hunters follow-up. "While Sony has the contractual right to produce any sequels or spinoffs," Puck reports, "it will make no additional money from the runaway success of the first film." Sweetening that deal could be one reason Netflix and Sony Pictures expanded their film licensing partnership in January, a deal that reportedly cost the streaming service over $7 billion to secure.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/kpop-demon-hunters-is-officially-getting-a-sequel-195038954.html?src=rss