Paramount just announced that it's going ahead with a new video game based on Avatar: The Last Airbender, which will be developed by Saber Interactive. For the uninitiated, Saber is behind titles like SnowRunner and Teardown. It also has plenty of experience making licensed content, as it published Evil Dead: The Game and World War Z: Aftermath, among others.
A new game in the Avatar-verse isn’t that notable on its own. After all, there have been plenty already. Paramount is already crowing about the title, though, calling it a “AAA RPG” and claiming it’ll be the “biggest video game in franchise history.” That’s not exactly a high bar, given the cartoon’s rocky history in gaming. There was that one good Bayonetta-like game that featured Avatar Korra, but everything else is pretty much trash.
This upcoming RPG won’t follow Aang or Korra. Players will control “an all-new, never-before-seen Avatar.” The game’s set “thousands of years” before the events of Avatar: The Last Airbender and The Legend of Korra. The story has been “developed in close collaboration with Avatar Studios”, though we don’t know if franchise creators Michael Dante DiMartino and Bryan Konietzko are involved in any way.
This looks to be an action RPG and not a turn-based affair, as a press release suggests “dynamic combat” and a quest to “master all four elements.” However, there’s no release date and no suggestion as to how far along the game is. Paramount says it’ll be available “soon”, but the company hasn't released a trailer or even artwork, so one person’s “soon” is another person’s “probably sometime in 2026.”
In any event, sign me up. I’m a big-time cabbage head, or honorary member of the Aang Gang or whatever fans are called. Saber Interactive has proven itself worthy with other pre-existing IPs, so why not this one? It could work.
The Avatar franchise has been relatively quiet lately, though the live-action Netflix show was renewed for two more seasons to finish up the story. Franchise creators DiMartino and Konietzko are making an animated film that follows an adult Aang and friends, but it’s been a while since we’ve heard anything about that.
This article originally appeared on Engadget at https://www.engadget.com/gaming/saber-interactive-is-making-a-aaa-rpg-based-on-avatar-the-last-airbender-171655351.html?src=rss
In What We're Listening To, Engadget writers and editors discuss some of the recent music releases we've had on repeat. This installment has everything from jazz standards to The Jesus Lizard.
Lady Gaga - Harlequin
I wasn’t even a minute into Harlequin before I had the realization, Oh, I am going to become so annoying in my love for this. Unfortunately for everyone in my life (and doubly so because I’m singing along), I’ve had it blasting all weekend since the surprise drop on Friday. Gaga is a powerhouse, and as much as I adore her take on pop, I’m always blown away when I hear her do jazz. And Harlequin is brimming with it.
Harlequin is a companion album to a soon-to-be-released movie (Joker: Folie à Deux) and almost entirely comprises cover songs — a combination that might typically put me off. But Gaga’s breezy versions of classics like “World on a String” and “Smile” are almost chilling. Her energy in tracks like “Gonna Build a Mountain” is through the roof. I could have done without “Oh, When the Saints,” but I’m really just nit-picking now. There are only two original songs on the album and they are completely different beasts, each impactful in its own way. “Happy Mistake” is a clear standout, and I’ll be softly weeping to that one for years to come.
Babe Haven - Nuisance
On the exact opposite end of the spectrum, I’ve been really into punk band Babe Haven’s most recent album, Nuisance, lately. It’s 25-ish minutes of queer femme rage and I can't get enough of it. Check it out on Bandcamp.
— Cheyenne MacDonald, Weekend Editor
The Jesus Lizard - Rack
Even laudatory reviews of comeback albums lean on expectations tempered with preemptive apology or pity praise. A comparison to headier days of musical urgency is inevitable; it stings for the same reasons as hearing "you look great for your age." I wish there were some way to take stock of Rack without that baggage, because The Jesus Lizard doesn't merely sound better than a band which took three decades off has any right to, it simply does not sound as though time has passed at all.
Rack broods with baffling inconspicuousness amid their oeuvre. Sure, "What If?" doesn't reach the slash and sprawl of earlier meanderings like "Rodeo in Joliet," but "Lord Godiva" glides on the most Duane Denison of Duane Denison riffs, lightning and crude oil. The manic physicality of David Yow's voice is unaltered — neither more harried after 60+ years of swinging at ghosts, nor attenuated by the effort.
So many bands seemingly frozen in amber reemerge denuded, as though covering themselves. They'd be frantically recapturing their glory days, if they had the energy to do anything frantic anymore. Rack, through sheer ferocity, is instead a band continuing to do exactly what it always has, just as well as it always has, and sounding really fucking cool doing it.
— Avery Ellis, Deputy Editor, Reports
Sabrina Carpenter - Short n' Sweet
There's a part of me that hates keeping up with pop music, and that's the part of me that cringes when I realize the last few albums I've listened to have been the ones by pop princesses Ariana Grande, Billie Eilish, Taylor Swift and more. That's also the part of me that resisted listening to Sabrina Carpenter's latest album for months (and probably the part of me that refused to watch the incredible Schitt's Creek until this year).
I say all that only to explain why I'm so late to appreciate the goodness that is Short n' Sweet. And the non-self-judgy part of me has unabashedly loved Carpenter's new music and been asking all my friends if they've listened to her songs. When I talked to my various friend groups about her, what became clear is how there's something for everyone, regardless of the variety in our tastes.
I'm a fan of R&B, hip hop and basically anything I can dance or sing to. The tracks "bet u wanna," "Taste" and "Feather" have become highly repeated items on my playlist and yes, I did go back into her older discography for some of those titles. However, my current absolute favorite is "Espresso." It's got a catchy hook, clever lyrics and a groovy beat that delicately straddles the line between upbeat and lowkey. I love the wordplay and how, when woven with the rhythm and melody, it initially sounded to me like Carpenter was singing in a different language. And as someone who works in tech and is occasionally a gamer, I especially adored the use of the words "up down left right," "switch" and Nintendo. Truly, rhyming "espresso" with "Nintendo" wasn't something I would have expected to work, but work it did.
But back to the point I was making earlier: Even if that sort of chill dance club vibe isn't your thing, there's plenty in Short n' Sweet that might appeal to you. I wasn't as huge a fan of "Please please please," for example, but I know friends who love it. And while "Bed Chem" and "Good Graces" aren't hitting my feels the same way "Espresso" is, those two are among her highest played songs on Spotify. I'm also starting to warm up to "Juno."
All that is to say, we all have different tastes. Maybe you're more of a Chappell Roan fan. I like some of her latest tracks too, just not as much as I've enjoyed Carpenter's. I also really enjoy the brilliance that is "Die With a Smile" by Bruno Mars and Lady Gaga, which is something I'll be adding to my karaoke duet repertoire, but am already playing less frequently nowadays. If you have a preference for music from the likes of Ariana Grande, NewJeans and Doja Cat, you'll probably have a good time with Sabrina Carpenter. And since I'm so late to the party, you probably have already.
— Cherlynn Low, Deputy Editor, Reviews
This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/what-were-listening-to-harlequin-or-lg-65-rack-and-more-003037241.html?src=rss
Update, September 30, 4:30PM ET: YouTube says it has reached a deal with SESAC, and that the affected songs will be returning to the platform soon. A spokesperson sent the following comment: "We're pleased that SESAC reconsidered our offer. We've reached a deal and content will come back up shortly. We appreciate everyone's patience during this time."
The original story, headlined "YouTube blocks songs from artists including Adele and Green Day amid licensing negotiations," follows unedited.
Songs from popular artists have begun to disappear from YouTube as the platform’s deal with the performing rights organization SESAC (Society of European Stage Authors and Composers) approaches its expiration date. As reported by Variety, certain songs by Adele, Green Day, Bob Dylan, R.E.M., Burna Boy and other artists have been blocked in the US, though their entire catalogs aren’t necessarily affected. Videos that have been pulled, like Adele’s “Rolling in the Deep,” now just show a black screen with the message: “This video contains content from SESAC. It is not available in your country.”
In a statement to Engadget, a YouTube spokesperson said the platform has been in talks with SESAC to renew the deal, but “despite our best efforts, we were unable to reach an equitable agreement before its expiration. We take copyright very seriously and as a result, content represented by SESAC is no longer available on YouTube in the US. We are in active conversations with SESAC and are hoping to reach a new deal as soon as possible.” According to a source that spoke to Variety, however, the deal hasn’t even expired yet — it’ll reportedly terminate sometime next week — and the move on YouTube’s part may be a negotiation tactic. SESAC has not yet released a statement.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/songs-from-adele-and-others-are-returning-to-youtube-as-sesac-agrees-to-a-new-deal-151741508.html?src=rss
The team behind the upcoming Minecraft movie shared a new clip during Minecraft Live that expands on the brief crafting moment we saw in the polarizing first teaser. The scene comes in the middle of a discussion between Mojang creative director Torfi Frans Olafsson and A Minecraft Movie director Jared Hess, at 4:51. The segment also gives us our first look at the movie’s interpretation of a Minecraft bee, which I’m not quite sure how to feel about yet; you can find that toward the end of the video.
A Minecraft Movie is slated for release in April 2025 and stars Jack Black as Steve, alongside Jason Momoa, Danielle Brooks, Emma Myers and Sebastian Eugene Hansen. Plans for it were first announced a decade ago, and potential release dates were set and scrapped on multiple occasions in the time since. At long last, it’s actually now happening — for better or worse.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/heres-a-peek-at-how-a-minecraft-movie-will-handle-crafting-220454126.html?src=rss
Over the last year, Meta has made its AI assistant so ubiquitous in its apps it’s almost hard to believe that Meta AI is only a year old. But, one year after its launch at the last Connect, the company is infusing Meta AI with a load of new features in the hopes that more people will find its assistant useful.
One of the biggest changes is that users will be able to have voice chats with Meta AI. Up till now, the only way to speak with Meta AI was via the Ray-Ban Meta smart glasses. And like last year’s Meta AI launch, the company tapped a group of celebrities for the change.
Meta AI will be able to take on the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell, in addition to a handful of more generic voices. While the company is hoping the celebrities will sell users on Meta AI’s new abilities, it’s worth noting that the company quietly phased out its celebrity chatbot personas that launched at last year’s Connect.
In addition to voice chat support, Meta AI is also getting new image capabilities. Meta AI will be able to respond to requests to change and edit photos from text chats within Instagram, Messenger and WhatsApp. The company says that users can ask the AI to add or remove objects or to change elements of an image, like swapping a background or clothing item.
Meta is testing AI-generated content recommendations in the main feed of Facebook and Instagram.
Meta
The new abilities arrive alongside the company’s latest Llama 3.2 model. The new iteration, which comes barely two months after the Llama 3.1 release, is the first to have vision capabilities and can “bridge the gap between vision and language by extracting details from an image, understanding the scene, and then crafting a sentence or two that could be used as an image caption to help tell the story.” Llama 3.2 is “competitive” on “image recognition and a range of visual understanding tasks” compared with similar offerings from ChatGPT and Claude, Meta says.
The social network is testing other, potentially controversial, ways to bring AI into the core features of its main apps. The company will test AI-generated translation features for Reels with “automatic dubbing and lip syncing.” According to Meta, that “will simulate the speaker’s voice in another language and sync their lips to match.” It will arrive first to “some creators’ videos” in English and Spanish in the US and Latin America, though the company hasn't shared details on rollout timing.
Meta also plans to experiment with AI-generated content directly in the main feeds on Facebook and Instagram. With the test, Meta AI will surface AI-generated images that are meant to be personalized to each user’s interests and past activity. For example, Meta AI could surface an image “imagined for you” that features your face.
This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-ai-can-now-talk-to-you-and-edit-your-photos-172853219.html?src=rss
Meta has secured deals with several actors, including Kristen Bell, John Cena and Judi Dench, to use their voices for the Meta AI chatbot, Reuters reports. Users will be able to talk to the chatbot while listening to answers in the voice of their favorite celebrities. Other celebrities include Awkwafina and Keegan-Michael Key, a source told Reuters.
Besides these five voices, the source also said that there are more generic voice options if users prefer them. All voices will be available this week in the US and other English-speaking regions, though the source didn’t give any other specific locations.
The news follows a report from last month which claimed Meta was negotiating with actors to secure the rights to use their voices for its AI projects. Now, the deals have reportedly been struck, and the chatbot found when using Facebook, Messenger, Instagram and WhatsApp could feature these famous voices soon. The company had intended to finalize agreements before the Connect conference, where Reuters’ source says it will announce the new voice options.
This article originally appeared on Engadget at https://www.engadget.com/ai/metas-ai-chatbot-will-soon-speak-in-the-voices-of-john-cena-and-other-celebrities-160603365.html?src=rss
Danny Boyle’s zombie sequel 28 Years Later was shot using several iPhone 15 Pro Max smartphones, according to a report by Wired. With a budget of around $75 million, that reportedly makes it the biggest-budget movie ever made using iPhones.
There are some major caveats worth going over. First of all, the sourcing on the story is anonymous, as the film’s staff was required to sign an NDA. Also, the entire film wasn’t shot using last year’s high-end Apple smartphone. Engadget has confirmed that Boyle and his team used a bunch of different cameras, with the iPhone 15 Pro Max being just one tool.
Finally, it’s not like the director just plopped the smartphone on a tripod and called it a day. Each iPhone looks to have been adapted to integrate with full-frame DSLR lenses. Speaking of, those professional-grade lenses cost a small fortune. The phones were also nestled in protective cages.
Even if the phones weren’t exclusively used to make this movie, it’s still something of a full-circle moment for Boyle and his team. The original 28 Days Later was shot primarily on a prosumer-grade camcorder that cost $4,000 at the time. This camcorder recorded footage to MiniDV tapes.
28 Years Later is the third entry in the franchise and is due to hit theaters in June 2025. The film stars Jodie Comer, Aaron Taylor-Johnson, Ralph Fiennes and Cillian Murphy. This will be the first of three new films set in the universe of fast-moving rage zombies. Plot details are non-existent, but all three upcoming movies are being written by Alex Garland. He co-wrote the first one and has since gone on to direct genre fare like Ex Machina, Annihilation and, most recently, Civil War. He also made a truly underrated TV show called Devs.
As for the intersection of smartphones and Hollywood, several films have been shot with iPhones. These include Sean Baker’s Tangerine and Steven Soderbergh’s Unsane.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/28-years-later-was-partially-shot-on-an-iphone-15-pro-max-182036483.html?src=rss