What we’re listening to: Harlequin (or LG 6.5), Rack and more

In What We're Listening To, Engadget writers and editors discuss some of the recent music releases we've had on repeat. This installment has everything from jazz standards to The Jesus Lizard.

I wasn’t even a minute into Harlequin before I had the realization, Oh, I am going to become so annoying in my love for this. Unfortunately for everyone in my life (and doubly so because I’m singing along), I’ve had it blasting all weekend since the surprise drop on Friday. Gaga is a powerhouse, and as much as I adore her take on pop, I’m always blown away when I hear her do jazz. And Harlequin is brimming with it. 

Harlequin is a companion album to a soon-to-be-released movie (Joker: Folie à Deux) and almost entirely comprises cover songs — a combination that might typically put me off. But Gaga’s breezy versions of classics like “World on a String” and “Smile” are almost chilling. Her energy in tracks like “Gonna Build a Mountain” is through the roof. I could have done without “Oh, When the Saints,” but I’m really just nit-picking now. There are only two original songs on the album and they are completely different beasts, each impactful in its own way. “Happy Mistake” is a clear standout, and I’ll be softly weeping to that one for years to come.

On the exact opposite end of the spectrum, I’ve been really into punk band Babe Haven’s most recent album, Nuisance. It’s 25-ish minutes of queer femme rage and I can't get enough of it. Check it out on Bandcamp.

— Cheyenne MacDonald, Weekend Editor

Even laudatory reviews of comeback albums lean on expectations tempered with preemptive apology or pity praise. A comparison to headier days of musical urgency is inevitable; it stings for the same reasons as hearing "you look great for your age." I wish there were some way to take stock of Rack without that baggage, because The Jesus Lizard doesn't merely sound better than a band that took three decades off has any right to; it simply does not sound as though time has passed at all.

Rack broods with baffling inconspicuousness amid their oeuvre. Sure, "What If?" doesn't reach the slash and sprawl of earlier meanderings like "Rodeo in Joliet," but "Lord Godiva" glides on the most Duane Denison of Duane Denison riffs, lightning and crude oil. The manic physicality of David Yow's voice is unaltered — neither more harried after 60+ years of swinging at ghosts, nor attenuated by the effort. 

So many bands seemingly frozen in amber reemerge denuded, as though covering themselves. They'd be frantically recapturing their glory days, if they had the energy to do anything frantic anymore. Rack, through sheer ferocity, is instead a band continuing to do exactly what it always has, just as well as it always has, and sounding really fucking cool doing it.

Avery Ellis, Deputy Editor, Reports

There's a part of me that hates keeping up with pop music, and that's the part of me that cringes when I realize the last few albums I've listened to have been the ones by pop princesses Ariana Grande, Billie Eilish, Taylor Swift and more. That's also the part of me that resisted listening to Sabrina Carpenter's latest album for months (and probably the part of me that refused to watch the incredible Schitt's Creek until this year).

I say all that only to explain why I'm so late to appreciate the goodness that is Short n' Sweet. And the non-self-judgy part of me has unabashedly loved Carpenter's new music and been asking all my friends if they've listened to her songs. When I talked to my various friend groups about her, what became clear is how there's something for everyone, regardless of the variety in our tastes.

I'm a fan of R&B, hip hop and basically anything I can dance or sing to. The tracks "bet u wanna," "Taste" and "Feather" have become highly repeated items on my playlist and yes, I did go back into her older discography for some of those titles. However, my current absolute favorite is "Espresso." It's got a catchy hook, clever lyrics and a groovy beat that delicately straddles the line between upbeat and lowkey. I love the wordplay and how, when woven with the rhythm and melody, it initially sounded to me like Carpenter was singing in a different language. And as someone who works in tech and is occasionally a gamer, I especially adored the use of the words "up down left right," "switch" and Nintendo. Truly, rhyming "espresso" with "Nintendo" wasn't something I would have expected to work, but work it did.

But back to the point I was making earlier: Even if that sort of chill dance club vibe isn't your thing, there's plenty in Short n' Sweet that might appeal to you. I wasn't as huge a fan of "Please Please Please," for example, but I know friends who love it. And while "Bed Chem" and "Good Graces" aren't hitting my feels the same way "Espresso" is, those two are among her most-played songs on Spotify. I'm also starting to warm up to "Juno."

All that is to say, we all have different tastes. Maybe you're more of a Chappell Roan fan. I like some of her latest tracks too, just not as much as I've enjoyed Carpenter's. I also really enjoy the brilliance that is "Die With a Smile" by Bruno Mars and Lady Gaga, which is something I'll be adding to my karaoke duet repertoire, but am already playing less frequently nowadays. If you have a preference for music from the likes of Ariana Grande, NewJeans and Doja Cat, you'll probably have a good time with Sabrina Carpenter. And since I'm so late to the party, you probably have already.

Cherlynn Low, Deputy Editor, Reviews

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/what-were-listening-to-harlequin-or-lg-65-rack-and-more-003037241.html?src=rss

Songs from Adele and others are returning to YouTube as SESAC agrees to a new deal

Update, September 30, 4:30PM ET: YouTube says it has reached a deal with SESAC, and that the affected songs will be returning to the platform soon. A spokesperson sent the following comment: "We're pleased that SESAC reconsidered our offer. We've reached a deal and content will come back up shortly. We appreciate everyone's patience during this time." 

The original story, headlined "YouTube blocks songs from artists including Adele and Green Day amid licensing negotiations," follows unedited.


Songs from popular artists have begun to disappear from YouTube as the platform’s deal with the performing rights organization SESAC (Society of European Stage Authors and Composers) approaches its expiration date. As reported by Variety, certain songs by Adele, Green Day, Bob Dylan, R.E.M., Burna Boy and other artists have been blocked in the US, though their entire catalogs aren’t necessarily affected. Videos that have been pulled, like Adele’s “Rolling in the Deep,” now just show a black screen with the message: “This video contains content from SESAC. It is not available in your country.”


In a statement to Engadget, a YouTube spokesperson said the platform has been in talks with SESAC to renew the deal, but “despite our best efforts, we were unable to reach an equitable agreement before its expiration. We take copyright very seriously and as a result, content represented by SESAC is no longer available on YouTube in the US. We are in active conversations with SESAC and are hoping to reach a new deal as soon as possible.” According to a source that spoke to Variety, however, the deal hasn’t even expired yet — it’ll reportedly terminate sometime next week — and the move on YouTube’s part may be a negotiation tactic. SESAC has not yet released a statement.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/songs-from-adele-and-others-are-returning-to-youtube-as-sesac-agrees-to-a-new-deal-151741508.html?src=rss

Here’s a peek at how A Minecraft Movie will handle crafting

The team behind the upcoming Minecraft movie shared a new clip during Minecraft Live that expands on the brief crafting moment we saw in the polarizing first teaser. The scene appears at 4:51, in the middle of a discussion between Mojang creative director Torfi Frans Olafsson and A Minecraft Movie director Jared Hess. The segment also gives us our first look at the movie’s interpretation of a Minecraft bee, which I’m not quite sure how to feel about yet. You can find that toward the end of the video.

A Minecraft Movie is slated for release in April 2025 and stars Jack Black as Steve, alongside Jason Momoa, Danielle Brooks, Emma Myers and Sebastian Eugene Hansen. Plans for it were first announced a decade ago, and potential release dates were set and scrapped on multiple occasions in the time since. At long last, it’s actually now happening — for better or worse.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/heres-a-peek-at-how-a-minecraft-movie-will-handle-crafting-220454126.html?src=rss

Meta AI can now talk to you and edit your photos

Over the last year, Meta has made its AI assistant so ubiquitous in its apps it’s almost hard to believe that Meta AI is only a year old. But, one year after its launch at the last Connect, the company is infusing Meta AI with a load of new features in the hopes that more people will find its assistant useful.

One of the biggest changes is that users will be able to have voice chats with Meta AI. Up till now, the only way to speak with Meta AI was via the Ray-Ban Meta smart glasses. And like last year’s Meta AI launch, the company tapped a group of celebrities for the change.

Meta AI will be able to take on the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell, in addition to a handful of more generic voices. While the company is hoping the celebrities will sell users on Meta AI’s new abilities, it’s worth noting that the company quietly phased out its celebrity chatbot personas that launched at last year’s Connect.

In addition to voice chat support, Meta AI is also getting new image capabilities. Meta AI will be able to respond to requests to change and edit photos from text chats within Instagram, Messenger and WhatsApp. The company says that users can ask the AI to add or remove objects or to change elements of an image, like swapping a background or clothing item.

Meta is testing AI-generated content recommendations in the main feed of Facebook and Instagram. (Image: Meta)

The new abilities arrive alongside the company’s latest Llama 3.2 model. The new iteration, which comes barely two months after the Llama 3.1 release, is the first to have vision capabilities and can “bridge the gap between vision and language by extracting details from an image, understanding the scene, and then crafting a sentence or two that could be used as an image caption to help tell the story.” Llama 3.2 is “competitive” on “image recognition and a range of visual understanding tasks” compared with similar offerings from ChatGPT and Claude, Meta says.

The social network is testing other, potentially controversial, ways to bring AI into the core features of its main apps. The company will test AI-generated translation features for Reels with “automatic dubbing and lip syncing.” According to Meta, that “will simulate the speaker’s voice in another language and sync their lips to match.” It will arrive first to “some creators’ videos” in English and Spanish in the US and Latin America, though the company hasn't shared details on rollout timing.

Meta also plans to experiment with AI-generated content directly in the main feeds on Facebook and Instagram. With the test, Meta AI will surface AI-generated images that are meant to be personalized to each user’s interests and past activity. For example, Meta AI could surface an image “imagined for you” that features your face.

Catch up on all the news from Meta Connect 2024!

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-ai-can-now-talk-to-you-and-edit-your-photos-172853219.html?src=rss


Meta’s AI chatbot will soon speak in the voices of John Cena and other celebrities

Meta has secured deals with several actors, including Kristen Bell, John Cena and Judi Dench, to use their voices for the Meta AI chatbot, Reuters reports. Users will be able to talk to the chatbot while listening to answers in the voice of their favorite celebrities. Other celebrities include Awkwafina and Keegan-Michael Key, a source told Reuters.

Besides these five voices, the source also said that there are more generic voice options if users prefer them. All voices will be available this week in the US and other English-speaking regions, though the source didn’t give any other specific locations.

The news follows a report from last month which claimed Meta was negotiating with actors to secure the rights to use their voices for its AI projects. Now, the deals have reportedly been struck, and the chatbot found when using Facebook, Messenger, Instagram and WhatsApp could feature these famous voices soon. The company had intended to finalize agreements before the Connect conference, where Reuters’ source says it will announce the new voice options.

This article originally appeared on Engadget at https://www.engadget.com/ai/metas-ai-chatbot-will-soon-speak-in-the-voices-of-john-cena-and-other-celebrities-160603365.html?src=rss


28 Years Later was partially shot on an iPhone 15 Pro Max

Danny Boyle’s zombie sequel 28 Years Later was shot using several iPhone 15 Pro Max smartphones, according to a report by Wired. With a budget of around $75 million, that makes it the biggest-budget movie ever made using iPhones.

There are some major caveats worth going over. First of all, the sourcing on the story is anonymous, as the film’s staff was required to sign an NDA. Also, the entire film wasn’t shot using last year’s high-end Apple smartphone. Engadget has confirmed that Boyle and his team used a bunch of different cameras, with the iPhone 15 Pro Max being just one tool.

Finally, it’s not like the director just plopped the smartphone on a tripod and called it a day. Each iPhone looks to have been adapted to integrate with full-frame DSLR lenses. Speaking of, those professional-grade lenses cost a small fortune. The phones were also nestled in protective cages.

Even if the phones weren’t exclusively used to make this movie, it’s still something of a full-circle moment for Boyle and his team. The original 28 Days Later was shot primarily on a prosumer-grade camcorder that cost $4,000 at the time. This camcorder recorded footage to MiniDV tapes.

28 Years Later is the third entry in the franchise and is due to hit theaters in June 2025. The film stars Jodie Comer, Aaron Taylor-Johnson, Ralph Fiennes and Cillian Murphy. This will be the first of three new films set in the universe of fast-moving rage zombies. Plot details are non-existent, but all three upcoming movies are being written by Alex Garland. He co-wrote the first one and has since gone on to direct genre fare like Ex Machina, Annihilation and, most recently, Civil War. He also made a truly underrated TV show called Devs.

As for the intersection of smartphones and Hollywood, several films have been shot with iPhones. These include Sean Baker’s Tangerine and Steven Soderbergh’s Unsane.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/28-years-later-was-partially-shot-on-an-iphone-15-pro-max-182036483.html?src=rss

Netflix teases the next seasons of Avatar, Squid Game and Arcane at Geeked Week

At its in-person fan event for Geeked Week this year, Netflix has shown teasers and sneak peeks of its upcoming shows, including the second season of Avatar: The Last Airbender. In addition to revealing that the new season is already in production, Netflix has also announced that Miya Cech (Are You Afraid of the Dark?) is playing earthbending master Toph. 

A teaser for Squid Game season 2 shows Lee Jung-jae wearing his player 456 uniform again to compete in another round of deadly games with other contestants hoping to win millions of dollars. The next season of Squid Game will start streaming on December 26. 

The streaming giant has also revealed that the One Piece live action's Mr. 0 and Miss All-Sunday will be portrayed by Joe Manganiello and Lera Abova, respectively. And for Wednesday fans, Netflix has released a teaser for the second season of Wednesday that will arrive sometime in 2025. 

For animation fans, Netflix has released a teaser for Tom Clancy's Splinter Cell: Deathwatch, with Liev Schreiber voicing protagonist Sam Fisher. It has also given viewers a short look at a new Devil May Cry animated series by Korean company Studio Mir, which is coming in April 2025. 

Netflix has teased a new Tomb Raider animated series that's coming in October and a Rebel Moon game that's arriving in 2025, as well. Finally, the company has given Arcane fans a clear schedule for the final chapter of the critically acclaimed show: Act 1 will be available to stream on November 9, followed by Act 2 on November 16. A third and final Act will close out the show with a proper ending on November 23.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/netflix-teases-the-next-seasons-of-avatar-squid-game-and-arcane-at-geeked-week-035246559.html?src=rss