Amazon accused of using AI to ‘replicate the voices’ of actors in Road House remake

Amazon is being sued by the writer of the original 1989 Patrick Swayze version of the film Road House over alleged copyright infringement in the movie's remake, The Los Angeles Times has reported. Screenwriter R. Lance Hill accuses Amazon and MGM Studios of using AI to clone actors' voices in the new production in order to finish it before the copyright expired. 

Hill said he filed a petition with the US Copyright Office in November 2021 to reclaim the rights to his original screenplay, which forms the basis of the new film. At that point, the rights were owned by Amazon Studios as part of its acquisition of MGM, but were set to expire in November 2023. Hill alleges that once the copyright expired, the rights would revert to him. 

According to the lawsuit, Amazon Studios rushed ahead with the project anyway in order to finish it before the copyright deadline. Since production was stymied by the actors' strike, Hill alleges Amazon used AI to “replicate the voices” of the actors who worked on the 2024 remake. Such use would violate the terms of the deal struck between the union and major studios, including Amazon. 

The claim is complicated by the fact that Hill signed a "work-made-for-hire" deal with the original producer, United Artists. That effectively means that the studio hiring the writer would be both the owner and copyright holder of the work. Hill, however, dismissed that as "boilerplate" typically used in contracts. 

The lawsuit seeks to block the release of the film, set to bow at SXSW on March 8 before (controversially) heading directly to streaming on Prime Video on March 21. 

Amazon denies the claims, with a spokesperson telling The Verge that "the studio expressly instructed the filmmakers to NOT use AI in this movie." It added that if AI was used, it appeared only in early versions of the film, and that filmmakers were later told to remove any "AI or non-SAG AFTRA actors" for the final version. The company said that other allegations are "categorically false" and that it believes its copyright on the original Road House has yet to expire. 

This article originally appeared on Engadget at https://www.engadget.com/amazon-accused-of-using-ai-to-replicate-the-voices-of-actors-in-road-house-remake-054408057.html?src=rss

Google is reportedly paying publishers thousands of dollars to use its AI to write stories

Google has been quietly striking deals with some publishers to use new generative AI tools to publish stories, according to a report in Adweek. The deals, reportedly worth tens of thousands of dollars a year, are apparently part of the Google News Initiative (GNI), a six-year-old program that funds media literacy projects, fact-checking tools, and other resources for newsrooms. But the move into generative AI publishing tools would be a new, and likely controversial, step for the company.

According to Adweek, the program is currently targeting a “handful” of smaller publishers. “The beta tools let under-resourced publishers create aggregated content more efficiently by indexing recently published reports generated by other organizations, like government agencies and neighboring news outlets, and then summarizing and publishing them as a new article,” Adweek reports.

In a statement to Engadget, a Google spokesperson denied the tools were being used to "re-publish" the work of other publications. "This speculation about this tool being used to re-publish other outlets’ work is inaccurate," the spokesperson said. "The experimental tool is being responsibly designed to help small, local publishers produce high quality journalism using factual content from public data sources – like a local government’s public information office or health authority. Publishers remain in full editorial control of what is ultimately published on their site."

It’s not clear exactly how much publishers are being paid under the arrangement, though Adweek says it’s a “five-figure sum” per year. In exchange, media organizations reportedly agree to publish at least three articles a day, one weekly newsletter and one monthly marketing campaign using the tools.

Of note, publishers in the program are apparently not required to disclose their use of AI, nor are the aggregated websites informed that their content is being used to create AI-written stories on other sites. The AI-generated copy reportedly uses a color-coded system to indicate the reliability of each section of text to help human editors review the content before publishing. 

In a statement to Adweek, Google said it was “in the early stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work.” The spokesperson added that the AI tools “are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”

It’s not clear what Google is getting out of the arrangement, though it wouldn’t be the first tech company to pay newsrooms to use proprietary tools. The arrangement bears some similarities to the deals Facebook once struck with publishers to create live video content in 2016. The social media company made headlines as it paid publishers millions of dollars to juice its nascent video platform and dozens of media outlets opted to “pivot to video” as a result.

Those deals later evaporated after Facebook discovered it had wildly miscalculated the number of views such content was getting. The social network ended its live video deals soon after and has since tweaked its algorithm to recommend less news content. The media industry’s “pivot to video” cost hundreds of journalists their jobs, by some estimates.

While the GNI program appears to be much smaller than what Facebook attempted nearly a decade ago with live video, it will likely raise fresh scrutiny over the use of generative AI tools by publishers. Publications like CNET and Sports Illustrated have been widely criticized for attempting to pass off AI-authored articles as written by human staffers.

Update February 28, 2024, 1:10 PM ET: This story has been edited to add additional information from a Google spokesperson. 

This article originally appeared on Engadget at https://www.engadget.com/google-is-reportedly-paying-publishers-thousands-of-dollars-to-use-its-ai-to-write-stories-215943624.html?src=rss

A Paranormal Activity game is coming in 2026 and it might actually be good

One of the most successful horror movie franchises of the last 20 years is coming to a gaming system near you. Paramount Game Studios has teamed up with DreadXP and DarkStone Digital (aka solo developer Brian Clarke) to create Paranormal Activity: Found Footage. The horror game is slated to hit multiple platforms in 2026.

Paranormal Activity: Found Footage will build on the lore and the world that was established in the seven-film series, which debuted in 2007. It will be the first non-virtual reality Paranormal Activity game.

As the title suggests, the game will use the found-footage format of the movies. Details are otherwise slim for now, though Paranormal Activity: Found Footage will feature what's said to be an advanced "haunt system" that will dynamically change the intensity and kinds of scares players will face based on their actions. Several other games have used a dynamic scare system, including Don't Scream (an early access title that picked up some buzz a few months ago), so it'll be interesting to see how DarkStone Digital uses that here.

Clarke previously created the well-reviewed first-person horror game The Mortuary Assistant. "My latest project is a Paranormal Activity game," Clarke, who is also a co-director of publisher DreadXP, wrote on X. "I am beyond excited to be doing this as I have loved this series from the very beginning and it heavily shaped my style of horror."

This article originally appeared on Engadget at https://www.engadget.com/a-paranormal-activity-game-is-coming-in-2026-and-it-might-actually-be-good-193120056.html?src=rss

TikTok is muting all Universal Music-related songs

TikTok is being forced to take down more music from its platform as a royalties spat with Universal Music Group (UMG) rumbles on. UMG recently yanked recordings it owns or distributes from TikTok including tracks from the likes of superstars Taylor Swift, Billie Eilish and The Weeknd. The standoff is now impacting songs published by UMG.

"We are in the process of carrying out Universal Music Group's requirement to remove all songs that have been written (or co-written) by a songwriter signed to Universal Music Publishing Group (UMPG), based on information they have provided," TikTok said in a statement. "Their actions not only affect the songwriters and artists that they represent, but now also impact many artists and songwriters not signed to universal." TikTok added that it is still committed to "reaching an equitable agreement" with UMG.

Due to an issue called split copyrights, if a UMPG-contracted writer has contributed to a song in any way, that track has to be removed from TikTok. So artists who have collaborated with the likes of Swift, Adele, Justin Bieber, Mariah Carey, Ice Spice, Elton John, Harry Styles and SZA will see their songs disappearing from TikTok and being muted on videos that currently use them. The move will prevent more artists from plugging their work on the most important platform for promoting music.

According to the BBC, UMG removed around three million songs from TikTok after an agreement over its recording catalog expired. UMG's deal with TikTok over its publishing catalog (which covers some four million songs) ends later this week, at which point all relevant tracks will have vanished from the short-form video service.

Update 2/28 3:30PM ET: Added confirmation that TikTok is removing songs by UMPG-contracted songwriters.

This article originally appeared on Engadget at https://www.engadget.com/tiktok-is-muting-more-songs-amid-its-tussle-with-universal-music-161839190.html?src=rss

NVIDIA GeForce Now gets pre-roll ads for free users

Starting on March 5, GeForce Now users on the free tier will find themselves faced with ads while they're waiting for their turn to play. NVIDIA has sent out an email to free users, telling them that they'll experience "up to two minutes of video sponsorship messages before each gaming session while in queue." The ads will help support the free service, the company said. NVIDIA also believes that the ads will lead to shorter wait times for free users. Company spokesperson Stephanie Ngo confirmed the change to The Verge. 

GeForce Now gamers in the free tier can enjoy one hour of gaming at no cost, but they get cut off and have to wait in the queue every time their hour-long gaming session is done. The most avid gamers who don't want to pay for GeForce Now's $10 Priority or $20 Ultimate subscription tiers will have to sit through ads multiple times. That said, the ads only show up in the queue and not in the middle of a user's playtime, so they're not intrusive in the way Netflix's or Amazon Prime Video's ads are. 

NVIDIA recently became the third most valuable company in the United States, overtaking Alphabet, and the fourth overall worldwide. The company is now valued at $1.83 trillion and has an 80 percent share in the high-end chip market, thanks to the AI boom over the past year. 

This article originally appeared on Engadget at https://www.engadget.com/nvidia-geforce-now-gets-pre-roll-ads-for-free-users-125754649.html?src=rss

Google explains why Gemini’s image generation feature overcorrected for diversity

After promising to fix Gemini's image generation feature and then pausing it altogether, Google has published a blog post offering an explanation for why its technology overcorrected for diversity. Prabhakar Raghavan, the company's Senior Vice President for Knowledge & Information, explained that Google's efforts to ensure that the chatbot would generate images showing a wide range of people "failed to account for cases that should clearly not show a range." Further, its AI model grew to become "way more cautious" over time and refused to answer prompts that weren't inherently offensive. "These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong," Raghavan wrote.

Google made sure that Gemini's image generation couldn't create violent or sexually explicit images of real people and that the photos it whips up would feature people of various ethnicities and with different characteristics. But if a user asks it to create images of people who are supposed to be of a certain ethnicity or sex, it should be able to do so. As users recently found out, Gemini would refuse to produce results for prompts that specifically requested white people. The prompt "Generate a glamour shot of a [ethnicity or nationality] couple," for instance, worked for "Chinese," "Jewish" and "South African" requests but not for ones requesting an image of white people. 

Gemini also has issues producing historically accurate images. When users requested images of German soldiers during the Second World War, Gemini generated images of Black men and Asian women wearing Nazi uniforms. When we tested it out, we asked the chatbot to generate images of "America's founding fathers" and "Popes throughout the ages," and it showed us photos depicting people of color in those roles. Upon asking it to make its images of the Pope historically accurate, it refused to generate any result. 

Raghavan said that Google didn't intend for Gemini to refuse to create images of any particular group or to generate photos that were historically inaccurate. He also reiterated Google's promise that it will work on improving Gemini's image generation. That entails "extensive testing," though, so it may take some time before the company switches the feature back on. At the moment, if a user tries to get Gemini to create an image, the chatbot responds with: "We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."

This article originally appeared on Engadget at https://www.engadget.com/google-explains-why-geminis-image-generation-feature-overcorrected-for-diversity-121532787.html?src=rss