The Motion Picture Association will work with Congress to start blocking piracy sites in the US

At CinemaCon this year, Motion Picture Association Chairman and CEO Charles Rivkin revealed a plan that would make "sailing the digital seas" under the Jolly Roger banner just a bit harder. Rivkin said the association is going to work with Congress to establish and enforce site-blocking legislation in the United States. He added that almost 60 countries use site-blocking as a tool against piracy, "including leading democracies and many of America's closest allies." The only reason the US isn't one of them, he continued, is the "lack of political will, paired with outdated understandings of what site-blocking actually is, how it functions, and who it affects."

With such a rule in place, "film and television, music and book publishers, sports leagues and broadcasters" could ask the courts to order ISPs to block websites that share stolen content. Rivkin, arguing in favor of site-blocking, explained that the practice doesn't impact legitimate businesses. He said legislation around the practice would require detailed evidence proving that a given entity is engaged in illegal activity, and would let alleged perpetrators appear in court to defend themselves.
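
For context on the mechanics, site-blocking orders in the countries that use them are typically carried out by ISPs at the DNS level, with resolvers refusing to answer queries for blocked domains. The Python sketch below is purely illustrative, with a hypothetical hardcoded blocklist; real deployments rely on court-maintained lists and usually serve a block page rather than an error.

```python
import socket

# Hypothetical court-ordered blocklist an ISP resolver might consult.
BLOCKED_DOMAINS = {"piracy-example.net", "another-blocked-site.org"}

def resolve(hostname: str) -> str:
    """Resolve a hostname, refusing domains on the blocklist."""
    if hostname.lower().rstrip(".") in BLOCKED_DOMAINS:
        # A production resolver would answer NXDOMAIN or redirect to a
        # court-mandated block page instead of raising an error.
        raise LookupError(f"{hostname} is subject to a blocking order")
    return socket.gethostbyname(hostname)

print(resolve("example.com"))  # resolves normally
```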

Rivkin cited FMovies, an illegal film streamer, as an example of how site-blocking in the US would minimize traffic to piracy websites. FMovies reportedly gets 160 million visits per month, a third of which come from the US. If a similar rule existed in the country, the website's traffic would, theoretically, drop drastically. The MPA's chairman also addressed previous efforts to enforce site-blocking in the US, which critics at the time warned would "break the internet" and could stifle free speech. While he insisted that other countries' experiences since then have proven those predictions wrong, he promised that the organization takes those concerns seriously.

He ended his speech by asking for the support of theater owners in the country. "The MPA is leading this charge in Washington," he said. "And we need the voices of theater owners — your voices — right by our side. Because this action will be good for all of us: Content creators. Theaters. Our workforce. Our country."

This article originally appeared on Engadget at https://www.engadget.com/the-motion-picture-association-will-work-with-congress-to-start-blocking-piracy-sites-in-the-us-062111261.html?src=rss

OpenAI and Google reportedly used transcriptions of YouTube videos to train their AI models

OpenAI and Google trained their AI models on text transcribed from YouTube videos, potentially violating creators’ copyrights, according to The New York Times. The report, which describes the lengths OpenAI, Google and Meta have gone to in order to maximize the amount of data they can feed to their AIs, cites numerous people with knowledge of the companies’ practices. It comes just days after YouTube CEO Neal Mohan said in an interview with Bloomberg Originals that OpenAI’s alleged use of YouTube videos to train its new text-to-video generator, Sora, would go against the platform’s policies.

According to the NYT, OpenAI used its Whisper speech recognition tool to transcribe more than one million hours of YouTube videos, which were then used to train GPT-4. The Information previously reported that OpenAI had used YouTube videos and podcasts to train the two AI systems, and OpenAI president Greg Brockman was reportedly among the people who collected those videos. Matt Bryant, a spokesperson for Google, told NYT that “unauthorized scraping or downloading of YouTube content” is not allowed under the company’s rules, and that Google was unaware of any such use by OpenAI.
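
Whisper itself is open source, and turning audio into text with it takes only a few lines. Below is a minimal, illustrative Python sketch using the open-source whisper package; the audio file name is hypothetical, and this shows the general technique rather than OpenAI's internal pipeline.

```python
import whisper  # pip install openai-whisper

# Load one of the open-source Whisper checkpoints ("base" is small and fast).
model = whisper.load_model("base")

# Transcribe a local audio file (hypothetical name). At scale, text like
# this could be accumulated into a corpus for language-model training.
result = model.transcribe("downloaded_video_audio.mp3")
print(result["text"])
```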

The report, however, claims there were people at Google who knew but did not take action against OpenAI because Google was using YouTube videos to train its own AI models. Google told NYT it only does so with videos from creators who have agreed to this. Engadget has reached out to Google and OpenAI for comment.

The NYT report also claims Google asked a team to tweak its privacy policy in June 2023 to more broadly cover its use of publicly available content, including Google Docs and Google Sheets, to train its AI models and products. The changes, which Google says were made for clarity's sake, were published in July. Bryant told NYT that this type of data is only used with the permission of users who opt into Google’s experimental features tests, and that the company “did not start training on additional types of data based on this language change.” The change added Bard as an example of what that data might be used for. 

Correction, April 6, 2024, 3:45PM ET: This story originally stated that Google updated its privacy policy in June 2022. The policy update was actually made in 2023. We apologize for the error.

This article originally appeared on Engadget at https://www.engadget.com/openai-and-google-reportedly-used-transcriptions-of-youtube-videos-to-train-their-ai-models-163531073.html?src=rss

Meta plans to more broadly label AI-generated content

Meta says that its current approach to labeling AI-generated content is too narrow and that it will soon apply a "Made with AI" badge to a broader range of videos, audio and images. Starting in May, it will append the label to media when it detects industry-standard AI image indicators or when users acknowledge that they’re uploading AI-generated content. The company may also apply the label to posts that fact-checkers flag, though it's likely to downrank content that's been identified as false or altered.
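
The "industry-standard indicators" in question include provenance metadata such as the IPTC DigitalSourceType values for generative media and C2PA content credentials, standards Meta has pointed to. As a rough illustration only, the Python sketch below naively scans a file's raw bytes for those marker strings; a real detector would parse the metadata properly, and the file name is hypothetical.

```python
from pathlib import Path

# Byte strings associated with common AI-provenance standards: the IPTC
# DigitalSourceType values for generative media, and C2PA manifests.
AI_MARKERS = (
    b"trainedAlgorithmicMedia",
    b"compositeWithTrainedAlgorithmicMedia",
    b"c2pa",
)

def has_ai_indicator(image_path: str) -> bool:
    """Naively scan a file's raw bytes for known AI-provenance markers."""
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_MARKERS)

print(has_ai_indicator("upload.jpg"))  # hypothetical file name
```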

The company announced the measure in the wake of an Oversight Board decision regarding a video that was maliciously edited to depict President Joe Biden touching his granddaughter inappropriately. The Oversight Board agreed with Meta's decision not to take down the video from Facebook as it didn't violate the company's rules regarding manipulated media. However, the board suggested that Meta should “reconsider this policy quickly, given the number of elections in 2024.”

Meta says it agrees with the board's "recommendation that providing transparency and additional context is now the better way to address manipulated media and avoid the risk of unnecessarily restricting freedom of speech, so we’ll keep this content on our platforms so we can add labels and context." The company added that, in July, it will stop taking down content purely based on violations of its manipulated video policy. "This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media," Meta's vice president of content policy Monika Bickert wrote in a blog post.

Meta had been applying an “Imagined with AI” label to photorealistic images that users whip up using the Meta AI tool. The updated policy goes beyond the Oversight Board's labeling recommendations, Meta says. "If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context," Bickert wrote.

While the company generally believes that transparency and allowing appropriately labeled AI-generated photos, images and audio to remain on its platforms is the best way forward, it will still delete material that breaks the rules. "We will remove content, regardless of whether it is created by AI or a person, if it violates our policies against voter interference, bullying and harassment, violence and incitement, or any other policy in our Community Standards," Bickert noted.

The Oversight Board told Engadget in a statement that it was pleased Meta took its recommendations on board. It added that it would review the company's implementation of them in a transparency report down the line.

"While it is always important to find ways to preserve freedom of expression while protecting against demonstrable offline harm, it is especially critical to do so in the context of such an important year for elections," the board said. "As such, we are pleased that Meta will begin labeling a wider range of video, audio and image content as 'Made with AI' when they detect AI image indicators or when people indicate they have uploaded AI content. This will provide people with greater context and transparency for more types of manipulated media, while also removing posts which violate Meta’s rules in other ways."

Update 4/5 12:55PM ET: Added comment from The Oversight Board.

This article originally appeared on Engadget at https://www.engadget.com/meta-plans-to-more-broadly-label-ai-generated-content-152945787.html?src=rss

An old SEO scam has a new AI-generated face

Over the years, Engadget has been the target of a common SEO scam, wherein someone claims ownership of an image and demands a link back to a particular website. A lot of other websites would tell you the same thing, but now the scammers are making their fake DMCA takedown notices and threats of legal action look more legit with the help of easily accessible AI tools. 

According to a report by 404Media, the publisher of the website Tedium received a "copyright infringement notice" via email from a law firm called Commonwealth Legal last week. Like older, similar attempts at duping the recipient, the sender said they're reaching out "in relation to an image" connected to their client. In this case, the sender demanded the addition of a "visible and clickable link" to a website called "tech4gods" underneath the photo that was allegedly stolen. 

Since Tedium actually used a photo from a royalty-free provider, the publisher looked into the demand, found the law firm's website and, upon closer inspection, realized that the images of its lawyers were generated by AI. As 404Media notes, the lawyers' headshots had the vacant look in the eyes that's commonly seen in photos created by AI tools. A reverse image search on them returns results from a website with the URL generated.photos, which uses artificial intelligence to make "unique, worry-free model photos... from scratch." The publisher also found that the law firm's listed address, supposedly on the fourth floor of a building, points to a single-story structure on Google Street View. The owner of tech4gods said he had nothing to do with the scam but admitted that he used to buy backlinks for his website.

This is but one example of how bad actors can use AI tools to fool and scam people, and we'll have to stay vigilant, as instances like this will likely only become more common. Reverse image search engines are your friend, but they aren't infallible and won't always help. Deepfakes, for instance, have become a big problem in recent years, as bad actors continue to use them to create convincing videos and audio not just to scam people, but also to spread misinformation online.
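
One programmatic complement to reverse image search is perceptual hashing, which yields similar hashes for visually similar images even after resizing or re-encoding. Below is a minimal sketch using the third-party Pillow and imagehash packages; the file names are hypothetical, and the distance threshold is a rough rule of thumb rather than a calibrated value.

```python
from PIL import Image  # pip install Pillow imagehash
import imagehash

def is_near_duplicate(suspect_path: str, known_path: str,
                      threshold: int = 8) -> bool:
    """Compare perceptual hashes; a small Hamming distance means the
    images are visually similar even if re-encoded or resized."""
    suspect = imagehash.phash(Image.open(suspect_path))
    known = imagehash.phash(Image.open(known_path))
    return suspect - known <= threshold  # "-" is Hamming distance here

# Hypothetical files: a headshot scraped from the "law firm" site and a
# known AI-generated model photo saved for comparison.
print(is_near_duplicate("firm_lawyer.jpg", "known_generated_face.jpg"))
```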

This article originally appeared on Engadget at https://www.engadget.com/an-old-seo-scam-has-a-new-ai-generated-face-100045758.html?src=rss

Meta’s AI image generator struggles to create images of couples of different races

Meta AI is consistently unable to generate accurate images for seemingly simple prompts like “Asian man and Caucasian friend,” or “Asian man and white wife,” The Verge reports. Instead, the company’s image generator seems to be biased toward creating images of people of the same race, even when explicitly prompted otherwise.

Engadget confirmed these results in our own testing of Meta’s web-based image generator. Prompts for “an Asian man with a white woman friend” or “an Asian man with a white wife” generated images of Asian couples. When asked for “a diverse group of people,” Meta AI generated a grid of nine white faces and one person of color. On a couple of occasions, it created a single result that reflected the prompt, but in most cases it failed to depict it accurately.

As The Verge points out, there are other more “subtle” signs of bias in Meta AI, like a tendency to make Asian men appear older while Asian women appeared younger. The image generator also sometimes added “culturally specific attire” even when that wasn’t part of the prompt.

It’s not clear why Meta AI is struggling with these types of prompts, though it’s not the first generative AI platform to come under scrutiny for its depiction of race. Google paused its Gemini image generator's ability to create images of people after it overcorrected for diversity, producing bizarre results in response to prompts about historical figures. Google later explained that its internal safeguards failed to account for situations when diverse results were inappropriate.

Meta didn’t immediately respond to a request for comment. The company has previously described Meta AI as being in “beta” and thus prone to making mistakes. Meta AI has also struggled to accurately answer simple questions about current events and public figures.

This article originally appeared on Engadget at https://www.engadget.com/metas-ai-image-generator-struggles-to-create-images-of-couples-of-different-races-231424476.html?src=rss

Facebook finally adds video controls like a slide bar

The craze around Facebook Live might be a thing of the past, but Meta is still trying to make the platform video-friendly. The company has announced a new video player for uniformly displaying Reels, longer content and Live videos on the Facebook app. 

One of the biggest shifts is that all of Facebook's videos will now appear full-screen — even landscape-oriented ones. Videos will automatically play vertically, but you can now turn your phone on its side to watch most horizontal content across your entire device. 

Like TikTok, Facebook will now offer a slider at the bottom of the screen, letting you quickly scrub through a video. The update also brings some of the same features streaming services like Netflix offer in their apps, such as the option to jump forward or backward by 10 seconds. Meta claims that you will now get "more relevant video recommendations" of all lengths on the video tab and in your feed. The company will also be increasing the number of Reels shown on Facebook.

The video player is rolling out now to Android and iOS users in the United States and Canada, with the new controls launching in the next few weeks. The entire update should be available globally in the coming months.

This article originally appeared on Engadget at https://www.engadget.com/facebook-finally-adds-video-controls-like-a-slide-bar-163014443.html?src=rss