Meta is making it a little easier for creators to avoid the dreaded “Facebook jail.” The company announced a new policy that will allow people with professional accounts to complete in-app “educational training” in order to avoid a strike on their account for first-time violations of the platform’s community standards.
In a blog post announcing the change, Meta notes that it can be frustrating for creators to navigate the company’s penalty system, which restricts Facebook accounts from certain features, including monetization tools, after multiple offenses. Under the new rules, creators who receive a warning for a first-time offense will have the option to remove the warning if they view an in-app explanation of the rule they broke.
Warnings for particularly serious offenses, “such as posting content that includes sexual exploitation, the sale of high-risk drugs, or glorification of dangerous organizations and individuals,” cannot be removed. Instead, the system is geared toward helping creators correct “unintentional mistakes,” according to the company. “We believe focusing on helping people understand why we have removed their content will be more effective at preventing re-offending, giving us not just a fairer approach, but a more effective one,” Meta explains.
It’s not the first time Meta has tried to reform its penalty system, which has been criticized by the Oversight Board and is a frequent source of frustration to users who may get strikes for mundane comments taken out of context. Last year, the company said it was trying to focus more on educating users about its rules rather than restricting their ability to post. Though the latest policy change will only affect creators with professional accounts to start, the company says it is planning to expand it “more broadly in the coming months.”
This article originally appeared on Engadget at https://www.engadget.com/social-media/facebook-will-let-creators-remove-account-warnings-if-they-complete-educational-training-181503330.html?src=rss
Reddit just wrapped up its second earnings call as a public company and CEO Steve Huffman hinted at some significant changes that could be coming to the platform. During the call, the Reddit co-founder said the company would begin testing AI-powered search results later this year.
“Later this year, we will begin testing new search result pages powered by AI to summarize and recommend content, helping users dive deeper into products, shows, games and discover new communities on Reddit,” Huffman said. He didn’t say when those tests would begin, but said it would use both first-party and third-party models.
Huffman noted that search on Reddit has “gone unchanged for a long time” but that it’s a significant opportunity to bring in new users. He also said that search could one day be a significant source of advertising revenue for the company.
Huffman hinted at other non-advertising sources of revenue as well. He suggested that the company might experiment with paywalled subreddits as it looks to monetize new features. “I think the existing, altruistic, free version of Reddit will continue to exist and grow and thrive just the way it has,” Huffman said. “But now we will unlock the door for new use cases, new types of subreddits that can be built that may have exclusive content or private areas, things of that nature.”
A Reddit spokesperson declined to elaborate on Huffman’s remarks. But it’s no secret the company has been eyeing new ways to expand its business since going public earlier this year. It’s struck multimillion-dollar licensing deals with Google and OpenAI, and has blocked search engines that aren’t paying the company.
“Some players in the ecosystem have not been transparent with their use of Reddit’s content, and in those instances, we block access to protect Reddit content and user privacy,” Huffman said. “We want to know where Reddit data is going and what it's being used for, and so those are the terms of engagement.”
This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-ceo-teases-ai-search-features-and-paid-subreddits-225636988.html?src=rss
In the latest example of a troubling industry pattern, NVIDIA appears to have scraped troves of copyrighted content for AI training. On Monday, 404 Media’s Samantha Cole reported that the $2.4 trillion company asked workers to download videos from YouTube, Netflix and other datasets to develop commercial AI projects. The graphics card maker is among the tech companies that appear to have adopted a “move fast and break things” ethos as they race to establish dominance in this feverish, too-often-shameful AI gold rush.
The training was reportedly to develop models for products like its Omniverse 3D world generator, self-driving car systems and “digital human” efforts.
NVIDIA defended its practice in an email to Engadget. A company spokesperson said its research is “in full compliance with the letter and the spirit of copyright law” while claiming IP laws protect specific expressions “but not facts, ideas, data, or information.” The company equated the practice to a person’s right to “learn facts, ideas, data, or information from another source and use it to make their own expression.” Human, computer… what’s the difference?
YouTube doesn’t appear to agree. Spokesperson Jack Malon pointed us to a Bloomberg story from April in which CEO Neal Mohan said that using YouTube videos to train AI models would be a “clear violation” of the platform’s terms. “Our previous comment still stands,” the YouTube policy communications manager wrote to Engadget.
NVIDIA employees who raised ethical and legal concerns about the practice were reportedly told by their managers that it had already been green-lit by the company's highest levels. “This is an executive decision,” Ming-Yu Liu, vice president of research at NVIDIA, replied. “We have an umbrella approval for all of the data.” Others at the company allegedly described its scraping as an “open legal issue” they’d tackle down the road.
In addition to the YouTube and Netflix videos, NVIDIA reportedly instructed workers to train on movie trailer database MovieNet, internal libraries of video game footage and Github video datasets WebVid (now taken down after a cease-and-desist) and InternVid-10M. The latter is a dataset containing 10 million YouTube video IDs.
Some of the data NVIDIA allegedly trained on was only marked as eligible for academic (or otherwise non-commercial) use. HD-VG-130M, a library of 130 million YouTube videos, includes a usage license specifying that it’s only meant for academic research. NVIDIA reportedly brushed aside concerns about academic-only terms, insisting their batches were fair game for its commercial AI products.
To evade detection by YouTube, NVIDIA reportedly downloaded content using virtual machines (VMs) with rotating IP addresses to avoid bans. In response to a worker’s suggestion to use a third-party IP address-rotating tool, another NVIDIA employee reportedly wrote, “We are on Amazon Web Services and restarting a virtual machine instance gives a new public IP. So, that’s not a problem so far.”
404 Media’s full report on NVIDIA’s practices is worth a read.
This article originally appeared on Engadget at https://www.engadget.com/ai/nvidias-ai-team-reportedly-scraped-youtube-netflix-videos-without-permission-204942022.html?src=rss
A guest who appeared on the No Jumper podcast to boast about a hack-and-payback scheme involving his victims’ social media accounts could face federal charges. Idriss Qibaa, also known as “Dani” and “Unlocked,” allegedly ran the social media hacking site Unlocked4Life.com. The US Attorney’s Office in Nevada has filed two felony counts against him for allegedly violating interstate communications laws with threats he sent in text messages to two victims and members of their families, according to documents obtained by 404 Media.
Investigators filed the sealed complaint against Qibaa on July 25 and issued a warrant the following Monday, when he also made his initial appearance in court, according to federal court records.
The criminal complaint states that the FBI received a tip about Qibaa’s alleged extortion scheme on April 1 pointing to an appearance he made on the No Jumper podcast hosted by Adam22, also known as Adam Grandmaison, back in January under his pseudonym “Dani.” Qibaa outlined a financial scheme using over 200 victims’ social media accounts in which he would lock them out of their pages and charge them to regain access.
He also boasted that he made about $600,000 a month from his activities and hired two security guards to follow him.
“You’re making $2 million a month off your Instagram and Telegram,” Qibaa says on the podcast. “I come and I take it away and make you pay for it back and I make it public and I post it and I expose you.”
Qibaa even said on the podcast episode that he pulled the scheme on celebrities who unknowingly kept paying him to get their social media back. He later noted, “I’m very petty,” followed by a menacing laugh.
“I’ve talked to stars who have told me that they’ve paid to get it back 20 times over and over and over they just have to keep paying to get it back,” Qibaa says, “and I’m like you realize what’s happening to you right like the same that’s getting you it back is…you’re getting extorted.”
The criminal complaint details eight victims’ encounters with Qibaa and his services. One, identified as “J.T.,” operated two Instagram accounts: a cannabis news aggregator called “theblacklistxyz” and a cannabis merchandise store under “caliplug,” both of which are currently set to private. J.T. reached out to Qibaa asking if he could obtain a username, and Qibaa quoted a price of $4,000 to $5,000. J.T. declined the offer, and Qibaa responded with threats.
“Qibaa told J.T. that J.T. had wasted Qibaa’s time, blocked J.T.’s Instagram pages and demanded $10,000 to reinstate it,” the complaint reads. “J.T. offered Qibaa $8,500 to reinstate the account, an offer Qibaa accepted.”
The complaint asserts that Qibaa reached out to J.T. two more times. The first time, Qibaa asked J.T. to promote his Instagram page under the username “unlocked4life,” which has since been taken down. J.T. agreed, but when he learned Qibaa had been threatening and extorting other victims, he confronted him and “Qibaa was irate.”
A few months later, Qibaa apparently expanded the scope of his threats to members of J.T.’s family. He threatened to call the victim’s ex-wife’s lawyer and to sic child protective services on his kids. Screenshots of the victim’s phone show Qibaa allegedly identifying the address and phone number of the victim’s sister. He texted another family member and introduced himself as “The guy that’s gonna murder your drug dealer brother. Tell him Unlocked says hi though. We have your entire family’s info.”
Another victim, a journalist and comedian identified by the initials “E.H.,” learned they were a target of Qibaa’s services. Qibaa blocked their Instagram account, the name of which was redacted, at the request of a dentist in California who had treated them. When E.H. reached out to the Unlocked4Life account, they received a reply that read, “Yo its Idriss.” Qibaa then told E.H. to pull up the No Jumper podcast episode featuring his interview. He not only took away the victim’s access to their Instagram account but also threatened to take their Social Security number and “blast it out” if they didn’t pay him $20,000.
According to the complaint, not even restraining orders could make Qibaa leave his victims alone. One victim, identified as “R.B.,” obtained a restraining order from Los Angeles County Superior Court in July, but “Unlocked” responded, “Cute restraining order..last I checked you’re still gonna die.” Then “UNLOCKED UNCENSORED” posted on Telegram, “$50,000 reward for whoever sleeps BO this week.”
Perhaps the most disturbing threats were aimed at several victims whom Qibaa told he’d happily go to jail if payments weren’t made to him. Screenshots of the text chains show a person named “Dani” and “Daniel” telling his victims, “I will come and shoot you myself,” “I’m going to bury you for this shit” and “D., L., J., T., Children-Main Targets,” referring to the victims’ children.
Another text chain shows Qibaa allegedly threatening someone that he would “rather take a life sentence for murdering you then this,” “Idc if I have to shoot you my self [sic]” and “I’ll go to jail happily.” He follows those texts with the threat “Here’s the last guy that came to take photos / came near my home” and sends three pictures: an unidentified bearded man, his car and a photo of the man badly bruised and bloodied on the ground.
Adam22 concluded his podcast interview with “Dani” by saying he was “very excited to see the fallout from this” and “I respect the hustle even though I can’t justify it on a moral level.”
This article originally appeared on Engadget at https://www.engadget.com/hack-and-payback-instagram-scammer-gets-nabbed-after-bragging-about-it-on-a-podcast-202509349.html?src=rss
Google has rolled out updates for Search with the intention of making explicit deepfakes as hard to find as possible. As part of its long-standing and ongoing fight against realistic-looking manipulated images, the company is making it easier for people to get non-consensual fake imagery that features them removed from Search.
It has long been possible for users to request the removal of those kinds of images under Google’s policies. Now, whenever it grants someone’s removal request, Google will also filter all explicit results in similar searches about them. The company’s systems will scan for any duplicates of the offending image and remove them as well. This update could help allay victims’ fears that the same image will pop up again on other websites.
In addition, Google has updated its ranking systems so that if a user specifically searches for explicit deepfakes with a person's name, the results will surface "high-quality, non-explicit content" instead. If there are news articles about that person, for instance, then the results will feature those. Based on Google's announcement, it seems it also has plans to school the user looking for deepfakes by showing them results that discuss their impact on society.
Google doesn't want to wipe out results for legitimate content, like an actor's nude scene, in its bid to banish deepfakes from its results page, though. It admits it still has a lot of work to do when it comes to separating legitimate from fake explicit images. While that's still a work in progress, one of the solutions it has implemented is to demote sites that have received a high volume of removals for manipulated images in Search. That's "a pretty strong signal that it's not a high-quality site," Google explains, adding that the approach has worked well for other types of harmful content in the past.
This article originally appeared on Engadget at https://www.engadget.com/google-makes-it-easier-to-remove-explicit-deepfakes-from-its-search-results-130058499.html?src=rss
As spotted by The New York Times, Elon Musk shared an altered version of Kamala Harris’ campaign video on Friday night that uses a deepfake voiceover to say things like, “I was selected because I am the ultimate diversity hire,” in the VP’s voice. Nowhere does the post alert users to the fact that the video has been manipulated and features comments Harris did not actually say. Under X’s own policies, users “may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (‘misleading media’).”
The post has been up all weekend, amassing over 119 million views by early Sunday afternoon. It was originally posted by another user, @MrReaganUSA, whose post states that it is a parody. Among other things, the voice in the video says, “I had four years under the tutelage of the ultimate deep state puppet, a wonderful mentor, Joe Biden.” Musk’s post — which says only, “This is amazing,” with a laughing emoji — has not been labeled as misleading, as the site sometimes does when it determines media to be manipulated, and no Community Notes have been added, though the NYT notes that several have been suggested.
Altered media is in some cases allowed to stay up on the site and won’t be labeled as misleading, according to X’s policies. That includes memes and satire, “provided these do not cause significant confusion about the authenticity of the media.” The potential for deepfakes to be used to influence voters’ opinions ahead of elections has been a growing concern in recent years. Earlier this year, 20 tech companies signed an agreement pledging to help fight the “deceptive use of AI” in the 2024 elections — including X.
This article originally appeared on Engadget at https://www.engadget.com/elon-musk-shared-a-doctored-harris-campaign-video-on-x-without-labeling-it-as-fake-172617272.html?src=rss
OpenAI on Thursday announced a new AI-powered search engine prototype called SearchGPT. The move marks the company’s entry into a competitive search engine market dominated by Google for decades. On its website, OpenAI described SearchGPT as “a temporary prototype of new AI search features that give you fast and timely answers with clear and relevant sources.” The company plans to test out the product with 10,000 initial users and then roll it into ChatGPT after gathering feedback.
The launch of SearchGPT comes amid growing competition in AI-powered search. Google, the world’s dominant search engine, recently began integrating AI capabilities into its platform. Other startups like the Jeff Bezos-backed Perplexity have also aimed to take on Google and have marketed themselves as “answer engines” that use AI to summarize the internet.
The rise of AI-powered search engines has been controversial. Last month, Perplexity faced criticism for summarizing stories from Forbes and Wired without adequate attribution or backlinks to the publications, as well as for ignoring robots.txt, a way for websites to tell data-scraping crawlers to back off. Earlier this week, Wired publisher Condé Nast reportedly sent a cease-and-desist letter to Perplexity accusing it of plagiarism.
Perhaps because of these tensions, OpenAI appears to be taking a more collaborative approach with SearchGPT. The company's blog post emphasizes that the prototype was developed in partnership with various news organizations and includes quotes from the CEOs of The Atlantic and News Corp, two of many publishers that OpenAI has struck licensing deals with.
“SearchGPT is designed to help users connect with publishers by prominently citing and linking to them in searches,” the company’s blog post says. “Responses have clear, in-line, named attribution and links so users know where information is coming from and can quickly engage with even more results in a sidebar with source links.” OpenAI also noted that publishers will have control over how their content is presented in SearchGPT and can opt out of having their content used for training OpenAI's models while still appearing in search results.
SearchGPT's interface features a prominent textbox asking users, "What are you searching for?" Unlike traditional search engines like Google that provide a list of links, SearchGPT categorizes the results with short descriptions and visuals.
For example, when searching for information about music festivals, the engine provides brief descriptions of events along with links for more details. Some users have pointed out, however, that the search engine is already presenting inaccurate information in its results.
AI company Runway reportedly scraped “thousands” of YouTube videos and pirated versions of copyrighted movies without permission. 404 Media obtained alleged internal spreadsheets suggesting the AI video-generating startup trained its Gen-3 model using YouTube content from channels like Disney, Netflix, Pixar and popular media outlets.
An alleged former Runway employee told the publication the company used the spreadsheet to flag lists of videos it wanted in its database. It would then download them without detection using open-source proxy software to cover its tracks. One sheet lists simple keywords like astronaut, fairy and rainbow, with footnotes indicating whether the company had found corresponding high-quality videos to train on. For example, the term “superhero” includes a note reading, “Lots of movie clips.” (Indeed.)
Other notes show Runway flagged YouTube channels for Unreal Engine, filmmaker Josh Neuman and a Call of Duty fan page as good sources for “high movement” training videos.
“The channels in that spreadsheet were a company-wide effort to find good quality videos to build the model with,” the former employee told 404 Media. “This was then used as input to a massive web crawler which downloaded all the videos from all those channels, using proxies to avoid getting blocked by Google.”
A list of nearly 4,000 YouTube channels, compiled in one of the spreadsheets, flagged “recommended channels” from CBS New York, AMC Theaters, Pixar, Disney Plus, Disney CD and the Monterey Bay Aquarium. (Because no AI model is complete without otters.)
In addition, Runway reportedly compiled a separate list of videos from piracy sites. A spreadsheet titled “Non-YouTube Source” includes 14 links to sources like an unauthorized online archive of Studio Ghibli films, anime and movie piracy sites, a fan site displaying Xbox game videos and the animated streaming site kisscartoon.sh.
In what could be viewed as a damning confirmation that the company used the training data, 404 Media found that prompting the video generator with the names of popular YouTubers listed in the spreadsheet spit out results bearing an uncanny resemblance to them. Crucially, entering the same names in Runway’s older Gen-2 model — trained before the data in the spreadsheets was allegedly collected — generated “unrelated” results like generic men in suits. Additionally, after the publication contacted Runway asking about the YouTubers’ likenesses appearing in results, the AI tool stopped generating them altogether.
“I hope that by sharing this information, people will have a better understanding of the scale of these companies and what they’re doing to make ‘cool’ videos,” the former employee told 404 Media.
When contacted for comment, a YouTube representative pointed Engadget to an interview its CEO Neal Mohan gave to Bloomberg in April. In that interview, Mohan described training on its videos as a “clear violation” of its terms. “Our previous comments on this still stand,” YouTube spokesperson Jack Malon wrote to Engadget.
Runway did not respond to a request for comment by the time of publication.
At least some AI companies appear to be in a race to normalize their tools and establish market leadership before users — and courts — catch onto how their sausage was made. Training with permission through licensed deals is one thing, and that’s another tactic companies like OpenAI have recently adopted. But it’s a much sketchier (if not illegal) proposition to treat the entire internet — copyrighted material and all — as up for grabs in a breakneck race for profit and dominance.
This article originally appeared on Engadget at https://www.engadget.com/ai-video-startup-runway-reportedly-trained-on-thousands-of-youtube-videos-without-permission-182314160.html?src=rss
Netflix has landed a notable new leader for its rapidly expanding gaming endeavors. Variety reported that the streaming company has hired Alain Tascan as its new president of games. Before joining Netflix, Tascan was executive vice president for game development at a little studio you may have heard of called Epic Games. In that role, he oversaw first-party development for some of the company's hugely successful titles, such as Fortnite, Lego Fortnite, Rocket League and Fall Guys.
The company is also recruiting talent on the creative side. Since launching its games initiative in 2021, Netflix has acquired notable indie studios Night School, Boss Fight, Next Games and Spry Fox, and has brought a large number of acclaimed indie games to mobile. In its second-quarter earnings call, Netflix execs revealed that it has more than 80 games currently in development, which would nearly double its current library of about 100 titles.
Many of these new projects are interactive fiction based on Netflix shows and movies, with the goal of giving fans new ways to engage with their favorite titles. "I think our opportunity here to serve super fandom with games is really fun and remarkable," Co-CEO Ted Sarandos said during the call. We also learned that a multiplayer Squid Game project will be coming to Netflix Games later this year.
Although Netflix is making a sizable investment in its games division, users haven't been flocking to its titles yet. In 2022, the library had about 1.7 million daily users, and its games had been downloaded 23.3 million times.
This article originally appeared on Engadget at https://www.engadget.com/netflix-hires-former-epic-games-exec-as-new-president-of-games-212614285.html?src=rss