YouTube is adding a new For You section to creator channels. The TikTok-like feature will be personalized to each visitor, recommending content from a channel based on each viewer’s watch history. The company’s support account on X posted (via The Verge) about the upcoming feature, which launches on November 20.
Creators can prepare for the For You section by reviewing their settings before its release date. You can toggle the feature’s availability and change its settings in YouTube Studio by selecting Customization > Layout and then checking each of the prompts under the For You section. From there, you can choose whether full-length videos, Shorts and livestreams are eligible for the feature.
✨ introducing the “For you” section of your channel Home tab that recommends a mix of content from your channel to viewers based on their watch history
🚨 creators: review your settings & select formats before we roll out to viewers on 11/20
The company had already teased the feature in a May video posted to its Creator Insider channel, a hub where YouTube employees have “direct conversations” with YouTubers. In that clip, product manager Ann Katrin Kuessner framed the feature as an alternative to the static home page. “You’re trying to find a configuration that is one-size-fits-all since the channel page looks the same today for every person that visits it,” Kuessner said, summarizing a problem creators face. She said the feature’s personalization “will be especially effective if your channel has multiple topics, languages, or content formats.”
This article originally appeared on Engadget at https://www.engadget.com/youtube-will-soon-show-visitors-a-personalized-for-you-section-on-channel-pages-213402443.html?src=rss
Lego just unveiled another set based on the Marvel Cinematic Universe, and boy is it a doozy. The massive 5,200-piece Avengers Tower set (76269) measures nearly three feet tall and ships with 31 minifigures, including Marvel Studios head honcho Kevin Feige. It also includes several dioramas that let you create many of the important scenes that took place in Avengers Tower, from the Chitauri battle of the original film to the party scene from Age of Ultron and beyond.
The set releases on November 24 and will cost an eye-watering $500. Still, this is the 17th-largest set the company has ever made and the one with the most minifigures. Beyond Feige, other figures include Captain America, Thor, Loki, some Ultron drones and just about every other major character that appeared in Avengers Tower throughout the films. There’s even an appropriately scaled Hulk.
In addition to the tower itself, which actually opens to allow for interior sequences, the set ships with a Quinjet and a Chitauri invasion ship. You also get plenty of accessories to help pose the minifigures in a variety of action-packed scenarios. About the only thing missing is the shawarma shop down the street.
This article originally appeared on Engadget at https://www.engadget.com/legos-5200-piece-avengers-tower-set-ships-with-31-minifigures-including-kevin-feige-193359347.html?src=rss
We Met in Virtual Reality, a documentary shot entirely inside VRChat (now available to stream on Max), was one of the highlights of last year's Sundance Film Festival. It deftly showed how people can form genuine friendships and romantic connections inside of virtual worlds — something Mark Zuckerberg could only dream of with his failed metaverse concept. Now the director of that film, Joe Hunting, is making an even bigger bet on virtual reality: He's launching Painted Clouds, a production studio devoted to making films and series set within VR.
What's most striking about We Met in Virtual Reality, aside from the Furries and scantily-clad anime avatars, is that it looks like a traditional documentary. Hunting used VRCLens, a tool developed by the developer Hirabiki, to perform cinematic techniques like pulling focus, deliberate camera movements and executing aerial drone shots. Hunting says he aims to "build upon VRCLens to give it more scope and make it even more accessible to new filmmakers," as well as using it for his own productions.
Additionally, Hunting is launching "Painted Clouds Park," a world in VRChat that can be used for production settings and events. It's there that he also plans to run workshops and media events to teach people about the possibilities of virtual reality filmmaking.
His next project, which is set to begin pre-production next year, will be a dramedy focused on a group of online friends exploring an ongoing mystery. Notably, Hunting says it will also be shot with original avatars and production environments, not just cookie-cutter VRChat worlds. His aim is to make it look like a typical animated film — the only difference is that it'll be shot inside of VR. It's practically an evolution of the machinima concept, which involved shooting footage inside of game engines, using existing assets.
"Being present in a headset and being in the scene yourself, holding the camera and capturing the output, I find creates a much more immersive filmmaking experience for me, and a much more playful and joyful one, too," Hunting said. "I can look up and everyone is their characters. They're not wearing mo-cap [suits] to represent the characters. They just are embodying them. Obviously, that experience doesn't translate completely on screen as an audience member. But in terms of directing and the kind of relationship I can build with my actors and the team around me, I find that so fun."
Throughout all of his work, including We Met in Virtual Reality and earlier shorts, Hunting has been focused on capturing virtual worlds for playback on traditional 2D screens. But looking forward, he says he's interested in exploring 360-degree immersive VR projects as well. That could end up taking the form of behind-the-scenes footage for his next VR film, or an experimental project down the line. In addition to his dramedy project, Hunting is also working on a short VR documentary, as well as a music video.
This article originally appeared on Engadget at https://www.engadget.com/the-director-of-sundance-darling-we-met-in-virtual-reality-launches-a-vr-studio-164532412.html?src=rss
Google is rolling out a new feature that allows advertisers to create AI-generated content using the same technology as the Bard chatbot, confirming a report from earlier this year. The feature is now available in beta on Google's Performance Max advertising product, allowing US advertisers to create and scale text and image assets for campaigns using AI, the company announced in a blog post.
Performance Max is already an AI-powered product that works across multiple Google surfaces, including YouTube, Search, Display and others. It optimizes ads by analyzing performance data, and the new feature supplements that by using AI to assist in asset creation as well. As Google puts it, the features will allow advertisers to quickly create high-quality, personalized assets across various Google platforms.
"Asset variety is a key ingredient for a successful Performance Max campaign," wrote Google's Pallavi Naresh. "You’ve told us that creating and scaling assets can be one of the hardest parts of building and optimizing a cross-channel campaign. Now, you’ll be able to generate new text and image assets for your campaign in just a few clicks."
Much like Bard or ChatGPT, users feed prompts to the AI, and it creates unique images and text for each business. Marketers can review and edit any assets created by the system prior to publication. It can be used to create versions of the same ad, or build new ads from scratch. All AI-generated imagery contains a visible watermark and is tagged as such. "We also have guardrails in place to prevent our systems from engaging with inappropriate or sensitive prompts or suggesting policy-violating creatives," Naresh wrote.
The feature should help marketers create advertising materials more quickly, while of course helping Google post those ads and make money more quickly. In that sense, it's pretty much a perfect AI use case for Google, which makes the vast majority of its revenue from advertising. The new system is currently in beta and only available in the US, but is expected to roll out more widely by the end of 2023.
This article originally appeared on Engadget at https://www.engadget.com/google-is-rolling-out-tools-that-let-advertisers-create-ai-generated-content-080255864.html?src=rss
We may get official details about Grand Theft Auto VI very, very soon. Following a Bloomberg report that said Rockstar Games would announce the next entry in the GTA franchise as early as this week, Rockstar confirmed it would release a trailer for the forthcoming game in early December, as part of its 25th anniversary celebration. It's one of the most anticipated games for the current crop of consoles, especially since the fifth main installment in the series — the second-best selling video game of all time, as Bloomberg notes — came out way back in 2013.
We are very excited to let you know that in early December, we will release the first trailer for the next Grand Theft Auto. We look forward to many more years of sharing these experiences with all of you.
While Rockstar has yet to launch the title, some fans may have already gotten a glimpse of early-days gameplay footage thanks to a leak a hacker posted online in 2022. It contained 90 seconds of gameplay from a GTA VI test build, showing one of the two playable protagonists, a female character named Lucia, robbing a store. Another clip showed the other playable character riding the "Vice City Metro," indicating that its story takes place in Rockstar's fictionalized version of Miami. The developer later confirmed the contents of the leak and said that the game's creation would continue "as planned."
Rockstar may reveal GTA VI's release period alongside the trailer next month, but its parent company Take-Two previously hinted that it's coming out sometime in 2024.
Update, November 8, 2023, 8:15AM ET: This story has been updated to note that Rockstar has confirmed it'll release a trailer for the next Grand Theft Auto game in December.
This article originally appeared on Engadget at https://www.engadget.com/the-first-grand-theft-auto-vi-trailer-will-arrive-in-early-december-045219564.html?src=rss
It's been rumored for years, but Nintendo still managed to surprise us with a late-day announcement: a live-action film based on The Legend of Zelda is in the works, directed by Wes Ball. Ball's most recent films are the Maze Runner series, the latest of which was released in 2018. Nintendo's Shigeru Miyamoto is producing the film along with Avi Arad, who has produced or executive produced loads of Marvel movies over the last decade-plus.
Surprisingly, the film is being co-financed by Nintendo and none other than Sony Pictures Entertainment. You know, part of the same company that owns PlayStation. Nintendo was quick to point out that it is financing more than 50 percent of the film, but that Sony Pictures Entertainment will be the theatrical distributor.
Aside from that, there are no other details beyond this tweet from Miyamoto:
This is Miyamoto. I have been working on the live-action film of The Legend of Zelda for many years now with Avi Arad-san, who has produced many mega hit films. [1]
Miyamoto goes on to say that they have officially started development on the film with Nintendo "heavily involved" in the production. He also notes that it'll "take time" before its completion but that he hopes fans look forward to seeing it.
Way back in 2015, we heard rumors from the Wall Street Journal that Nintendo and Netflix were making a live-action Zelda show, but that never came together (and there's a pretty weird story around why). But the success of The Super Mario Bros. Movie was perhaps the final push Nintendo needed to make this project a reality. And while there's plenty of time for things to go wrong between now and the movie hitting theaters, this Zelda fan is cautiously excited about the prospect of another classic Nintendo franchise making its way to the big screen.
This article originally appeared on Engadget at https://www.engadget.com/nintendo-is-making-a-live-action-legend-of-zelda-movie-221618064.html?src=rss
Xbox has teamed up with a startup called Inworld AI to create a generative AI toolset that developers can use to create games. It's a multi-year collaboration, which the Microsoft-owned brand says can "assist and empower creators in dialogue, story and quest design." Specifically, the partners are looking to develop an "AI design copilot" that can turn prompts into detailed scripts, dialogue trees, quests and other game elements in the same way people can type ideas into generative AI chatbots and get detailed responses in return. They're also going to work on an "AI character runtime engine" that developers can plug into their actual games, allowing players to generate new stories, quests and dialogues as they go.
Inworld's website says its technology can "craft characters with distinct personalities and contextual awareness that stay in-world." Apparently, it can provide developers with a "fully integrated character engine for AI NPCs that goes beyond large language models (LLMs)." The image above is from the Droid Maker tool it developed in collaboration with Lucasfilm's storytelling studio ILM Immersive when it was accepted into the Disney Accelerator program. As Kotaku notes, though, the company's tech has yet to ship with a major game release, and it has mostly been used for mods.
Developers are understandably wary about these upcoming tools. There are growing concerns among creatives about companies using their work to train generative AI without permission — a group of authors, including John Grisham and George R.R. Martin, even sued OpenAI, accusing the company of infringing on their copyright. And then, of course, there's the ever-present worry that developers could decide to lay off writers and designers to cut costs.
Xbox believes, however, that these tools can "help make it easier for developers to realize their visions, try new things, push the boundaries of gaming today and experiment to improve gameplay, player connection and more." In the brand's announcement, Haiyan Zhang, General Manager of Gaming AI, said: "We will collaborate and innovate with game creators inside Xbox studios as well as third-party studios as we develop the tools that meet their needs and inspire new possibilities for future games."
This article originally appeared on Engadget at https://www.engadget.com/microsoft-will-let-xbox-game-makers-use-ai-tools-for-story-design-and-npcs-083027899.html?src=rss
Facebook is no stranger to moderating and mitigating misinformation on its platform, having long employed machine learning and artificial intelligence systems to help supplement its human-led moderation efforts. At the start of October, the company extended its machine learning expertise to its advertising efforts with an experimental set of generative AI tools that can perform tasks like generating backgrounds, adjusting images and creating captions for an advertiser's video content. Reuters reported Monday that Meta will specifically not make those tools available to political marketers ahead of what is expected to be a brutal and divisive national election cycle.
Meta's decision to bar the use of generative AI is in line with much of the social media ecosystem, though, as Reuters is quick to point out, the company "has not yet publicly disclosed the decision in any updates to its advertising standards." TikTok and Snap both ban political ads on their networks, Google employs a "keyword blacklist" to prevent its generative AI advertising tools from straying into political speech and X (formerly Twitter) is, well, you've seen it.
Meta does allow for a wide latitude of exceptions to this rule. The tool ban only extends to "misleading AI-generated video in all content, including organic non-paid posts, with an exception for parody or satire," per Reuters. Those exceptions are currently under review by the company's independent Oversight Board as part of a case in which Meta left up an "altered" video of President Biden because, the company argued, it was not generated by an AI.
Facebook, along with other leading Silicon Valley AI companies, agreed in July to voluntary commitments set out by the White House enacting technical and policy safeguards in the development of their future generative AI systems. Those include expanding adversarial machine learning (aka red-teaming) efforts to root out bad model behavior, sharing trust and safety information both within the industry and with the government, as well as development of a digital watermarking scheme to authenticate official content and make clear that it is not AI-generated.
This article originally appeared on Engadget at https://www.engadget.com/meta-reportedly-wont-make-its-ai-advertising-tools-available-to-political-marketers-010659679.html?src=rss
YouTube announced two new experimental generative AI features on Monday. YouTube Premium subscribers can soon try AI-generated comment summaries and a chatbot that answers your questions about what you’re watching. The features will be opt-in, so you won’t see them unless you’re a paid member who signs up for the experiments during their test periods.
The AI-powered summaries will organize comments into “easily digestible themes.” In a MrBeast video YouTube used as an example, the tool generated topics including “People love Bryan the bird,” “Lazarbeam should be in more videos,” “No submarine” and “More 7 day challenges.” You can tap on the topic to view the complete list of associated comments. The tool will only run “on a small number of videos in English” with large comment sections.
If you’re worried about YouTube’s summaries spiraling out of control the way the platform’s comment sections often do, the company says it won’t pull content from unpublished messages, those held for review, any containing blocked words or those from blocked users. Further, creators can use the tool to delete individual comments if they see problematic (or otherwise unwanted) discussions about their videos.
Meanwhile, YouTube’s conversational AI tool gives you a chatbot trained on whichever video you’re watching. Powered by large language models (LLMs), the assistant lets you “dive in deeper” by asking questions about the content and fishing for related recommendations. The company says the AI tool, which appears similar to chatting with Bard, draws on info from YouTube and the web, providing answers without interrupting playback. Eligible users can find it under a new “Ask” button in the YouTube app for Android.
Starting today, YouTube Premium subscribers can opt into the comment summarizer on YouTube’s experiments page. However, the company says you won’t see the “Topics” option for all videos. In addition, the conversational AI tool is only available now “to a small number of people on a subset of videos,” but YouTube Premium subscribers with Android devices will be able to sign up to try it in the coming weeks. The company warns the experimental features “may not always get it right,” a description that can equally apply to Google’s other AI experiments.
This article originally appeared on Engadget at https://www.engadget.com/youtube-tests-ai-generated-comment-summaries-and-a-chatbot-for-videos-213405231.html?src=rss
The internet's "enshittification," as veteran journalist and privacy advocate Cory Doctorow describes it, began decades before TikTok made the scene. Elder millennials remember the good old days of Napster — followed by the much worse old days of Napster being sued into oblivion along with Grokster and the rest of the P2P sharing ecosystem, until we were left with a handful of label-approved, catalog-sterilized streaming platforms like Pandora and Spotify. Three cheers for corporate copyright litigation.
In his new book The Internet Con: How to Seize the Means of Computation, Doctorow examines the modern social media landscape, cataloging and illustrating the myriad failings and short-sighted business decisions of the Big Tech companies operating the services that promised us the future but just gave us more Nazis. We have both an obligation and responsibility to dismantle these systems, Doctorow argues, and a means to do so with greater interoperability. In this week's Hitting the Books excerpt, Doctorow examines the aftermath of the lawsuits against P2P sharing services, as well as the role that the Digital Millennium Copyright Act's "notice-and-takedown" reporting system and YouTube's "ContentID" scheme play on modern streaming sites.
The harms from notice-and-takedown itself don’t directly affect the big entertainment companies. But in 2007, the entertainment industry itself engineered a new, more potent form of notice-and-takedown that manages to inflict direct harm on Big Content, while amplifying the harms to the rest of us.
That new system is “notice-and-stay-down,” a successor to notice-and-takedown that monitors everything every user uploads or types and checks to see whether it is similar to something that has been flagged as a copyrighted work. This has long been a legal goal of the entertainment industry, and in 2019 it became a feature of EU law, but back in 2007, notice-and-stay-down made its debut as a voluntary modification to YouTube, called “Content ID.”
Some background: in 2007, Viacom (part of CBS) filed a billion-dollar copyright suit against YouTube, alleging that the company had encouraged its users to infringe on its programs by uploading them to YouTube. Google — which acquired YouTube in 2006 — defended itself by invoking the principles behind Betamax and notice-and-takedown, arguing that it had lived up to its legal obligations and that Betamax established that “inducement” to copyright infringement didn’t create liability for tech companies (recall that Sony had advertised the VCR as a means of violating copyright law by recording Hollywood movies and watching them at your friends’ houses, and the Supreme Court decided it didn’t matter).
But with Grokster hanging over Google’s head, there was reason to believe that this defense might not fly. There was a real possibility that Viacom could sue YouTube out of existence — indeed, profanity-laced internal communications from Viacom — which Google extracted through the legal discovery process — showed that Viacom execs had been hotly debating which one of them would add YouTube to their private empire when Google was forced to sell YouTube to the company.
Google squeaked out a victory, but was determined not to end up in a mess like the Viacom suit again. It created Content ID, an “audio fingerprinting” tool that was pitched as a way for rights holders to block, or monetize, the use of their copyrighted works by third parties. YouTube allowed large (at first) rightsholders to upload their catalogs to a blocklist, and then scanned all user uploads to check whether any of their audio matched a “claimed” clip.
Once Content ID determined that a user was attempting to post a copyrighted work without permission from its rightsholder, it consulted a database to determine the rights holder’s preference. Some rights holders blocked any uploads containing audio that matched theirs; others opted to take the ad revenue generated by that video.
There are lots of problems with this. Notably, there’s the inability of Content ID to determine whether a third party’s use of someone else’s copyright constitutes “fair use.” As discussed, fair use is the suite of uses that are permitted even if the rightsholder objects, such as taking excerpts for critical or transformational purposes. Fair use is a “fact intensive” doctrine—that is, the answer to “Is this fair use?” is almost always “It depends, let’s ask a judge.”
Computers can’t sort fair use from infringement. There is no way they ever can. That means that filters block all kinds of legitimate creative work and other expressive speech — especially work that makes use of samples or quotations.
But it’s not just creative borrowing, remixing and transformation that filters struggle with. A lot of creative work is similar to other creative work. For example, a six-note phrase from Katy Perry’s 2013 song “Dark Horse” is effectively identical to a six-note phrase in “Joyful Noise,” a 2008 song by a much less well-known Christian rapper called Flame. Flame and Perry went several rounds in the courts, with Flame accusing Perry of violating his copyright. Perry eventually prevailed, which is good news for her.
But YouTube’s filters struggle to distinguish Perry’s six-note phrase from Flame’s (as do the executives at Warner Chappell, Perry’s publisher, who have periodically accused people who post snippets of Flame’s “Joyful Noise” of infringing on Perry’s “Dark Horse”). Even when the similarity isn’t as pronounced as in Dark, Joyful, Noisy Horse, filters routinely hallucinate copyright infringements where none exist — and this is by design.
To understand why, first we have to think about filters as a security measure — that is, as a measure taken by one group of people (platforms and rightsholder groups) who want to stop another group of people (uploaders) from doing something they want to do (upload infringing material).
It’s pretty trivial to write a filter that blocks exact matches: the labels could upload losslessly encoded pristine digital masters of everything in their catalog, and any user who uploaded a track that was digitally or acoustically identical to that master would be blocked.
But it would be easy for an uploader to get around a filter like this: they could just compress the audio ever-so-slightly, below the threshold of human perception, and this new file would no longer match. Or they could cut a hundredth of a second off the beginning or end of the track, or omit a single bar from the bridge, or any of a million other modifications that listeners are unlikely to notice or complain about.
Filters don’t operate on exact matches: instead, they employ “fuzzy” matching. They don’t just block the things that rights holders have told them to block — they block stuff that’s similar to those things that rights holders have claimed. This fuzziness can be adjusted: the system can be made more or less strict about what it considers to be a match.
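The adjustable fuzziness described above can be sketched in a few lines. This is an illustrative toy, not Content ID's actual algorithm: the `fingerprint` function, the sample values and the thresholds are all invented for the example.

```python
# Toy fingerprint matcher: quantize audio "samples" into coarse buckets,
# then compare fingerprints by counting mismatched positions.
# (Real systems use far more robust acoustic fingerprints.)

def fingerprint(samples, levels=4):
    # Map each value in [0, 1) to one of `levels` coarse buckets.
    return tuple(min(int(s * levels), levels - 1) for s in samples)

def fuzzy_match(a, b, max_mismatches):
    # A looser threshold (higher max_mismatches) catches more evasive
    # re-encodings, but also flags more coincidentally similar audio.
    mismatches = sum(x != y for x, y in zip(a, b))
    return mismatches <= max_mismatches

master = [0.10, 0.35, 0.60, 0.85, 0.60, 0.35, 0.10, 0.35]
# A re-encoded upload: inaudibly shifted, so the raw data no longer match.
evasion = [s + 0.01 for s in master]
# An unrelated clip that happens to share most of its contour.
coincidence = [0.10, 0.35, 0.60, 0.85, 0.10, 0.85, 0.10, 0.35]

fp_master = fingerprint(master)

print(master == evasion)                                    # exact match defeated
print(fuzzy_match(fp_master, fingerprint(evasion), 2))      # fuzzy match holds
print(fuzzy_match(fp_master, fingerprint(coincidence), 1))  # strict: no claim
print(fuzzy_match(fp_master, fingerprint(coincidence), 3))  # loose: false positive
```

Turning `max_mismatches` from 1 up to 3 is all it takes to convert a coincidental resemblance into a claimed match, and that is the dial rightsholder groups push toward "loose."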
Rightsholder groups want the matches to be as loose as possible, because somewhere out there, there might be someone who’d be happy with a very fuzzy, truncated version of a song, and they want to stop that person from getting the song for free. The looser the matching, the more false positives. This is an especial problem for classical musicians: their performances of Bach, Beethoven and Mozart inevitably sound an awful lot like the recordings that Sony Music (the world’s largest classical music label) has claimed in Content ID. As a result, it has become nearly impossible to earn a living off of online classical performance: your videos are either blocked, or the ad revenue they generate is shunted to Sony. Even teaching classical music performance has become a minefield, as painstakingly produced, free online lessons are blocked by Content ID or, if the label is feeling generous, the lessons are left online but the ad revenue they earn is shunted to a giant corporation, stealing the creative wages of a music teacher.
Notice-and-takedown law didn’t give rights holders the internet they wanted. What kind of internet was that? Well, though entertainment giants said all they wanted was an internet free from copyright infringement, their actions — and the candid memos released in the Viacom case — make it clear that blocking infringement is a pretext for an internet where the entertainment companies get to decide who can make a new technology and how it will function.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-internet-con-cory-doctorow-verso-153018432.html?src=rss