Lego’s 5,200-piece Avengers Tower set ships with 31 minifigures, including Kevin Feige

Lego just unveiled another set based on the Marvel Cinematic Universe, and boy is it a doozy. The massive 5,200-piece Avengers Tower set (76269) measures nearly three feet tall and ships with 31 minifigures, including Marvel Studios head honcho Kevin Feige. It also includes several dioramas that let you create many of the important scenes that took place in Avengers Tower, from the Chitauri battle of the original film to the party scene from Age of Ultron and beyond.

The set releases on November 24 and will cost an eye-watering $500. Still, this is the 17th-largest set the company has ever made and the one with the most minifigures. Beyond Feige, other figures include Captain America, Thor, Loki, some Ultron drones and just about every other major character that appeared in Avengers Tower throughout the films. There’s even an appropriately scaled Hulk.

In addition to the tower itself, which actually opens to allow for interior sequences, the set ships with a Quinjet and a Chitauri invasion ship. You also get plenty of accessories to help pose the minifigures in a variety of action-packed scenarios. About the only thing missing is the shawarma shop down the street.

As previously mentioned, this isn’t Lego’s first MCU-adjacent set. The company has released a giant Hulkbuster suit from Age of Ultron, a battle scene based on Black Panther: Wakanda Forever and Iron Man’s armory, among others. It has also shipped some sets based on other Marvel properties, like a Miles Morales figure and a Daily Bugle collection. Beyond superheroes, Lego launched a nifty Pac-Man arcade console set this year and one based on the Atari 2600.

This article originally appeared on Engadget at https://www.engadget.com/legos-5200-piece-avengers-tower-set-ships-with-31-minifigures-including-kevin-feige-193359347.html?src=rss

Spotify subscribers in the US now get 15 hours of audiobooks every month

In addition to music and podcasts, Spotify has recently been working to cement its presence in the audiobook space. Today, the company announced Premium users in the US will be able to stream 15 hours of free audiobook content monthly as a part of their subscription. This offering was previously only available to Premium users in the UK and Australia.

The company says there's no need for users to do anything. Audiobooks that are available to stream will be marked as “Included in Premium” and users can hit play right away. Spotify notes that 15 hours is roughly two average audiobooks per month. If you end up hitting the limit, you can purchase a 10-hour top-up.

The company says its Spotify Premium audiobook catalog now has something for everyone. Users with a Premium subscription can access over 70 percent of today's bestsellers, including Britney Spears’ The Woman in Me and Jesmyn Ward’s Let Us Descend. There are also many classic pieces of literature, like Emily Brontë’s Wuthering Heights. Spotify believes its listeners will "love exploring the depths of our 200,000-strong catalog, unearthing genres from 'cozy mystery' to 'historical romance.'"

Books that aren't eligible for free streaming will need to be purchased outright; those titles show a lock on the play button. To make a purchase, you'll follow a link to your browser, and once that's completed, you'll be taken back to the app to listen to your new book. All your purchased titles will show up in your library and be available for offline listening. Spotify also lets you control playback speed so you can listen at your own pace.

It makes sense that Spotify has added audiobooks to its app, but there are a few things that may deter users from tapping in. Yes, having a single place to listen to your music, podcasts and books is convenient, but unlike with music and podcasts, there's a streaming limit here. Additionally, only a limited number of books are free to stream with your $11 subscription. While Audible also charges a subscription fee, its users get one book to own every month, which may make it the more appealing and affordable option for some.

This article originally appeared on Engadget at https://www.engadget.com/spotify-subscribers-in-the-us-now-get-15-hours-of-audiobooks-every-month-192000398.html?src=rss

The director of Sundance darling ‘We Met in Virtual Reality’ launches a VR studio

We Met in Virtual Reality, a documentary shot entirely inside VRChat (now available to stream on Max), was one of the highlights of last year's Sundance Film Festival. It deftly showed how people can form genuine friendships and romantic connections inside of virtual worlds — something Mark Zuckerberg could only dream of with his failed metaverse concept. Now the director of that film, Joe Hunting, is making an even bigger bet on virtual reality: He's launching Painted Clouds, a production studio devoted to making films and series set within VR.

What's most striking about We Met in Virtual Reality, aside from the furries and scantily clad anime avatars, is that it looks like a traditional documentary. Hunting used VRCLens, a tool built by the developer Hirabiki, to achieve cinematic techniques like pulling focus, executing deliberate camera movements and capturing aerial drone-style shots. Hunting says he aims to "build upon VRCLens to give it more scope and make it even more accessible to new filmmakers," as well as to use it for his own productions.

Additionally, Hunting is launching "Painted Clouds Park," a world in VRChat that can be used for production settings and events. It's there that he also plans to run workshops and media events to teach people about the possibilities of virtual reality filmmaking.

His next project, which is set to begin pre-production next year, will be a dramedy focused on a group of online friends exploring an ongoing mystery. Notably, Hunting says it will also be shot with original avatars and production environments, not just cookie-cutter VRChat worlds. His aim is to make it look like a typical animated film — the only difference is that it'll be shot inside of VR. It's practically an evolution of the machinima concept, which involved shooting footage inside of game engines, using existing assets.

"Being present in a headset and being in the scene yourself, holding the camera and capturing the output, I find creates a much more immersive filmmaking experience for me, and a much more playful and joyful one, too," Hunting said. "I can look up and everyone is their characters. They're not wearing mo-cap [suits] to represent the characters. They just are embodying them. Obviously, that experience doesn't translate completely on screen as an audience member. But in terms of directing and the kind of relationship I can build with my actors and the team around me, I find that so fun."

Throughout all of his work, including We Met in Virtual Reality and earlier shorts, Hunting has focused on capturing virtual worlds for playback on traditional 2D screens. But looking forward, he says he's interested in exploring 360-degree immersive VR projects as well. That could take the form of behind-the-scenes footage for his next VR film, or an experimental project further down the line. In addition to his dramedy, Hunting is also working on a short VR documentary, as well as a music video.

This article originally appeared on Engadget at https://www.engadget.com/the-director-of-sundance-darling-we-met-in-virtual-reality-launches-a-vr-studio-164532412.html?src=rss

ChatGPT was down for more than 90 minutes after a major OpenAI API outage

OpenAI’s extremely popular ChatGPT service went down for its 100 million weekly active users just before 9AM ET. OpenAI acknowledged the outage and said that it was also impacting the company’s API services. The service was restored at around 10:50AM ET.

Instead of a working platform, ChatGPT users were greeted with a warning message saying the service was “at capacity right now.” OpenAI wrote on its status page that it had “identified an issue resulting in high error rates across the API and ChatGPT, and we are working on remediation.” All told, the outage lasted around two hours.

The services were also impacted for a few hours last night, though that was a partial outage that didn’t affect all users. Before this week, OpenAI’s chatbot platform had experienced very few, if any, operational issues. The service has grown at a steady clip, reaching that aforementioned 100 million weekly user milestone without any major hiccups, and there are over 2 million developers on the API side of things.

OpenAI has been making announcements left and right, teasing customizable AI bots that anyone can create and even considering making its own chips to power the service. 

Update, November 8, 2023, 10:55AM ET: This story has been updated with information regarding ChatGPT's return to operational status.

This article originally appeared on Engadget at https://www.engadget.com/chatgpt-is-down-after-a-major-openai-outage-154223315.html?src=rss

The first Grand Theft Auto VI trailer will arrive in early December

We may get official details about Grand Theft Auto VI very, very soon. Following a Bloomberg report that said Rockstar Games would announce the next entry in the GTA franchise as early as this week, Rockstar confirmed it would release a trailer for the forthcoming game in early December, as part of its 25th anniversary celebration. It's one of the most anticipated games for the current crop of consoles, especially since the fifth main installment in the series — the second-best-selling video game of all time, as Bloomberg notes — came out way back in 2013.

While Rockstar has yet to launch the title, some fans may have already gotten a glimpse of early-days gameplay footage due to a leak that a hacker uploaded online in 2022. It contained 90 videos of gameplay from a GTA VI test build, including footage of one of the two playable protagonists, a female character named Lucia, robbing a store. Another clip showed the other playable character riding the "Vice City Metro," indicating that the story takes place in Rockstar's fictionalized version of Miami. The developer later confirmed the contents of the leak and said that the game's development would continue "as planned."

Rockstar may reveal GTA VI's release period alongside the trailer next month, but its parent company Take-Two previously hinted that it's coming out sometime in 2024. 

Update, November 8, 2023, 8:15AM ET: This story has been updated to note that Rockstar has confirmed it'll release a trailer for the next Grand Theft Auto game in December.

This article originally appeared on Engadget at https://www.engadget.com/the-first-grand-theft-auto-vi-trailer-will-arrive-in-early-december-045219564.html?src=rss

Nintendo confirms a live-action Legend of Zelda movie is really happening

It's been rumored for years, but Nintendo still managed to surprise us with a late-day announcement: a live-action film based on The Legend of Zelda is in the works, directed by Wes Ball. Ball's most recent films are the Maze Runner series, the latest of which was released in 2018. Nintendo's Shigeru Miyamoto is producing the film along with Avi Arad, who has produced or executive produced loads of Marvel movies over the last decade-plus.

Surprisingly, the film is being co-financed by Nintendo and none other than Sony Pictures Entertainment. You know, part of the same company that owns PlayStation. Nintendo was quick to point out that it is financing more than 50 percent of the film, but that Sony Pictures Entertainment will be the theatrical distributor.

Aside from that, the only other details come from a tweet by Miyamoto.

In the tweet, Miyamoto says they have officially started development on the film, with Nintendo "heavily involved" in the production. He also notes that it'll "take time" before its completion, but that he hopes fans look forward to seeing it.

Way back in 2015, we heard rumors from the Wall Street Journal that Nintendo and Netflix were making a live-action Zelda show, but that never came together (and there's a pretty weird story around why). The success of The Super Mario Bros. Movie, though, was perhaps the final push Nintendo needed to make this project a reality. And while there's plenty of time for things to go wrong between now and the movie hitting theaters, this Zelda fan is cautiously excited about the prospect of another classic Nintendo franchise making its way to the big screen.

This article originally appeared on Engadget at https://www.engadget.com/nintendo-is-making-a-live-action-legend-of-zelda-movie-221618064.html?src=rss

Microsoft will let Xbox game makers use AI tools for story design and NPCs

Xbox has teamed up with a startup called Inworld AI to build a generative AI toolset that developers can use to make games. It's a multi-year collaboration, which the Microsoft-owned brand says can "assist and empower creators in dialogue, story and quest design." Specifically, the partners are looking to develop an "AI design copilot" that can turn prompts into detailed scripts, dialogue trees, quests and other game elements, much in the same way people can type ideas into generative AI chatbots and get detailed scripts in return. They're also going to work on an "AI character runtime engine" that developers can plug into their actual games, allowing players to generate new stories, quests and dialogues as they go.

Inworld's website says its technology can "craft characters with distinct personalities and contextual awareness that stay in-world." Apparently, it can provide developers with a "fully integrated character engine for AI NPCs that goes beyond large language models (LLMs)." The company previously developed the Droid Maker tool in collaboration with Lucasfilm's storytelling studio ILM Immersive after it was accepted into the Disney Accelerator program. As Kotaku notes, though, the company's tech has yet to ship with a major game release, and it has mostly been used for mods.

Developers are understandably wary about these upcoming tools. There are growing concerns among creatives about companies using their work to train generative AI without permission — a group of authors, including John Grisham and George R.R. Martin, even sued OpenAI, accusing the company of infringing on their copyright. And then, of course, there's the ever-present worry that developers could decide to lay off writers and designers to cut costs. 

Xbox believes, however, that these tools can "help make it easier for developers to realize their visions, try new things, push the boundaries of gaming today and experiment to improve gameplay, player connection and more." In the brand's announcement, Haiyan Zhang, General Manager of Gaming AI, said: "We will collaborate and innovate with game creators inside Xbox studios as well as third-party studios as we develop the tools that meet their needs and inspire new possibilities for future games."

This article originally appeared on Engadget at https://www.engadget.com/microsoft-will-let-xbox-game-makers-use-ai-tools-for-story-design-and-npcs-083027899.html?src=rss

Meta reportedly won’t make its AI advertising tools available to political marketers

Facebook is no stranger to moderating and mitigating misinformation on its platform, having long employed machine learning and artificial intelligence systems to help supplement its human-led moderation efforts. At the start of October, the company extended its machine learning expertise to its advertising efforts with an experimental set of generative AI tools that can perform tasks like generating backgrounds, adjusting images and creating captions for an advertiser's video content. Reuters reported Monday that Meta will specifically not make those tools available to political marketers ahead of what is expected to be a brutal and divisive national election cycle.

Meta's decision to bar political marketers from its generative AI tools is in line with much of the social media ecosystem, though, as Reuters is quick to point out, the company "has not yet publicly disclosed the decision in any updates to its advertising standards." TikTok and Snap both ban political ads on their networks, Google employs a "keyword blacklist" to prevent its generative AI advertising tools from straying into political speech and X (formerly Twitter) is, well, you've seen it.

Meta does allow for a wide latitude of exceptions to this rule. The tool ban only extends to "misleading AI-generated video in all content, including organic non-paid posts, with an exception for parody or satire," per Reuters. Those exceptions are currently under review by the company's independent Oversight Board as part of a case in which Meta left up an "altered" video of President Biden because, the company argued, it was not generated by an AI.

Facebook, along with other leading Silicon Valley AI companies, agreed in July to voluntary commitments set out by the White House enacting technical and policy safeguards in the development of their future generative AI systems. Those include expanding adversarial machine learning (aka red-teaming) efforts to root out bad model behavior, sharing trust and safety information both within the industry and with the government, as well as development of a digital watermarking scheme to authenticate official content and make clear that it is not AI-generated. 

This article originally appeared on Engadget at https://www.engadget.com/meta-reportedly-wont-make-its-ai-advertising-tools-available-to-political-marketers-010659679.html?src=rss

How the meandering legal definition of ‘fair use’ cost us Napster but gave us Spotify

The internet's "enshittification," as veteran journalist and privacy advocate Cory Doctorow describes it, began decades before TikTok made the scene. Elder millennials remember the good old days of Napster — followed by the much worse old days of Napster being sued into oblivion along with Grokster and the rest of the P2P sharing ecosystem, until we were left with a handful of label-approved, catalog-sterilized streaming platforms like Pandora and Spotify. Three cheers for corporate copyright litigation.

In his new book The Internet Con: How to Seize the Means of Computation, Doctorow examines the modern social media landscape, cataloging and illustrating the myriad failings and short-sighted business decisions of the Big Tech companies operating the services that promised us the future but just gave us more Nazis. We have both an obligation and responsibility to dismantle these systems, Doctorow argues, and a means to do so with greater interoperability. In this week's Hitting the Books excerpt, Doctorow examines the aftermath of the lawsuits against P2P sharing services, as well as the role that the Digital Millennium Copyright Act's "notice-and-takedown" reporting system and YouTube's "Content ID" scheme play on modern streaming sites.


Excerpted from The Internet Con: How to Seize the Means of Computation by Cory Doctorow. Published by Verso. Copyright © 2023 by Cory Doctorow. All rights reserved.


Seize the Means of Computation

The harms from notice-and-takedown itself don’t directly affect the big entertainment companies. But in 2007, the entertainment industry itself engineered a new, more potent form of notice-and-takedown that manages to inflict direct harm on Big Content, while amplifying the harms to the rest of us. 

That new system is “notice-and-stay-down,” a successor to notice-and-takedown that monitors everything every user uploads or types and checks to see whether it is similar to something that has been flagged as a copyrighted work. This has long been a legal goal of the entertainment industry, and in 2019 it became a feature of EU law, but back in 2007, notice-and-stay-down made its debut as a voluntary modification to YouTube, called “Content ID.” 

Some background: in 2007, Viacom (part of CBS) filed a billion-dollar copyright suit against YouTube, alleging that the company had encouraged its users to infringe on its programs by uploading them to YouTube. Google — which acquired YouTube in 2006 — defended itself by invoking the principles behind Betamax and notice-and-takedown, arguing that it had lived up to its legal obligations and that Betamax established that “inducement” to copyright infringement didn’t create liability for tech companies (recall that Sony had advertised the VCR as a means of violating copyright law by recording Hollywood movies and watching them at your friends’ houses, and the Supreme Court decided it didn’t matter). 

But with Grokster hanging over Google’s head, there was reason to believe that this defense might not fly. There was a real possibility that Viacom could sue YouTube out of existence — indeed, profanity-laced internal communications from Viacom — which Google extracted through the legal discovery process — showed that Viacom execs had been hotly debating which one of them would add YouTube to their private empire when Google was forced to sell YouTube to the company. 

Google squeaked out a victory, but was determined not to end up in a mess like the Viacom suit again. It created Content ID, an “audio fingerprinting” tool that was pitched as a way for rights holders to block, or monetize, the use of their copyrighted works by third parties. YouTube allowed large (at first) rightsholders to upload their catalogs to a blocklist, and then scanned all user uploads to check whether any of their audio matched a “claimed” clip. 

Once Content ID determined that a user was attempting to post a copyrighted work without permission from its rightsholder, it consulted a database to determine the rights holder’s preference. Some rights holders blocked any uploads containing audio that matched theirs; others opted to take the ad revenue generated by that video. 
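To make that flow concrete, here's a minimal, illustrative sketch in Python — with entirely hypothetical names and data, not YouTube's actual implementation — of the block-or-monetize decision the excerpt describes:

```python
# Illustrative sketch only: hypothetical names and data, not YouTube's real system.
from dataclasses import dataclass

@dataclass
class Claim:
    rightsholder: str
    policy: str  # "block" or "monetize", per the rightsholder's stated preference

# Toy "database" mapping fingerprints of claimed works to rightsholder preferences.
CLAIMS = {
    "fp_pop_single": Claim(rightsholder="ExampleLabel", policy="monetize"),
    "fp_tv_episode": Claim(rightsholder="ExampleStudio", policy="block"),
}

def handle_upload(upload_fingerprint: str) -> str:
    """Decide what happens to an upload whose audio matches a claimed work."""
    claim = CLAIMS.get(upload_fingerprint)
    if claim is None:
        return "publish"  # no match: the video goes up normally
    if claim.policy == "block":
        return "blocked"  # the rightsholder chose to block matching uploads
    return f"published, ad revenue routed to {claim.rightsholder}"

print(handle_upload("fp_pop_single"))     # published, ad revenue routed to ExampleLabel
print(handle_upload("fp_tv_episode"))     # blocked
print(handle_upload("fp_original_song"))  # publish
```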

There are lots of problems with this. Notably, there’s the inability of Content ID to determine whether a third party’s use of someone else’s copyright constitutes “fair use.” As discussed, fair use is the suite of uses that are permitted even if the rightsholder objects, such as taking excerpts for critical or transformational purposes. Fair use is a “fact intensive” doctrine—that is, the answer to “Is this fair use?” is almost always “It depends, let’s ask a judge.” 

Computers can’t sort fair use from infringement. There is no way they ever can. That means that filters block all kinds of legitimate creative work and other expressive speech — especially work that makes use of samples or quotations. 

But it’s not just creative borrowing, remixing and transformation that filters struggle with. A lot of creative work is similar to other creative work. For example, a six-note phrase from Katy Perry’s 2013 song “Dark Horse” is effectively identical to a six-note phrase in “Joyful Noise,” a 2008 song by a much less well-known Christian rapper called Flame. Flame and Perry went several rounds in the courts, with Flame accusing Perry of violating his copyright. Perry eventually prevailed, which is good news for her. 

But YouTube’s filters struggle to distinguish Perry’s six-note phrase from Flame’s (as do the executives at Warner Chappell, Perry’s publisher, who have periodically accused people who post snippets of Flame’s “Joyful Noise” of infringing on Perry’s “Dark Horse”). Even when the similarity isn’t as pronounced as in Dark, Joyful, Noisy Horse, filters routinely hallucinate copyright infringements where none exist — and this is by design. 

To understand why, first we have to think about filters as a security measure — that is, as a measure taken by one group of people (platforms and rightsholder groups) who want to stop another group of people (uploaders) from doing something they want to do (upload infringing material). 

It’s pretty trivial to write a filter that blocks exact matches: the labels could upload losslessly encoded pristine digital masters of everything in their catalog, and any user who uploaded a track that was digitally or acoustically identical to that master would be blocked. 

But it would be easy for an uploader to get around a filter like this: they could just compress the audio ever-so-slightly, below the threshold of human perception, and this new file would no longer match. Or they could cut a hundredth of a second off the beginning or end of the track, or omit a single bar from the bridge, or any of a million other modifications that listeners are unlikely to notice or complain about. 

Filters don’t operate on exact matches: instead, they employ “fuzzy” matching. They don’t just block the things that rights holders have told them to block — they block stuff that’s similar to those things that rights holders have claimed. This fuzziness can be adjusted: the system can be made more or less strict about what it considers to be a match. 
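As a rough, purely illustrative sketch of that difference — real filters compare acoustic fingerprints, not raw bytes or toy feature vectors — here is an exact-hash check next to a similarity check whose threshold can be loosened or tightened:

```python
# Toy illustration of exact vs. "fuzzy" matching. It shows why exact matching is
# trivially defeated and why loosening a similarity threshold catches more copies
# but also more false positives.
import hashlib

def exact_match(upload: bytes, master: bytes) -> bool:
    # Any tiny change to the file (re-encoding, trimming a fraction of a second)
    # produces a different hash, so the match fails.
    return hashlib.sha256(upload).digest() == hashlib.sha256(master).digest()

def fuzzy_match(upload_features: list[float], master_features: list[float],
                threshold: float = 0.9) -> bool:
    """Compare feature vectors (a stand-in for acoustic fingerprints).

    Lowering `threshold` makes matching looser: more slightly modified copies
    get caught, but more legitimate, merely similar uploads get flagged too.
    """
    distance = sum(abs(a - b) for a, b in zip(upload_features, master_features))
    similarity = 1.0 - distance / max(len(master_features), 1)
    return similarity >= threshold

# A slightly re-encoded copy no longer matches exactly...
print(exact_match(b"original track bytes", b"original track bytes, re-encoded"))  # False
# ...but its features are still "similar enough" under a loose threshold.
print(fuzzy_match([0.20, 0.40, 0.60], [0.21, 0.41, 0.62], threshold=0.9))         # True
```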

Rightsholder groups want the matches to be as loose as possible, because somewhere out there, there might be someone who’d be happy with a very fuzzy, truncated version of a song, and they want to stop that person from getting the song for free. The looser the matching, the more false positives. This is an especial problem for classical musicians: their performances of Bach, Beethoven and Mozart inevitably sound an awful lot like the recordings that Sony Music (the world’s largest classical music label) has claimed in Content ID. As a result, it has become nearly impossible to earn a living off of online classical performance: your videos are either blocked, or the ad revenue they generate is shunted to Sony. Even teaching classical music performance has become a minefield, as painstakingly produced, free online lessons are blocked by Content ID or, if the label is feeling generous, the lessons are left online but the ad revenue they earn is shunted to a giant corporation, stealing the creative wages of a music teacher.

Notice-and-takedown law didn’t give rights holders the internet they wanted. What kind of internet was that? Well, though entertainment giants said all they wanted was an internet free from copyright infringement, their actions — and the candid memos released in the Viacom case — make it clear that blocking infringement is a pretext for an internet where the entertainment companies get to decide who can make a new technology and how it will function.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-internet-con-cory-doctorow-verso-153018432.html?src=rss

Blizzard’s next World of Warcraft expansions make up a three-part saga

Blizzard is planning an MCU-style future for World of Warcraft. Chris Metzen, who only recently returned to the company as the executive creative director of Warcraft, has announced that the game's next three expansions will make up a three-part interconnected saga. Since Blizzard typically releases expansions within two years of each other, the story of "The Worldsoul Saga" will take years to unfold. The first installment, World of Warcraft: The War Within, is slated for release sometime in 2024, followed by World of Warcraft: Midnight and World of Warcraft: The Last Titan in the years after that.

Warcraft general manager John Hight said the trilogy encompasses "one of the most ambitious creative endeavors ever attempted for World of Warcraft." Each one is a standalone narrative, but they're connected by an overall story arc, he explained. "Alongside these epic adventures, the ongoing quality-of-life feature updates players have come to expect from us since Dragonflight will continue in The War Within, further setting us up for the next 20 years and beyond," Hight added. 

While Blizzard has yet to release an in-depth summary for The War Within, it did share a few pertinent details about the expansion. It will feature an ancient civilization underneath the surface of the planet that is rising in power, while Alliance and Horde heroes experience visions of possible futures, both good and bad. Players can grind until they reach the expansion's level cap of 80, and they can explore a new continent called Khaz Algar. There's also a new unlockable and playable Titan-forged race called the Earthen; new bite-size adventures called Delves that support one to five players; and a new feature dubbed Warbands, which lets players share banks, reputations and transmogs across characters.

Fans can already pre-purchase The War Within for $50, which also gives them instant access to the Dragonflight expansion. The War Within: Heroic Edition, which comes with extras, is available for $70, while The War Within: Epic Edition, which includes beta access to the expansion along with even more extras, will set them back $90.

This article originally appeared on Engadget at https://www.engadget.com/blizzards-next-world-of-warcraft-expansions-make-up-a-three-part-saga-154521763.html?src=rss