Prime Video’s latest Fallout trailer deftly captures the tone of the games

Amazon has released a full trailer for the live-action Fallout series that's coming to Prime Video soon. It's our most in-depth look yet at the show, and early indications suggest that the creative team has captured the distinct blend of irreverence and violence that helped Bethesda's game series become so successful.

The clip focuses on Lucy (Ella Purnell), a young woman who emerges from a fallout bunker into what used to be Los Angeles, 200 years after a nuclear apocalypse. Lucy quickly discovers that life on the surface isn't quite as cushy as staying in a luxury vault. "Practically every person I've met up here has tried to kill me," she says, seconds before we see a robot attempt to harvest her organs.

The trailer (and the show) gets a helping hand from the otherworldly charm of Walton Goggins as a pitchman for living in a fallout shelter. His character somehow survives the apocalypse and is still around two centuries later, carving out a life as a mutated bounty hunter called The Ghoul. The trailer has a ton of other references to the games for fans to drink in.

Amazon also took the opportunity to reveal that Fallout will arrive on Prime Video on April 11, one day earlier than previously announced. You won't have to wait a week between episodes either, as the entire season will drop at once.

Facebook is using AI to supercharge the algorithm that recommends you videos

Meta is revamping how Facebook recommends videos across Reels, Groups, and the main Facebook Feed by using AI to power its video recommendation algorithm, Facebook head Tom Alison revealed on Wednesday. The world's largest social network has already switched Reels, its TikTok competitor, to the new engine, and plans to use it in all places within Facebook that show video — the main Facebook feed and Groups — as part of a "technology roadmap" through 2026, Alison said at a Morgan Stanley tech conference in San Francisco.

Meta has made competing with TikTok a top priority ever since the app, known for serving up vertical video clips via a recommendation engine that seems to know exactly what will keep users hooked, began exploding in popularity in the US a few years ago. When Facebook tested the new AI-powered recommendation engine with Reels, watch time went up by roughly 8 to 10 percent, Alison revealed. “So what that told us was this new model architecture is learning from the data much more efficiently than the previous generation,” Alison said. “So that was like a good sign that says, OK, we’re on the right track.”

Until now, Facebook has used separate video recommendation engines for Reels, Groups, and the Facebook feed. But after seeing success with Reels, the company plans to use the same AI-powered engine across all of these products.

“Instead of just powering Reels, we’re working on a project to power our entire video ecosystem with this single model, and then can we add our Feed recommendation product to also be served by this model,” Alison said. “If we get this right, not only will the recommendations be kind of more engaging and more relevant, but we think the responsiveness of them can improve as well.”
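Alison didn't share technical details, so the snippet below is purely illustrative and assumes nothing about Meta's actual system; every class and field name in it is hypothetical. It simply sketches the idea he describes: a single ranking model scoring video candidates from Reels, Groups and the feed, instead of a separate engine per surface.

# Purely illustrative sketch, not Meta's system: one shared ranking model
# scoring video candidates from every surface that shows video, instead of
# a separate engine per surface. All names below are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    surface: str     # "reels", "groups" or "feed"
    topics: list     # topic tags standing in for richer engagement features

class UnifiedRanker:
    """One shared model that replaces the per-surface recommendation engines."""

    def score(self, recent_topics: list, candidate: Candidate) -> float:
        # Toy stand-in for a learned model: favor overlap with what the
        # user watched recently, regardless of which surface it came from.
        return len(set(candidate.topics) & set(recent_topics))

    def rank(self, recent_topics: list, candidates: list) -> list:
        return sorted(candidates, key=lambda c: self.score(recent_topics, c), reverse=True)

# The same ranker serves Reels, Groups and feed candidates alike.
ranker = UnifiedRanker()
ranked = ranker.rank(
    ["cooking", "travel"],
    [
        Candidate("v1", "reels", ["cooking"]),
        Candidate("v2", "groups", ["finance"]),
        Candidate("v3", "feed", ["travel", "cooking"]),
    ],
)
print([c.video_id for c in ranked])   # ['v3', 'v1', 'v2']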

The move is part of Meta’s strategy to infuse AI into all of its products after the technology exploded in popularity with the launch of OpenAI’s ChatGPT at the end of 2022. The company is spending billions of dollars to buy up hundreds of thousands of pricey NVIDIA GPUs used to train and power AI models, Meta CEO Mark Zuckerberg said in a video earlier this year.

Sleight of Hand is a new noir game from the creator of Framed

Framed creator Joshua Briggs is back with another mystery game, and this time it has a supernatural element. RiffRaff Games has announced the upcoming release of Sleight of Hand, a "third-person card-slinging occult noir stealth sim" — a string of words that alone has us very intrigued.

Sleight of Hand follows Lady Luck, a former occult detective who must track down and defeat her former coven. Yes, excitingly, a woman is the noir protagonist, and she comes to Steeple City with a cursed deck (necessary, given she lost her left hand the last time she saw her fellow witches). Each card has a unique ability, such as the Hex card, which latches onto a hidden enemy, and the Peekaboo card, which thrusts that enemy into view. Another useful-sounding one is the Chain Smoker card: it ties the fates of multiple adversaries together so Lady Luck can use one card to stall them all.

Gameplay also includes solving puzzles to unlock secret passageways and interrogating coven members, all in hopes of getting to the leader. The entirety of Sleight of Hand is grounded in the very relatable reason Lady Luck puts herself back in danger: she has overdue bills and old debts to settle. Lady Luck herself is voiced by Debi Mae West, whom you might recognize as Metal Gear Solid's Meryl Silverburgh. Sleight of Hand is set to arrive in 2025 on Steam and as a day-one Game Pass launch for Xbox Series X|S and Windows PC.

Microsoft engineer who raised concerns about Copilot image creator pens letter to the FTC

Microsoft engineer Shane Jones raised concerns about the safety of OpenAI’s DALL-E 3 back in January, suggesting the product has security vulnerabilities that make it easy to create violent or sexually explicit images. He also alleged that Microsoft’s legal team blocked his attempts to alert the public to the issue. Now, he has taken his complaint directly to the FTC, as reported by CNBC.

“I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in a letter to FTC Chair Lina Khan. He noted that Microsoft “refused that recommendation,” so now he’s asking the company to add disclosures to the product to alert consumers to the alleged danger. Jones also wants the company to change the rating on the app to make sure it’s only for adult audiences. Copilot Designer’s Android app is currently rated “E for Everyone.”

Microsoft continues “to market the product to ‘Anyone. Anywhere. Any Device,’” he wrote, referring to a promotional slogan recently used by company CEO Satya Nadella. Jones penned a separate letter to the company’s board of directors, urging them to begin “an independent review of Microsoft’s responsible AI incident reporting processes.”

A sample image (a banana couch) generated by DALL-E 3 (OpenAI)

This all boils down to whether Microsoft's implementation of DALL-E 3 will create violent or sexual imagery despite the guardrails put in place. Jones says it’s all too easy to “trick” the platform into making the grossest stuff imaginable. The engineer and red-teamer says he regularly witnessed the software whip up unsavory images from innocuous prompts. The prompt “pro-choice,” for instance, created images of demons feasting on infants and Darth Vader holding a drill to the head of a baby. The prompt “car accident” generated pictures of sexualized women, alongside violent depictions of automobile crashes. Other prompts created images of teens holding assault rifles, kids using drugs and pictures that ran afoul of copyright law.

These aren’t just allegations. CNBC was able to recreate just about every scenario that Jones called out using the standard version of the software. According to Jones, many consumers are encountering these issues, but Microsoft isn’t doing much about it. He alleges that the Copilot team receives more than 1,000 daily product feedback complaints, but that he’s been told there aren’t enough resources available to fully investigate and solve these problems.

“If this product starts spreading harmful, disturbing images globally, there’s no place to report it, no phone number to call and no way to escalate this to get it taken care of immediately,” he told CNBC.

OpenAI told Engadget back in January when Jones issued his first complaint that the prompting technique he shared “does not bypass security systems” and that the company has “developed robust image classifiers that steer the model away from generating harmful images.”

A Microsoft spokesperson added that the company has “established robust internal reporting channels to properly investigate and remediate any issues,” going on to say that Jones should “appropriately validate and test his concerns before escalating it publicly.” The company also said that it’s “connecting with this colleague to address any remaining concerns he may have.” However, that was in January, so it looks like Jones’ remaining concerns were not properly addressed. We reached out to both companies for an updated statement.

This is happening just after Google’s Gemini chatbot encountered its own image generation controversy. The bot was found to be making historically inaccurate images, like Native American Catholic Popes. Google disabled the image generation platform while it continues to work on a fix.

Microsoft accuses the New York Times of doom-mongering in OpenAI lawsuit

Microsoft has filed a motion seeking to dismiss key parts of a lawsuit The New York Times filed against the company and OpenAI, accusing them of copyright infringement. If you'll recall, The Times sued both companies for using its published articles to train their GPT large language models (LLMs) without permission or compensation. In its filing, the company accused The Times of pushing "doomsday futurology" by claiming that AI technologies pose a threat to independent journalism. It follows OpenAI's court filing from late February, which also seeks to dismiss some key elements of the case.

Like OpenAI before it, Microsoft accused The Times of crafting "unrealistic prompts" in an effort to "coax the GPT-based tools" to spit out responses matching its content. It also compared the media organization's lawsuit to Hollywood studios' efforts to "stop a groundbreaking new technology": the VCR. Instead of destroying Hollywood, Microsoft explained, the VCR helped the entertainment industry flourish by opening up revenue streams. LLMs are a breakthrough in artificial intelligence, it continued, and Microsoft collaborated with OpenAI to "help bring their extraordinary power to the public" because it "firmly believes in LLMs' capacity to improve the way people live and work."

The company is asking the court to dismiss three claims, including one saying it's liable for end-user copyright infringement through the use of GPT-based tools and another that says it violates the Digital Millennium Copyright Act. Microsoft also wants the court to dismiss the element of the case wherein The Times accused it of misappropriating time-sensitive breaking news and consumer purchasing recommendations. As an example, The Times argued in its lawsuit that it will lose revenue if users ask ChatGPT to research articles on Wirecutter, which the news company owns, because potential buyers will no longer click on its referral links. But that's "mere speculation about what The Times apparently fears might happen," and it didn't give a single real-world example in its complaint, Microsoft said.

"Microsoft doesn't dispute that it worked with OpenAI to copy millions of The Times's works without its permission to build its tools," Ian Crosby, lead counsel for The Times, told the publication." Instead, it oddly compares L.L.M.s to the VCR even though VCR makers never argued that it was necessary to engage in massive copyright infringement to build their products."

This isn't the only lawsuit OpenAI and Microsoft are facing over the content used to train the former's LLMs. Nonfiction writers and fiction authors, including Michael Chabon, George R.R. Martin, John Grisham and Jodi Picoult, have accused the companies of stealing their work for AI training. More recently, The Intercept, Raw Story and AlterNet filed separate lawsuits of their own, alleging that ChatGPT reproduces their content "verbatim or nearly verbatim" while removing proper attribution.
