Paper Trail, the game that lets you fold the world around you, finally has a release date after multiple delays. The top-down puzzler is now scheduled to launch on May 21.
Developed and published by the UK-based Newfangled Games, Paper Trail combines craft-inspired art with a unique folding mechanic that lets you crease and bend your environment to connect new paths and solve puzzles. “Alter the fabric of your world, contorting, spinning, rotating, twisting around — as you try to untangle the puzzle of the Paper Trail,” the game’s Steam description reads.
The game’s art style matches its folding mechanic, drawing inspiration from flat aesthetic styles, including printmaking and watercolor. You play as Paige (get it?), an 18-year-old aspiring astrophysicist with fuddy-duddy parents, making her way to University to pursue her calling in scientific research. The developer describes Paper Trail as easy to grasp but difficult to master, and you can imagine how the game could rack your brain when it ramps up in intensity and complexity as you reach the later levels.
Paper Trail will be available on PC, consoles (PS5 / PS4, Xbox Series X / S, Xbox One and Switch) and the Netflix mobile app (iOS and Android) on May 21. If PC is your platform of choice, you can already wishlist the game on Steam.
Around 2.25 billion cups of coffee are consumed every day, and even if only a fraction of them come out of Keurig or Nespresso machines, that's a LOT of coffee pods getting thrown away after a single use. Keurig is finally tackling this persistent problem with the K-Round, an alternative to the pod that's biodegradable, plastic-free, and still manages to produce a great brew. The K-Round is essentially a compressed disc or puck of coffee grounds (sort of like how your local barista tamps coffee into a puck), bound together with plant-based materials like cellulose. The K-Rounds go into Keurig's upcoming machine, the Alta, which can process these rounds, extracting coffee from them without leaving you with a throwaway plastic-and-metal pod the way a regular Keurig machine currently does.
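For a rough sense of scale, here's a quick back-of-the-envelope estimate in Python; the single-serve share is an assumed figure for illustration, not a number from Keurig.

```python
# Back-of-the-envelope: how many pods a day might single-serve brewing produce?
DAILY_CUPS = 2.25e9          # cups of coffee consumed worldwide per day (from the article)
SINGLE_SERVE_SHARE = 0.05    # assumed share brewed in pod machines -- purely illustrative

pods_per_day = DAILY_CUPS * SINGLE_SERVE_SHARE
print(f"~{pods_per_day:,.0f} pods per day")         # ~112,500,000
print(f"~{pods_per_day * 365:,.0f} pods per year")  # ~41 billion
```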
Every company reaches a level of scale where it suddenly becomes difficult to sustain growth, and Keurig CEO Bob Gamgort mentioned that the company had reached that point. Creating great coffee is easy, but that isn't precisely what Keurig does. The company creates great 'single-serve' coffee, helping users brew exactly one cup at a time instead of an entire pot, which leaves you either drinking more coffee than you need or throwing the rest away. Keurig pioneered the single-serve coffee movement, and now, in order to grow even further, it has realized that generating more waste in the form of use-and-throw pods isn't particularly tenable.
Enter the K-Round, a puck of compressed coffee that achieves a few things. For starters, it does away with the pod entirely, using only plant-based natural materials in its design. The K-Round is entirely biodegradable and leaves no waste apart from a small leftover disc that can easily be composted or discarded with organic waste. More distinctively, the K-Round reinvents the perception of the pod by giving users a sensorial experience BEFORE the coffee is even brewed. Most coffee pods are shrouded in mystery – nobody knows what's in them or how they work, and all you really have is a label on top telling you what's inside. The K-Round, on the other hand, is much more tangible. Users can actually look at the puck and see how coarse or fine the grounds are, or whether they're light or dark-roasted. The pucks also give off a distinct coffee aroma, helping prepare you for the brewing/drinking journey you're about to embark on, all while keeping the process relatively simple – place the puck in the machine, shut the lid, hit the button, and voila! Barista-level coffee brewed in mere minutes.
The K-Rounds are essentially just roasted, ground coffee that's been compressed into the shape of a puck and bound together using a plant-based coating of cellulose and alginate (the same stuff used to create those bursting pearls in boba tea). Some variants also contain sorbitol, a form of sugar that's about 50% as sweet as sucrose and is non-fermenting (you don't want the coffee turning into alcohol inside the puck). The engineers at Keurig Dr Pepper (yes, that's the name of the company, I didn't know they were co-owned either) developed the K-Rounds to be space-saving, shelf-stable, and entirely plant-based, while still ensuring that the resulting coffee tastes great and doesn't carry any undesired flavors or aromas. Their inspiration for the puck shape came from the way baristas tamp coffee into pucks before loading them into espresso machines. The pucks come in a variety of sizes depending on the type of brew: espressos are smaller and flatter, while 'larger' brews like double shots or tall cold brews use taller pucks. The K-Rounds currently only work with the upcoming Keurig Alta coffee machine, which can apparently identify each puck and automatically adjust temperature, water level, and brew time accordingly. Notably, the Alta is also designed to be backwards compatible, and will accept the older use-and-throw K-Cup pods. Neither the Alta nor the K-Rounds has an official launch date yet – Keurig says it's still fine-tuning both based on consumer feedback. If you want to be part of the beta test, Keurig is inviting coffee aficionados to sign up on its website.
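The puck-recognition behavior described above boils down to mapping an identified puck to a brew profile. Here's a minimal Python sketch of that idea, assuming made-up puck names and brew values; none of these numbers are Keurig specifications.

```python
# Hypothetical sketch of the Alta's "identify the puck, pick the brew profile" idea.
# All puck names and values below are illustrative assumptions, not Keurig specs.
from dataclasses import dataclass

@dataclass
class BrewProfile:
    temp_c: float      # water temperature
    water_ml: int      # water volume
    brew_seconds: int  # total brew time

BREW_PROFILES = {
    "espresso":  BrewProfile(temp_c=93.0, water_ml=40,  brew_seconds=30),
    "double":    BrewProfile(temp_c=93.0, water_ml=80,  brew_seconds=45),
    "cold_brew": BrewProfile(temp_c=10.0, water_ml=350, brew_seconds=180),
}

def brew(puck_type: str) -> BrewProfile:
    """Look up the brew settings for a recognized puck."""
    try:
        return BREW_PROFILES[puck_type]
    except KeyError:
        raise ValueError(f"Unrecognized puck: {puck_type!r}")

print(brew("espresso"))  # BrewProfile(temp_c=93.0, water_ml=40, brew_seconds=30)
```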
The European Parliament has approved sweeping legislation to regulate artificial intelligence, nearly three years after the draft rules were first proposed. Officials reached an agreement on AI development in December. On Wednesday, members of the parliament approved the AI Act with 523 votes in favor, 46 against and 49 abstentions.
The EU says the regulations seek to "protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field." The act defines obligations for AI applications based on potential risks and impact.
The legislation has not become law yet. It's still subject to lawyer-linguist checks, and the European Council needs to formally endorse it. But the AI Act is likely to come into force before the end of the legislature, ahead of the next parliamentary election in early June.
Most of the provisions will take effect 24 months after the AI Act becomes law, but bans on prohibited applications will apply after six months. The EU is banning practices that it believes will threaten citizens' rights. "Biometric categorization systems based on sensitive characteristics" will be outlawed, as will the "untargeted scraping" of images of faces from CCTV footage and the web to create facial recognition databases. Clearview AI's activity would fall under that category.
Other applications that will be banned include social scoring; emotion recognition in schools and workplaces; and "AI that manipulates human behavior or exploits people’s vulnerabilities." Some aspects of predictive policing will be prohibited as well, namely when it's based entirely on assessing someone's characteristics (such as inferring their sexual orientation or political opinions) or on profiling them. Although the AI Act by and large bans law enforcement's use of biometric identification systems, it will be allowed in certain circumstances with prior authorization, such as to help find a missing person or prevent a terrorist attack.
Applications deemed high-risk, including the use of AI in law enforcement and healthcare, are subject to certain conditions. They must not discriminate and they need to abide by privacy rules. Developers also have to show that the systems are transparent, safe and explainable to users. As for AI systems the EU deems low-risk (like spam filters), developers still have to inform users that they're interacting with AI-generated content.
The law has some rules when it comes to generative AI and manipulated media too. Deepfakes and any other AI-generated images, videos and audio will need to be clearly labeled. AI models will have to respect copyright laws too. "Rightsholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research," the text of the AI Act reads. "Where the rights to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorization from rightsholders if they want to carry out text and data mining over such works." However, AI models built purely for research, development and prototyping are exempt.
The most powerful general-purpose and generative AI models (those trained using a total computing power of more than 10^25 FLOPs) are deemed to have systemic risks under the rules. The threshold may be adjusted over time, but OpenAI's GPT-4 and DeepMind's Gemini are believed to fall into this category.
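For a sense of what that threshold means, here's a rough Python sketch using the widely cited approximation that training compute is roughly 6 × parameters × training tokens; the model size and token count below are assumptions for illustration, not figures from the AI Act or any provider.

```python
# Rough check of the AI Act's 1e25-FLOP "systemic risk" threshold.
# Uses the common heuristic: training compute ≈ 6 * parameters * training tokens.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the AI Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Illustrative (assumed) model: 1 trillion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(n_params=1e12, n_tokens=10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")                        # 6.00e+25
print("Over the systemic-risk threshold?", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # True
```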
The providers of such models will have to assess and mitigate risks, report serious incidents, provide details of their systems' energy consumption, ensure they meet cybersecurity standards and carry out state-of-the-art tests and model evaluations.
As with other EU regulations targeting tech, the penalties for violating the AI Act's provisions can be steep. Companies that break the rules will be subject to fines of up to €35 million ($51.6 million) or up to seven percent of their global annual turnover, whichever is higher.
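The "whichever is higher" wording matters: for large companies it's the percentage, not the flat €35 million figure, that sets the ceiling. Here's a tiny sketch of the calculation, with a made-up turnover figure for illustration.

```python
# "Whichever is higher": the fine ceiling scales with company size.
FLAT_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07  # seven percent of global annual turnover

def max_fine(global_annual_turnover_eur: float) -> float:
    return max(FLAT_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# Assumed turnover of €2 billion: 7% is €140M, which exceeds the €35M floor.
print(f"€{max_fine(2_000_000_000):,.0f}")  # €140,000,000
```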
The AI Act applies to any model operating in the EU, so US-based AI providers will need to abide by its rules, at least in Europe. OpenAI CEO Sam Altman suggested last May that his company might pull out of Europe were the AI Act to become law, but later said the company had no plans to do so.
To enforce the law, each member country will create its own AI watchdog and the European Commission will set up an AI Office. The office will develop methods to evaluate models and monitor risks in general-purpose systems. Providers of general-purpose models that are deemed to carry systemic risks will be asked to work with the office to draw up codes of conduct.
YouTube just announced that it’ll be rolling out a redesign for its TV app over the next few weeks. Concrete details are scant, but the streaming platform says the new design will “open the door for a broad range of new experiences such as shopping for your creators’ favorite products.”
Beyond the pivot to shopping, the update should also improve existing features, with easier access to “video descriptions and comments.” To that end, when selected, both the description and the comment feed will take up more room, with the video itself shrinking in size. YouTube says that users regularly request a smaller video feed and a prioritization of comments. As it stands, the comment feed lies over the video, so this refresh will let users engage with comments without covering up the actual content.
I use the YouTube app on my TV every single day, and I want improved search, an easier way to refresh my personal feed and, most importantly, the ability to look for what I want to watch next as the current video plays. You know, just like with a phone. YouTube acknowledges that the push and pull between the TV-based “lean back” experience and the smartphone-adjacent “lean in” experience was at the heart of this redesign, but there’s no mention of anything I just brought up. You will, however, be able to buy a shirt someone is wearing in a video with a simple click of the remote.
YouTube did tease that sports fans will be able to check on live scores without interrupting a video, but didn’t get into the how of it all. We reached out to the platform, and a spokesperson told us it's working on adding the feature but has nothing to announce at this time. It also said that the redesign will make it easier to both see and access video chapters, which should be useful.
It’s worth noting that these updates are for the standard YouTube app for TVs, and not the live-service YouTube TV platform. However, the latter is getting its own update in a few days, with the ability to peruse Views without interrupting live content like sporting events.
Former professional esports player Dennis Fong founded GGWP in 2022, more than a year before companies like Microsoft and Google debuted their natural-language search engines and the AI revolution officially gripped the globe. GGWP is an AI-powered moderation system that identifies and takes action against in-game harassment and hate speech, and after two years on the scene, it’s now integrated into titles at more than 25 studios.
Fong may be a veteran of the Doom and Quake esports scenes, but he’s interested in protecting players from abuse in every genre, especially as social features become easier to implement for studios of all sizes. GGWP is live in thatgamecompany’s social adventure title Sky: Children of the Light, the meditation app TRIPP VR, the kids-focused MMO Toontown Rewritten, the first-person MOBA Predecessor, Fatshark’s action shooter Warhammer 40,000: Darktide, the metaverse platform The Sandbox, and it powers Unity’s anti-abuse toolset.
These aren’t all gritty military sims or hardcore competitive franchises like Counter-Strike or League of Legends, where you might expect emotional outbursts and increased toxicity. One-third of the games that utilize GGWP are co-op and PvE experiences, rather than competitive PvP settings, according to Fong. Turns out, cozy games need moderation too.
“Cozy games tend to see a lot more chat activity when compared to competitive games, so naturally there tend to be far more incidents that are chat-related as compared to gameplay,” Fong said. “That said, users are clever and are always discovering new ways to turn something intended to be positive, like a ‘thank you’ emote, into something negative by using it after a player makes a mistake. We help companies understand what’s happening and then implement tools to help curb that behavior.”
GGWP’s Unity partnership is particularly notable, if only because of its potential scale. GGWP powers Unity’s Safe Text and Safe Voice products, including its Vivox voice chat system, and it’s integrated into the uDash dashboard. Unity developers can activate GGWP in their games with a click and have billing handled through their existing Unity partnerships.
Outside of Unity, it takes just a few lines of code to activate GGWP in a game. There’s a free tier that allows studios to try out the system, and a self-service portal for the truly independent developer. Custom contracts for larger titles aside, GGWP charges based on the volume of API calls a game generates.
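The article doesn't show GGWP's actual API, so the snippet below is a hypothetical sketch of what a "few lines of code" chat-moderation integration generally looks like: send each chat message to a moderation endpoint and act on the verdict. The endpoint, payload fields and response shape are all invented for illustration.

```python
# Hypothetical sketch of a lightweight chat-moderation integration.
# The endpoint, payload fields and response shape are illustrative assumptions,
# not GGWP's actual API.
import requests

MODERATION_URL = "https://api.example-moderation.com/v1/chat"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def should_block(player_id: str, message: str) -> bool:
    """Return True if the moderation service says the message should be blocked."""
    resp = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"player_id": player_id, "text": message},
        timeout=2,
    )
    resp.raise_for_status()
    return resp.json().get("action") == "block"

# In the game's chat handler:
if should_block("player_123", "gg wp everyone"):
    print("Message blocked by moderation")
else:
    print("Message delivered")
```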
"There are companies that do a subset of what we do, but we’re the only comprehensive platform for positive play," Fong said.
In-game moderation is a massive problem for any game with a social feature, and the bigger the audience, the more harassment there is to sift through. One studio executive told Fong in 2022 that their game received more than 200 million player-submitted reports in just one year, and this volume was common among popular online titles. During his research phase, Fong found that most AAA studios addressed just 0.1 percent of all reports they received annually, and some had anti-toxicity teams of fewer than 10 people.
GGWP exists because most game companies, even the largest ones, are awful at moderating their spaces. Clicking the “report” button in many games feels like sending a strongly worded letter to a trash incinerator inside a black hole. Here’s how Fong described it to Engadget in 2022:
“I'm not gonna name names, but some of the biggest games in the world were like, you know, honestly it does go nowhere. It goes to an inbox that no one looks at. You feel that as a gamer, right? You feel despondent because you’re like, I’ve reported the same guy 15 times and nothing’s happened.”
GGWP has successfully blocked hundreds of millions of abusive messages and it’s being used to protect billions of user interactions monthly. Games that use the system have seen a 65 percent reduction in toxic behavior and a 15 percent improvement in player retention — meaning, GGWP is preventing harassment from happening in the first place, and this helps players feel comfortable enough to keep coming back.
On Wednesday, Sony unveiled the latest catalog of games for PlayStation Plus Extra and Premium subscribers. The latest batch of titles includes the Resident Evil 3 remake, Marvel’s Midnight Suns and NBA 2K24 (among others). You can play the games for free starting on Tuesday, March 19.
Capcom’s Resident Evil 3 remake (PS5 / PS4) arrived in early 2020. You play as Jill Valentine as you try to escape the virus-infected and zombie-overrun Raccoon City. Meanwhile, Marvel’s Midnight Suns (PS5 / PS4) is a tactical RPG set “in the darker side of the Marvel Universe.” Playable characters include Iron Man, Spider-Man, Wolverine, Captain America, Doctor Strange, Hulk, Deadpool and Captain Marvel.
Spurs rookie Victor Wembanyama in NBA 2K24 (Image: 2K Sports / Take-Two Interactive)
You can also claim the NBA 2K24 Kobe Bryant Edition (PS5 / PS4). The most current version of 2K’s long-running basketball franchise has updated rosters and historic teams, along with a “Mamba Moments” mode that relives some of the late Lakers Hall of Famer’s most memorable career highlights.
Lego DC Super-Villains (PS4 only) is a 2018 game that takes the Lego franchise’s goofy, family-friendly fun and flips the script — letting you play as the bad guys. You can control villains like The Joker, Harley Quinn, Lex Luthor, Catwoman, Two-Face and the Penguin.
Other claimable titles include turn-based death match Blood Bowl 3 (PS5, PS4), puzzler Mystic Pillars: Remastered (PS5), side-scrolling RPG Super Neptune (PS4) and action RPG Dragon Ball Z: Kakarot (PS5). The classics appearing this month include the Phoenix Wright: Ace Attorney Trilogy (PS4), Jak and Daxter: The Lost Frontier (PS5, PS4), Cool Boarders (PS5, PS4), Gods Eater Burst (PS5, PS4) and JoJo’s Bizarre Adventure: All-Star Battle R.