Meta says the future of Facebook is young adults (again)

When you think of the 20-year-old social network that is Facebook, its popularity among “young adults” is probably not what comes to mind. Naturally, Meta wants to change that and the company is once again telling the world it intends to reorient its platform in order to appeal to that demographic.

In an update from Tom Alison, who heads up the Facebook app for Meta, he says that the service is shifting to reflect an “increased focus on young adults” compared with other users. “Facebook is still for everyone, but in order to build for the next generation of social media consumers, we’ve made significant changes with young adults in mind,” he wrote.

If any of this sounds familiar, it’s because Meta executives have been trying to win over “young adults” for years in an effort to better compete with TikTok. Mark Zuckerberg said almost three years ago that he wanted to make young adults the company’s “North Star.” And Alison and Zuckerberg have both been talking about the Facebook app’s pivot to a discovery-focused feed rather than one based on users’ connections.

That shift is now well underway. Alison said that the company’s AI advancements have already improved recommendations for Reels and feed, and that “advanced recommendations technology will power more products” over the next year. He added that private sharing among users is also on the rise, with more users sharing video (though no word on the once-rumored plan to bring messaging back into the main app).

Notably, Alison’s note makes no mention of the “metaverse,” which Zuckerberg also once saw as a central part of the company’s future. Instead, he says that “leaning into new product capabilities enabled by AI” is a significant goal, along with luring younger users. That’s also not surprising, given that Meta and Zuckerberg have recently tried to rebrand some of the company’s metaverse ambitions as AI advancements.

But it’s also not clear how successful Meta will be in its efforts to win over young adults. Though Alison says Facebook has seen “five quarters of healthy growth in young adult app usage in the US and Canada,” with 40 million young adult daily active users, that’s still a relatively small percentage of the 205 million daily US Facebook users the company reported in February, the last time it broke out user numbers for the app.

This article originally appeared on Engadget at https://www.engadget.com/meta-says-the-future-of-facebook-is-young-adults-again-203500866.html?src=rss

The Tribeca Film Festival will debut a bunch of short films made by AI

The Tribeca Film Festival will debut five short films made by AI, as detailed by The Hollywood Reporter. The shorts will use OpenAI’s Sora model, which transforms text prompts into video clips. This is the first time this type of technology will take center stage at the long-running film festival.

“Tribeca is rooted in the foundational belief that storytelling inspires change. Humans need stories to thrive and make sense of our wonderful and broken world,” said co-founder and CEO of Tribeca Enterprises Jane Rosenthal. Who better to chronicle our wonderful and broken world than some lines of code owned by a company that just dissolved its dedicated safety team to let CEO Sam Altman and other board members self-police everything?

The unnamed filmmakers were all given access to the Sora model, which isn’t yet available to the public, though they have to follow the terms of the agreements negotiated during the recent strikes as they pertain to AI. OpenAI’s COO, Brad Lightcap, says the feedback provided by these filmmakers will be used to “make Sora a better tool for all creatives.”

When we last covered Sora, it could only handle 60 seconds of video from a single prompt. If that’s still the case, these short films will make Quibi shows look like a Ken Burns documentary. The software also struggles with cause and effect and, well, that’s basically what a story is. However, all of these limitations come from the ancient days of February, and this tech tends to move quickly. Also, I assume there’s no rule against using prompts to create single scenes, which the filmmaker can string together to make a story.

We don’t have that long to find out if cold technology can accurately peer into our warm human hearts. The shorts will screen on June 15 and there’s a conversation with the various filmmakers immediately following the debut.

This follows a spate of agreements between OpenAI and various media companies. Vox Media, The Atlantic, News Corp, Dotdash Meredith and even Reddit have all struck deals with OpenAI to let the company train its models on their content. Meanwhile, Meta and Google are looking for similar partnerships with Hollywood film studios to train their models. It looks like we are going to get this “AI creates everything” future, whether we want it or not.

This article originally appeared on Engadget at https://www.engadget.com/the-tribeca-film-festival-will-debut-a-bunch-of-short-films-made-by-ai-181534064.html?src=rss

Marvel’s “What If…?” for Apple Vision Pro looks incredible, but plays terribly

The Watcher stood tall in my family room, bald and berobed, nestled amongst my kids' toys, sleeping cats and TV. I was being asked to help save the multiverse! So began Marvel and ILM Interactive's What If...? on the Apple Vision Pro. Like the Disney+ series and comics of the same name, this interactive experience recontextualizes Marvel's characters in a variety of intriguing ways — what if the Allies never won World War II and the Captain America experiment was a failure, for example.

What If...? has always been a fun concept, but can it actually be transformed into a worthwhile augmented reality showpiece? Well, yes and no — at least, based on the hour I spent with it on the Apple Vision Pro.

Before I dive into major criticisms, I'll say up front that What If...? is clearly an experiment, so rough edges are to be expected. I give Marvel and ILM Interactive credit for making it completely free for Vision Pro users and for taking a sizable swing at a platform without many users. The entire experience also looks wonderfully detailed, thanks to the combination of Marvel and ILM's immersive environments and character animation, as well as the sheer power of the Vision Pro's M2 processor. It's the closest you'll get to living inside of a comic.

Marvel has already dabbled in virtual reality with Iron Man for the PSVR and Quest, as well as Marvel Powers United VR, but What If...? is an attempt to accomplish something even more immersive: What if you could interact with superheroes right in your home? Mostly, though, I found myself asking "What if this experience was actually fun to play?"

Marvel's What If...? on the Apple Vision Pro
Marvel

You're placed in the role of a mystical apprentice, wielding powers similar to Doctor Strange. Initially you can hold up a fist to manifest a shield, or look towards objects to use telekinesis. But you eventually gain the ability to shoot mystical blasts and trap enemies. It all sounds incredibly cool in theory, but in practice it felt worse than the first-gen VR games I played a decade ago.

Mostly, that's because What If...? relies on your hands for everything. The Vision Pro doesn't have a dedicated VR controller like the Oculus Quest or HTC Vive, which offer instant button inputs and can be tracked through IR sensors. Instead, you have to wait a fraction of a second for Apple's headset to recognize your hands and determine what you're trying to do. Consequently, What If...? feels more like you're sitting through a Marvel theme park ride, moving from one scenario to the next without much active participation. It's a poor way to make you feel like a multiverse-hopping adventurer.

At the very least, What If...? shows off what Marvel could do if it focused more on the Vision Pro and whatever Oculus has cooking next. Like a campy 3D film, the game wastes no time trying to blow you away with its core gimmick. It kicks off with a remixed Marvel intro montage in 2D, floating in front of you in augmented reality. As Michael Giacchino's iconic score crescendoes, you're suddenly surrounded by clips of the series drifting in from outside your field of view. It's a brief moment, but it's the sort of thing that wouldn't be as impactful in a traditional VR headset, where you're immersed in an alternate reality from the start.

The experience truly begins with the aforementioned Watcher — one of Marvel's cosmic beings who observe its many universes — roping you in for an adventure. You know the drill: Find all of the Infinity Stones and stop whoever is trying to destroy all known creation. Kids' stuff. Along the way, you'll run into alternate-universe versions of familiar characters: Thor's sister Hela, who only wants to save her beloved giant wolf Fenris; a version of Steve Rogers who looks eerily like the Red Skull; and a more sympathetic Thanos. 

What If...? moves between virtual environments that fully immerse you in the action and augmented reality scenarios, where The Watcher and a few companions putter around your room. You can do the same, sometimes, but within the VR segments, the game expects you to stay still. You'll also have to click through Vision Pro pop-ups about being mindful of your surroundings before every VR scene — a necessary evil for people unfamiliar with VR, but also something that kills immersion since it's not integrated into the game.

Despite my issues with the gameplay, I ultimately had a decently entertaining hour with What If...? It was a quick Marvel fix at a time when I've grown tired of the onslaught of Disney+ MCU shows. I just can't help but wish it were more fun to play. I'm hoping this release helps Marvel and ILM Interactive get better at building AR and VR experiences. And for Apple, it's a clear sign that some sort of Vision Pro controller would be helpful down the line.

This article originally appeared on Engadget at https://www.engadget.com/marvels-what-if-for-apple-vision-pro-looks-incredible-but-plays-terribly-143028639.html?src=rss

The Morning After: Google tightens up its AI Overview feature after suggesting glue on a pizza

Liz Reid, head of Google Search, has admitted the company’s search engine has returned some “odd, inaccurate or unhelpful AI Overviews” after the feature rolled out to everyone in the US.

The executive’s explanation outlined some new safeguards to help the feature return more accurate (and less funny) results. Some of the worst AI Overview results doing the rounds were apparently faked, but the glue-on-pizza example was real, as was the viral answer about how many rocks you should be eating. Reid said Google came up with that answer because it had tapped into a satirical comedy site.

The issue for Google is this could erode trust in the search engine’s results and accuracy. Reid said the company tested the feature extensively before launch, but “there’s nothing quite like having millions of people using the feature with many novel searches.” Maybe it needed a little more testing first.

— Mat Smith

Silent Hill 2 remake hits PS5 and PC on October 8

OpenAI says it stopped multiple covert influence operations that abused its AI models

Until Dawn remaster is coming to PS5 and PC this fall

You can get these reports delivered daily direct to your inbox. Subscribe right here!

TMA
Firewalk Studios

It’s been a long time since we had a first-person shooter from a PlayStation studio. Finally, Firewalk Studios’ Concord has broken cover. Firewalk says it focused on tight movement, precise gunplay and a range of abilities — just as you might expect from a studio led by former Destiny developers. It’s a five vs. five hero shooter, suggesting comparisons to Overwatch 2 — now a Microsoft-owned title. Expect 16 heroes, six game modes and some cinematic scenes between all the fighting. It’s coming to PS5 and PC on August 23, with a beta in July.

Continue reading.

Apple’s Worldwide Developers Conference is right around the corner. Expect the company to reveal some of the main features of iOS 18 and iPadOS 18, as well as what’s ahead for the likes of watchOS, macOS and visionOS at WWDC 2024, along with all kinds of generative AI tricks — hopefully even some compelling ones. I’d appreciate more photo-fill features to match Google’s efforts on Android. It seems unlikely we’ll get any major hardware announcements at the event, but you never truly know until Tim Cook wraps things up — maybe we’ll get a next-gen Vision Pro VR headset.

Continue reading.

Meta is rolling out a new TweetDeck-like column view to all Threads users after it started testing the feature earlier this month. The new look, which some Threads users have nicknamed ThreadsDeck, allows you to pin up to 100 feeds to the Threads home page. Each column can also be set to auto-update. Yeah, it’s TweetDeck but Threads. And you can more easily hide Threads’ trashy for-you feed. At least, mine is particularly trashy. Perhaps I’m the problem.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-google-tightens-up-its-ai-overview-feature-after-suggesting-glue-on-a-pizza-111502061.html?src=rss

Google is putting more restrictions on AI Overviews after it told people to put glue on pizza

Liz Reid, the Head of Google Search, has admitted that the company's search engine has returned some "odd, inaccurate or unhelpful AI Overviews" after they rolled out to everyone in the US. The executive published an explanation for Google's more peculiar AI-generated responses in a blog post, where it also announced that the company has implemented safeguards that will help the new feature return more accurate and less meme-worthy results. 

Reid defended Google and pointed out that some of the more egregious AI Overview responses going around, such as claims that it's safe to leave dogs in cars, are fake. The viral screenshot showing the answer to "How many rocks should I eat?" is real, but she said that Google came up with an answer because a website published satirical content tackling the topic. "Prior to these screenshots going viral, practically no one asked Google that question," she explained, so the company's AI linked to that website.

The Google VP also confirmed that AI Overview told people to use glue to get cheese to stick to pizza based on content taken from a forum. She said forums typically provide "authentic, first-hand information," but they could also lead to "less-than-helpful advice." The executive didn't mention the other viral AI Overview answers going around, but as The Washington Post reports, the technology also told users that Barack Obama was Muslim and that people should drink plenty of urine to help them pass a kidney stone. 

Reid said the company tested the feature extensively before launch, but "there’s nothing quite like having millions of people using the feature with many novel searches." Google was apparently able to identify patterns where its AI didn't get things right by reviewing examples of its responses over the past couple of weeks. It then put protections in place based on those observations, starting by tweaking its AI to better detect humorous and satirical content. It also updated its systems to limit the use of user-generated replies in Overviews, such as social media and forum posts, which could give people misleading or even harmful advice. In addition, it has "added triggering restrictions for queries where AI Overviews were not proving to be as helpful" and has stopped showing AI-generated replies for certain health topics.

This article originally appeared on Engadget at https://www.engadget.com/google-is-putting-more-restrictions-on-ai-overviews-after-it-told-people-to-put-glue-on-pizza-011316780.html?src=rss

Sony’s Overwatch-esque Concord is coming to PS5 and PC on August 23

Save for Destiny 2, it's been a long time since there's been a first-person shooter from a PlayStation studio. As such, there's been quite a bit of interest in Firewalk Studios' Concord since it was announced a year ago. 

We learned more details about the sci-fi game during Sony's State of Play showcase, including a release date and the first look at gameplay. It's coming to PS5 and PC on August 23, and there will be a beta in July. (Heads up: You will need a PlayStation Network account to play on PC.) 

Firewalk says it focused on making sure Concord has tight movement, precise gunplay and a range of abilities — just as you might expect from a studio led by former Destiny developers. Given that it's a five vs. five hero shooter, there are plenty of similarities with Overwatch 2 as well (Microsoft now owns that game, for what it's worth).

As you might imagine, there are several roles to choose from, while each of the initial 16 characters, who are dubbed Freegunners, has unique abilities. Firewalk aimed to make Concord approachable for a wide range of players, no matter their preferred play style or skill level. There should be at least one or two Freegunners whose abilities you can get to grips with relatively quickly. 

You might throw exploding knives, deploy spores that grant speed boosts, drop a healing pad, cast a wall of fire or block an entire lane with a wall. Some of these abilities will persist on the map between rounds and respawns, and others are designed for more spontaneous use.

There will be six different game modes at launch. Firewalk plans to add more modes, Freegunners, maps and cinematic vignettes as free post-launch updates. 

There's a lot of competition in this genre and in the live-service market overall. But Sony already has a hit shooter this year in Helldivers 2. Concord has made a solid impression to date, so it has a fair chance of being successful too.

This article originally appeared on Engadget at https://www.engadget.com/sonys-overwatch-esque-concord-is-coming-to-ps5-and-pc-on-august-23-224046362.html?src=rss