Meta is testing cross-posts from Facebook to Threads

Despite quickly amassing more than 100 million users, Meta’s Threads hasn’t exactly broken into the zeitgeist the way its main rival, X/Twitter, did. It’s arguably still awaiting its plane-on-the-Hudson moment. Nevertheless, Meta is doing what it can to draw attention to the text-based platform and keep eyes on it, including by displaying popular threads on Facebook and Instagram.

Its latest test comes from a familiar playbook, too. The company is toying with letting users easily cross-post from Facebook to Threads, which could eventually let heavy Facebook users and content creators share their thoughts, videos and photos on Threads with little extra effort. As it stands, some users can share text and link posts from Facebook to Threads. There's no guarantee that Meta will deploy the feature long term or expand it to include images.

It makes sense for Meta to at least try this. Users have long been able to post stories and Reels to Facebook and Instagram simultaneously, so adding Threads to the mix is a logical step. Meta confirmed to TechCrunch that it's running the test, which is limited to iOS and isn't available in the EU. 

The opt-in approach is far more sensible than automatically sharing a user's Threads posts on Facebook, which Meta did for a while to boost awareness of the newer app. People often present different identities on Facebook and Instagram/Threads, even if the accounts are tied together. They might not want a highly political Threads post or a dirty joke showing up in their friends' and family's Facebook feeds. At least this way, they'll have the option to share a post on both platforms.

This article originally appeared on Engadget at https://www.engadget.com/meta-is-testing-cross-posts-from-facebook-to-threads-193038834.html?src=rss

The Borderlands movie trailer has all the nuance of a Borderlands game

There’s a Borderlands movie coming out, and now we have our very first teaser trailer. The footage gives us a glimpse of all of the major characters, most of whom are drawn from the games, and the tone director Eli Roth is going for.

There’s a definite Guardians of the Galaxy vibe running throughout. Maybe it’s the heavy use of an iconic Electric Light Orchestra song, or maybe it’s the ragtag group of adventurers and the mix of action and humor. In any event, Roth seems to be channeling his best James Gunn. All things considered, that seems like the right tone for a Borderlands movie. Color us cautiously optimistic.

Now onto the cast and the characters franchise fans know and love. Cate Blanchett plays the famously short-tempered Lilith, and the actress certainly looks the part. Just look at that hair and outfit. The film follows Lilith as she hunts for a mysterious vault rumored to be stuffed to the brim with sweet, sweet loot. It’s just like the game!

Jamie Lee Curtis plays the scientist Dr. Tannis, an NPC in all three of the mainline Borderlands games. Comedian Kevin Hart portrays the mercenary Roland, a playable soldier in many of the games. Jack Black, following his turn as Bowser in the Super Mario Bros. Movie, plays the robot Claptrap. The well-meaning robot is considered a mascot for the franchise and often acts as comic relief. Black seems well-suited to the role. The cast is rounded out by Ariana Greenblatt as the demolitionist Tiny Tina, star of the spinoff game Tiny Tina’s Wonderlands, and Florian Munteanu as her enforcer Krieg.

Of course, it remains to be seen whether Roth can pull off this kind of big-budget adventure spectacle, as the director is mostly known for horror films. One thing’s for certain, however: the trailer actually looks and feels like Borderlands. The big, bright color palette recalls the cel-shaded aesthetic of the games. The movie hits theaters on August 9.

This article originally appeared on Engadget at https://www.engadget.com/the-borderlands-movie-trailer-has-all-the-nuance-of-a-borderlands-game-181156113.html?src=rss

Elden Ring expansion ‘Shadow of the Erdtree’ arrives on June 21

Elden Ring fans who have been itching for a reason to return to the Lands Between (y'know, other than it being one of the best-received games in recent memory) will soon have one. Publisher Bandai Namco and studio FromSoftware revealed in a gameplay trailer that the Shadow of the Erdtree expansion will arrive on PC, PlayStation and Xbox on June 21. The studio announced the DLC one year ago.

The three-minute trailer is suitably epic in scope. It features new locales and more devious bosses to take down. One looks like a giant burning Wicker Man, while another has a horn-headed, lion-esque visage and lightning powers. We also got a look at a gross, worm-like enemy that can almost swallow the player character.

It seems there will be new hand-to-hand combat options as the protagonist is shown attacking an enemy with a combination of kicks. At the end of the clip, our hero sprouts wings for what appears to be an aerial attack.

According to Bandai Namco, players will be entering "the Land of Shadow to explore a new adventure full of mysteries and danger," where they'll "unravel the dark side of the Elden Ring story." Pre-orders for the $40 expansion are open and newcomers will be able to pick up a bundle that includes the base game. Various special editions include extras such as an artbook, soundtrack and a figure of Messmer the Impaler, who appears to be the expansion's big bad.

This article originally appeared on Engadget at https://www.engadget.com/elden-ring-expansion-shadow-of-the-erdtree-arrives-on-june-21-160953368.html?src=rss

The Morning After: Want some hybrid meat rice?

If the image itself isn’t unappetizing enough, the description might put you off. South Korean researchers have made a hybrid rice variant, infused with cow muscle and fat cells, creating a bright pink grain that is one part plant and one part meat. The team hopes to eventually create a cheaper and more sustainable source of protein, with a much lower carbon footprint than actual beef. But please: change the color.

Image: Yonsei University

The meat cells grow both on the surface of the rice grain and inside of the grain itself. After around ten days, you get the finished product. The study, published in Matter, suggests the rice grains taste like beef sushi, which is made of cow and rice. So yes, that tracks.

— Mat Smith

The biggest stories you might have missed

The best robot vacuums on a budget for 2024

Ayaneo's NES-inspired mini PC is more than a retro tribute

Marvel’s X-Men ‘97 will pick up from where the 90s animated series left off

You can get these reports delivered daily direct to your inbox. Subscribe right here!

Bose Ultra Open Earbuds review

Function meets fashion.

Image: Engadget

Bose’s $299 Ultra Open Earbuds sit outside of your ear canal and clip onto the ridge of your ear to stay in place. Due to the open design, active noise cancellation (ANC) is moot. Open-style earbuds have become increasingly popular, largely for the allure of “all day” wear that keeps you in tune with your surroundings, and Bose’s new model aims to fix the issues of its previous design. They seem more like a fashion accessory than a typical wearable, however.

Continue reading.

Xbox confirms four of its games are coming to more popular consoles

Not Starfield or Indiana Jones, however.

On the latest episode of the Official Xbox Podcast, Microsoft Gaming CEO Phil Spencer said the company is bringing four of its games to "the other consoles." Contrary to previous rumors, Starfield and Indiana Jones and the Great Circle are not coming to PS5 or Switch for now. Reports have suggested that Hi-Fi Rush, Sea of Thieves, Halo and Gears of War may appear on Nintendo and Sony hardware. Both of those consoles have a far larger install base than Xbox Series X/S, which are estimated to have shipped a combined 27 million units, compared with 54.8 million PS5s and nearly 140 million Switches.

Continue reading.

OpenAI’s new model can generate minute-long videos from text prompts

It’s still being tested before being offered to the public.

OpenAI on Thursday announced Sora, a brand-new model that generates high-definition videos up to one minute in length from text prompts. Sora, which means “sky” in Japanese, won’t be available to the general public any time soon. Instead, OpenAI is first offering it to a small group of academics and researchers who will assess the model’s potential for harm and misuse. The company said on its website: “The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.” Other companies, including Meta, Google and Runway, have either teased text-to-video tools or made them available to the public. Still, no other tool can generate videos as long as 60 seconds.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-want-some-hybrid-meat-rice-121549152.html?src=rss

Marvel’s X-Men ‘97 will pick up from where the 90s animated series left off

Disney+ has released the first trailer for its upcoming animated series X-Men '97, and it feels like a blast from the past for fans of the original show, which aired in the 90s. The story picks up where the old series left off, with the trailer showing the team trying to work together after the death of Professor X, who was gravely injured by the end of the Saturday morning cartoon. That means viewers can expect the same roster of mutants from the original show, including Cyclops as team leader, Wolverine, Jean Grey, Beast, Storm, Rogue, Gambit, Jubilee and Bishop. By the end of the trailer, we also get a glimpse of Magneto, who apparently inherited everything Professor X left behind.

X-Men: The Animated Series was arguably the best adaptation of the comics. The new show has a similar look and feel, but its animation quality thankfully looks a lot better. It features voice actors already known for their roles, including Alison Sealy-Smith as Storm and Cal Dodd as Wolverine, alongside newcomers like Ray Chase as Cyclops. According to Entertainment Weekly, Divergent star Theo James is also part of the cast, but showrunner Beau DeMayo wouldn't reveal who he's voicing beyond saying that it's a "fan-favorite character." Marvel Animation's X-Men '97 starts streaming on Disney+ on March 20 and will run 10 episodes in all. The streaming service has yet to reveal its release schedule, but it typically adds an episode a week for its shows; whether it'll also release an episode every Saturday morning remains to be seen.

This article originally appeared on Engadget at https://www.engadget.com/marvels-x-men-97-will-pick-up-from-where-the-90s-animated-series-left-off-082615903.html?src=rss

OpenAI’s new Sora model can generate minute-long videos from text prompts

OpenAI on Thursday announced Sora, a brand-new model that generates high-definition videos up to one minute in length from text prompts. Sora, which means “sky” in Japanese, won’t be available to the general public any time soon. Instead, OpenAI is making it available to a small group of academics and researchers who will assess the model’s potential for harm and misuse.

“Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” the company said on its website. “The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.”

One of the videos generated by Sora that OpenAI shared on its website shows a couple walking through a snowy Tokyo city as cherry blossom petals and snowflakes blow around them.

Another shows realistic-looking woolly mammoths walking through a snowy meadow against a backdrop of snow-clad mountain ranges.

OpenAI says the model’s “deep understanding of language” lets it interpret text prompts accurately. Still, like basically every AI image and video generator we’ve seen, Sora isn’t perfect. In one of the examples, a prompt asking for a video of a Dalmatian looking through a window and people “walking and cycling along the canal streets” yields a video that omits the people and the street activity entirely. OpenAI also warns that the model can struggle with cause and effect: it can generate a video of a person eating a cookie, for instance, but the cookie may not end up with bite marks.

Sora isn’t the first text-to-video model around. Other companies, including Meta, Google and Runway, have either teased text-to-video tools or made them available to the public. Still, no other tool is currently able to generate videos as long as 60 seconds. Sora also generates entire videos at once, rather than assembling them frame by frame like other models, which helps subjects in the video stay consistent even when they temporarily go out of view.

The rise of text-to-video tools has sparked concerns over their potential to more easily create realistic-looking fake footage. “I am absolutely terrified that this kind of thing will sway a narrowly contested election,” Oren Etzioni, a professor at the University of Washington who specializes in artificial intelligence, and the founder of True Media, an organization that works to identify disinformation in political campaigns, told The New York Times. And generative AI more broadly has sparked backlash from artists and creative professionals concerned about the technology being used to replace jobs.

OpenAI said it was working with experts in areas like misinformation, hateful content and bias to test the tool before making it available to the public. The company is also building tools capable of detecting videos generated by Sora, and it plans to include metadata in generated videos to make them easier to identify. The company declined to tell the Times how Sora had been trained, beyond saying that it used both “publicly available videos” and videos licensed from copyright holders.

This article originally appeared on Engadget at https://www.engadget.com/openais-new-sora-model-can-generate-minute-long-videos-from-text-prompts-195717694.html?src=rss

YouTube Shorts now lets you chop up and remix music videos

YouTube just released a new feature that lets users remix music videos and turn them into Shorts. It lets you sample different elements of a full-length music video to create something wholly unique. Does this sound like TikTok? It definitely sounds like TikTok.

Here’s how it works. Tap “remix” on a music video and you’ll be presented with four options: Sound, Green Screen, Cut and Collab. You can only pick one, so choose wisely. The Sound tool does what you’d think: it strips the audio and lets you use it in your own YouTube Short. This is the kind of thing that’s hugely popular on TikTok, with many users lip-syncing to various audio clips. The Sound tool works with any music video and most songs that were automatically uploaded to the platform.

Green Screen takes things a step further. It turns the video into a background, which you can then dance in front of or whatever. The Cut tool clips out a five-second portion of the video that you can add to any Short. Finally, Collab creates a side-by-side video that places your Short next to the original content. YouTube says this is the perfect option when “you and your friends” want to show off choreography alongside the original artist.

The feature’s already available on the mobile app, though it may not have rolled out to every user yet. If you want to check, just open the app, tap on a music video and look for that “remix” option. It’s worth noting that many of these features were already available to Shorts creators, just not in one handy tab.

Image: a still from a Dr. Dre video (YouTube/Lawrence Bonk)

YouTube Shorts was already a TikTok-alike when it launched back in 2021, but these features make it even more, uh, TikTok-ier. With that in mind, YouTube picked the perfect time to officially launch the toolset: Universal Music has pulled its roster from TikTok after a breakdown in financial negotiations. UMG artists include Taylor Swift, Drake, Billie Eilish and many more.

This has forced TikTok creators to swap out music tracks, as anything sourced from Universal is automatically muted. The record label has accused TikTok of wanting to pay a “fraction” of the rates offered by other social media sites. YouTube’s remix tool, meanwhile, has access to Universal’s entire roster.

This article originally appeared on Engadget at https://www.engadget.com/youtube-shorts-now-lets-you-chop-up-and-remix-music-videos-180655627.html?src=rss