Xbox confirms four of its games are coming to more popular consoles

Times are a-changing at Xbox. The brand's leaders have confirmed plans to bring more Xbox games to other platforms, which almost certainly means PlayStation 5 and Nintendo Switch. Both of those consoles have a far larger install base than Xbox Series X/S, which are estimated to have shipped a combined 27 million units, compared with 54.8 million PS5s and nearly 140 million Switches.

On the latest edition of the Official Xbox Podcast, Microsoft Gaming CEO Phil Spencer said his team is bringing four of its games to "the other consoles." He didn't name the titles, but contrary to previous rumors, Starfield and Indiana Jones and the Great Circle are not coming to PS5 or Switch for now. Reports have suggested that Hi-Fi Rush, Sea of Thieves, Halo and Gears of War would be among those crossing the great divide.

Spencer did confirm that the Xbox games that are coming to PlayStation and Switch have been on Xbox and PC for at least a year already. "A couple of the games are community-driven games, new games, kind of first iterations of a franchise that have reached their full potential, let's say, on Xbox and PC — there's always growth, franchises that we obviously want to continue to invest in," he said. 

"Two of the other games are smaller games that were never really meant to be built as kind of platform exclusives and all the fanfare that goes around that, but games that our teams really wanted to go build that we love supporting creative endeavors across our studios regardless of size," Spencer added. "And as they've realized their full potential on Xbox and PC, we see an opportunity to utilize the other platforms as a place to just drive more business value out of those games, allowing us to invest in maybe future iterations of those, so equals to those or just other games like that in our portfolio."

Spencer said Xbox isn't going to commit to porting other titles to more platforms beyond those four games just yet. He urged folks who play games on "those other platforms" not to assume every Xbox game will come to their systems, but suggested that his team will take notes on the impact of the initial four games and go from there.

That said, this doesn't mark a major change in strategy, Spencer argued. Xbox's philosophy has long been about helping players access its games from anywhere, including through the cloud, and tiptoeing onto other consoles is just a part of that.

"By bringing these games to more players, we not only expand the reach and impact of those titles, but this will allow us to invest in either future versions of these games, or elsewhere in our first-party portfolio," an Xbox Wire blog post reads. "There is no fundamental change to our approach on exclusivity."

President of Game Content and Studios Matt Booty noted on the podcast that Xbox will continue to release its first-party games on Game Pass on their release date, and that "Game Pass will only be available on Xbox." Still, Booty acknowledged that Microsoft wants to bring more of its games to more players.

Meanwhile, Xbox President Sarah Bond assured fans that Microsoft isn't looking to get out of the console hardware business. In fact, the team has "some exciting stuff coming out in hardware that we're going to share this holiday." Previous leaks indicated that Microsoft was building an all-digital version of the Xbox Series X that has improved Wi-Fi connectivity and more power efficiency.

Microsoft is also looking ahead to the next Xbox. "We're also invested in the next generation roadmap," Bond added. "And what we're really focused on there is delivering the largest technical leap you will have ever seen in a hardware generation, which makes it better for players and better for creators and the visions that they're building." A leak last September indicated that the next Xbox is slated to arrive in 2028 and that it will support "cloud-hybrid games."

Microsoft's gaming division looks vastly different from how it did a few months ago. The company finally completed its protracted $68.7 billion takeover of Activision Blizzard in October, significantly swelling its headcount in the process. In January, Microsoft said it was laying off 1,900 people from its gaming teams. It also canceled at least one game, a survival title that Blizzard was working on.

Even though the Activision acquisition immediately and significantly improved the bottom line of Microsoft's gaming division, the company is looking to make that part of the business more profitable. Reducing headcount is one way of doing that. Selling games to new audiences on other platforms is an arguably healthier approach, even though it risks turning some longtime Xbox loyalists away from the brand.

OpenAI’s new Sora model can generate minute-long videos from text prompts

OpenAI on Thursday announced Sora, a brand-new model that generates high-definition videos up to one minute long from text prompts. Sora, which means “sky” in Japanese, won’t be available to the general public any time soon. Instead, OpenAI is making it available to a small group of academics and researchers who will assess the model’s potential for harm and misuse.

“Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” the company said on its website. “The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.”

One of the videos generated by Sora that OpenAI shared on its website shows a couple walking through a snowy Tokyo streetscape as cherry blossom petals and snowflakes blow around them.

Another shows realistic-looking woolly mammoths walking through a snowy meadow against a backdrop of snow-clad mountain ranges.

OpenAI says the model’s abilities stem from a “deep understanding of language,” which lets it interpret text prompts accurately. Still, like basically every AI image and video generator we’ve seen, Sora isn’t perfect. In one example, a prompt asking for a video of a Dalmatian looking through a window and people “walking and cycling along the canal streets” produced a video that omits the people and the streets entirely. OpenAI also warns that the model can struggle with cause and effect: it can generate a video of a person eating a cookie, for instance, but the cookie may not have bite marks.

Sora isn’t the first text-to-video model around. Other companies, including Meta, Google and Runway, have either teased text-to-video tools or made them available to the public. Still, no other tool currently generates videos as long as 60 seconds. Sora also generates entire videos at once, rather than assembling them frame by frame like other models, which helps keep subjects consistent even when they temporarily leave the frame.

The rise of text-to-video tools has sparked concerns over their potential to more easily create realistic-looking fake footage. “I am absolutely terrified that this kind of thing will sway a narrowly contested election,” Oren Etzioni, a professor at the University of Washington who specializes in artificial intelligence, and the founder of True Media, an organization that works to identify disinformation in political campaigns, told The New York Times. And generative AI more broadly has sparked backlash from artists and creative professionals concerned about the technology being used to replace jobs.

OpenAI said it was working with experts in areas like misinformation, hateful content and bias to test the tool before making it available to the public. The company is also building tools capable of detecting videos generated by Sora, and it plans to include metadata in generated videos to make detection easier. The company declined to tell the Times how Sora had been trained, except to say that it used both “publicly available videos” and videos licensed from copyright holders.

Apple Vision Pro now has a native TikTok app

The Apple Vision Pro is officially two weeks old, and the apps are starting to roll in. TikTok was conspicuously absent on launch day, but now our long national nightmare has come to an end. The Vision Pro has a native TikTok app.

This isn’t just the iPad app with a new coat of paint. There are some neat features here that take advantage of Apple’s well-regarded and prohibitively expensive headset. The navigation bar and like button have been moved entirely off-screen, giving users an uninterrupted view of video content.

[Image: A video on the app showing a man leaning back. Credit: TikTok]

This extends to comment sections and creator profiles, both of which now appear as expansions alongside the feed, which TikTok says provides “a more immersive content viewing experience.” To that end, TikTok integrates with the headset’s immersive environments, so people can watch short-form videos on the moon or surrounded by the lush flora of Yosemite.

TikTok also works with the Vision Pro’s Shared Space feature, allowing the app to sit somewhere in your peripheral vision as you work on other stuff. The app’s location remains static, so it’ll be in the same place every time you put on the headset (provided you’re in the same room).

You may notice that these features are primarily intended for content consumers, and not creators. Engadget reached out to TikTok to ask about creator-specific features and we’ll update this post when we hear back.

The app’s available for download right now, though it likely won’t be accessible for TikTok’s core user base of 10- to 19-year-olds. The Apple Vision Pro costs $3,500. That’s like an entire childhood of allowances.

Google’s Gemini 1.5 Pro is a new, more efficient AI model

On Thursday, Google unveiled Gemini 1.5 Pro, which the company describes as delivering “dramatically enhanced performance” over the previous model. The announcement follows last week’s unveiling of Gemini 1.0 Ultra and the rebranding of the Bard chatbot as Gemini to align with the new model’s more powerful and versatile capabilities; internally, the company views its AI trajectory as increasingly critical to its future.

In an announcement blog post, Google CEO Sundar Pichai and Google DeepMind CEO Demis Hassabis try to balance reassurances about AI safety with boasts about their models’ rapidly advancing capabilities. “Our teams continue pushing the frontiers of our latest models with safety at the core,” Pichai summarized.

The company needs to emphasize safety for AI skeptics (including one former Google CEO) and government regulators. But it also needs to stress its models’ accelerating performance for AI developers, potential customers and investors concerned the company was too slow to react to OpenAI’s breakout success with ChatGPT.

Pichai and Hassabis say Gemini 1.5 Pro delivers comparable results to Gemini 1.0 Ultra. However, Gemini 1.5 performs at that level more efficiently, with reduced computational requirements. The multimodal capabilities include processing text, images, videos, audio or code. As AI models advance, they’ll continue to offer a more versatile array of capabilities in one prompt box (another recent example was OpenAI integrating DALL-E 3 image generation into ChatGPT).

[Photo: Google CEO Sundar Pichai at the inauguration of a Google AI hub in Paris on February 15, 2024. Credit: ALAIN JOCARD/AFP via Getty Images]

Gemini 1.5 Pro can also handle up to one million tokens, the units of data an AI model processes in a single request. Google says Gemini 1.5 Pro can process over 700,000 words, an hour of video, 11 hours of audio and codebases with over 30,000 lines of code. The company says it’s even “successfully tested” a version that supports up to 10 million tokens.
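
To put those limits in concrete terms, here’s a minimal sketch of a long-context request using Google’s google-generativeai Python SDK. The model name, the input file and the question are assumptions for illustration; access to 1.5 Pro was limited to early testers at the time of writing.

    # A rough sketch: count tokens before sending a huge prompt.
    # Assumptions: google-generativeai is installed, you have access,
    # and "gemini-1.5-pro-latest" is the model id (an illustrative guess).
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-pro-latest")

    # Hypothetical long document, e.g. the Apollo 11 transcripts mentioned below.
    with open("apollo11_transcripts.txt") as f:
        prompt = f.read() + "\n\nList three lighthearted moments in this transcript."

    # count_tokens verifies the request fits the window before a full call.
    total = model.count_tokens(prompt).total_tokens
    print(f"Prompt is {total:,} tokens")

    if total <= 1_000_000:  # the one-million-token window Google cites
        print(model.generate_content(prompt).text)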

The company says Gemini 1.5 Pro maintains high accuracy even as queries grow to much larger token counts. It says the model impressed in the Needle In a Haystack evaluation, a test in which developers insert a small piece of information inside a long block of text to see if the AI model can pick it out. Google said Gemini 1.5 Pro found the embedded text 99 percent of the time in data blocks as long as one million tokens.
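
The test is simple to reproduce in a model-agnostic way. Here’s a minimal sketch of the setup; ask_model is a placeholder for whatever long-context API is under test, not a real function or part of any benchmark suite.

    import random

    def build_haystack(filler: str, needle: str, n_fillers: int = 50_000) -> str:
        """Bury one 'needle' sentence at a random depth in repeated filler text."""
        sentences = [filler] * n_fillers
        sentences.insert(random.randrange(n_fillers), needle)
        return " ".join(sentences)

    def needle_recalled(answer: str, expected: str) -> bool:
        """Naive pass/fail: did the model's answer reproduce the planted fact?"""
        return expected.lower() in answer.lower()

    needle = "The secret ingredient in the recipe is cardamom."
    haystack = build_haystack("The sky was clear over the harbor that morning.", needle)
    prompt = haystack + "\n\nWhat is the secret ingredient in the recipe?"

    # answer = ask_model(prompt)  # placeholder for any long-context LLM call
    # print(needle_recalled(answer, "cardamom"))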

Google says Gemini 1.5 Pro can reason about various details from the 402-page Apollo 11 moon mission transcripts. In addition, it can analyze plot points and events from an uploaded 44-minute silent film starring Buster Keaton. “As 1.5 Pro’s long context window is the first of its kind among large-scale models, we’re continuously developing new evaluations and benchmarks for testing its novel capabilities,” Hassabis wrote.

Google is launching Gemini 1.5 Pro with 128,000-token capabilities, the same number at which OpenAI’s (publicly announced) GPT-4 models max out. Hassabis says Google will eventually introduce new pricing tiers that support up to one million-token queries.

[Photo: Google DeepMind CEO Demis Hassabis at the WSJ Future of Everything Festival in New York on May 2, 2023. Credit: Joy Malone via Getty Images]

Gemini 1.5 Pro is also adept at learning new skills from information in long prompts without additional fine-tuning, a capability known as in-context learning. In a benchmark called Machine Translation from One Book, the model was given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide that it hadn’t previously been trained on. The company says Gemini 1.5 Pro then translated English to Kalamang at a level similar to a person learning from the same content.
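
Mechanically, in-context learning means the reference material travels inside the prompt itself rather than being baked in during training. Here’s a rough sketch of how such a request might be assembled; the structure and file name are illustrative, not Google’s actual benchmark harness.

    def build_translation_prompt(grammar_manual: str, sentence: str) -> str:
        """Assemble one long-context prompt: reference material first, task last.
        Everything the model 'learns' lives in the prompt; nothing is fine-tuned."""
        return (
            "The following is a grammar manual for the Kalamang language.\n\n"
            f"{grammar_manual}\n\n"
            "Using only the material above, translate this English sentence "
            f"into Kalamang:\n{sentence}\n"
        )

    # Hypothetical usage with an illustrative file name.
    with open("kalamang_grammar.txt") as f:
        prompt = build_translation_prompt(f.read(), "The children are swimming.")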

In a piece of the announcement that will catch developers’ attention, Google says Gemini 1.5 Pro can perform problem-solving tasks across longer code blocks. “When given a prompt with more than 100,000 lines of code, it can better reason across examples, suggest helpful modifications and give explanations about how different parts of the code works,” Hassabis wrote.

On the ethics and safety front, Google says it’s taking “the same approach to responsible deployment” it took with the Gemini 1.0 models. That includes developing and applying red-teaming techniques, in which a group of testers essentially plays devil’s advocate, probing for “a range of potential harms.” In addition, the company says it heavily scrutinizes areas like content safety and representational harms, and that it continues to develop new ethical and safety tests for its AI tools.

Google is launching Gemini 1.5 in early access for developers and enterprise customers, with plans to make it more widely available eventually. Gemini 1.0 is currently available to consumers, alongside the Gemini Advanced tier (powered by 1.0 Ultra), which costs $20 monthly.

YouTube Shorts now lets you chop up and remix music videos

YouTube just released a new feature that lets users remix music videos and turn them into Shorts. It lets you pull various elements from a full-length music video to create something wholly unique. Does this sound like TikTok? It definitely sounds like TikTok.

Here’s how it works. Just tap “remix” on a music video. You’ll be presented with four options: Sound, Green Screen, Cut and Collab. You can only pick one, so choose wisely. The Sound tool does what you’d expect: it strips out the audio and lets you use it in your own YouTube Short. This is the kind of thing that’s hugely popular on TikTok, with many users lip-syncing to various audio clips. The Sound tool works with any music video and most songs that were automatically uploaded to the platform.

Green Screen takes things a step further. It turns the video into a background, which you can then dance in front of or whatever. The Cut tool simply clips out a five-second portion of the video that you can add to any Short. Finally, Collab creates a side-by-side video that places your Short next to the original content. YouTube says this is the perfect option when “you and your friends” want to show off choreography alongside the original artist.

The feature’s already available on the mobile app, though it may not have rolled out to every user yet. If you want to check, just open the app, tap on a music video and look for that “remix” option. It’s worth noting that many of these features were already available to Shorts creators, just not in one handy tab.

[Image: A still from a Dr. Dre video. Credit: YouTube/Lawrence Bonk]

YouTube Shorts was already a TikTok-alike when it launched back in 2021, but these features make it even more, uh, TikTok-ier. With that in mind, YouTube picked the perfect time to officially launch the toolset. Universal Music has pulled its roster from TikTok after a breakdown in financial negotiations. UMG artists include Taylor Swift, Drake, Billie Eilish and many more.

This has forced TikTok creators to swap out music tracks, as anything sourced from Universal is automatically muted. The record label has accused TikTok of wanting to pay a “fraction” of rates offered by other social media sites. YouTube’s Remix tool has access to Universal’s entire roster.

The Morning After: Mark Zuckerberg thinks the Quest 3 is much better than the Vision Pro

Meta chief Mark Zuckerberg has posted his own review of Apple's Vision Pro on Instagram, inexplicably coming for our jobs here at Engadget.

In a video shot directly on a Meta Quest 3 (oh, of course), Zuckerberg didn't mince words. He said he expected the Quest to be the better value for most people, because it's "like seven times less expensive" than the $3,500 Vision Pro. Eventually, he concluded that the Quest 3 was "the better product, period."

Zuckerberg thinks the Quest is "a lot more comfortable," noting that the headset has a wider field of view and a brighter display than the Vision Pro. He added that the Quest has a bigger library: unlike the Vision Pro, it has access to the YouTube and Xbox apps. And that's definitely a fair criticism.

All in all, two out of five Zucks. Don't forget to like and subscribe.

— Mat Smith

The biggest stories you might have missed

X let terrorist groups pay for verification, report says

Amazon knocks $100 off the Apple AirPods Max

An earnings typo sent Lyft's stock price into the stratosphere

Mario vs. Donkey Kong is an odd, eye-catching ode to simpler times

You can get these reports delivered daily direct to your inbox. Subscribe right here!

A piracy app outranked Netflix on the App Store before Apple pulled it

Kimi gave viewers access to pirated shows and movies.

An app called Kimi curiously outranked well-known streaming services like Netflix and Prime Video in the App Store's list of top free entertainment apps this week. Now, Apple has pulled it, probably because it gave users access to pirated movies.

Kimi was disguised as an app that tests your eyesight by making you play ‘spot the difference’ between similar photos. In reality, it was packed with bootlegged shows and movies. Anyone who remembers the heyday of pirated movies on slow internet connections got to relive the variable video quality of yesteryear.

Continue reading.

Walmart might buy budget TV maker Vizio

This could make the retail giant a formidable rival to Amazon and Roku.

Walmart might buy budget TV maker Vizio. The rumored $2 billion deal would make Vizio a house brand for the retailer and would let the company compete directly in the affordable smart TV space currently dominated by Amazon and Roku. Vizio has been eyeing buyers for years: Chinese media conglomerate LeEco nearly purchased it back in 2016 in another $2 billion deal, but that sale fell through. If the purchase happens, Walmart would also gain access to all of that sweet, sweet customer data collected by Vizio’s smart TV platform.

Continue reading.

Can geoengineering stop the ice caps from melting?

We’re not ready for what’s coming.

Since 1979, Arctic ice has shrunk by 1.35 million square miles, and Antarctic ice is now at its lowest level since records began. Frozen Arctic, a report produced by the universities of the Arctic and Lapland alongside UN-backed think tank GRID-Arendal, collates 60 geoengineering projects that could slow down or reverse polar melting. A team of researchers examined every idea, from those already in place to the ones at the fringes of science. Daniel Cooper breaks down some of the possible solutions.

Continue reading.
