Microsoft may have finally made quantum computing useful

The dream of quantum computing has always been exciting: What if we could build a machine working at the quantum level that could tackle complex calculations exponentially faster than a computer limited by classical physics? But despite iterative hardware announcements from IBM, Google and others, quantum computers still aren't being used for any practical purposes. That might change with today's announcement from Microsoft and Quantinuum, which say they've developed the most error-free quantum computing system yet.

While classical computers and electronics rely on binary bits as their basic unit of information (each is either on or off), quantum computers work with qubits, which can exist in a superposition of two states at the same time. The trouble with qubits is that they're prone to error, which is the main reason today's quantum computers (known as Noisy Intermediate-Scale Quantum, or NISQ, computers) are used only for research and experimentation.

Microsoft's solution was to group physical qubits into virtual qubits, which allows it to apply error diagnostics and correction without destroying them, all running on Quantinuum's hardware. The result was an error rate 800 times better than that of the underlying physical qubits alone. Microsoft claims it was able to run more than 14,000 experiments without a single error.

According to Jason Zander, EVP of Microsoft's Strategic Missions and Technologies division, this achievement could finally bring us to "Level 2 Resilient" quantum computing, which would be reliable enough for practical applications.

"The task at hand for the entire quantum ecosystem is to increase the fidelity of qubits and enable fault-tolerant quantum computing so that we can use a quantum machine to unlock solutions to previously intractable problems," Zander wrote in a blog post today. "In short, we need to transition to reliable logical qubits — created by combining multiple physical qubits together into logical ones to protect against noise and sustain a long (i.e., resilient) computation."

Microsoft's announcement is a "strong result," according to Aram Harrow, a professor of physics at MIT focusing on quantum information and computing. "The Quantinuum system has impressive error rates and control, so it was plausible that they could do an experiment like this, but it's encouraging to see that it worked," he said in an e-mail to Engadget. "Hopefully they'll be able to keep maintaining or even improving the error rate as they scale up."

Researchers will be able to get a taste of Microsoft's reliable quantum computing in the next few months via Azure Quantum Elements, where it will be available as a private preview. The goal is to push even further to Level 3 quantum supercomputing, which will theoretically be able to tackle incredibly complex issues like climate change and exotic drug research. It's unclear how long it'll take to actually reach that point, but for now, at least, we're moving one step closer to practical quantum computing.

"Getting to a large-scale fault-tolerant quantum computer is still going to be a long road," Professor Harrow wrote. "This is an important step for this hardware platform. Along with the progress on neutral atoms, it means that the cold atom platforms are doing very well relative to their superconducting qubit competitors."

This article originally appeared on Engadget at https://www.engadget.com/microsoft-may-have-finally-made-quantum-computing-useful-164501302.html?src=rss

Facebook finally adds video controls like a slide bar

The craze around Facebook Live might be a thing of the past, but Meta is still trying to make the platform video-friendly. The company has announced a new video player for uniformly displaying Reels, longer content and Live videos on the Facebook app. 

One of the biggest shifts is that all of Facebook's videos will now appear full-screen — even landscape-oriented ones. Videos will automatically play vertically, but you can now turn your phone on its side to watch most horizontal content across your entire screen.

Like TikTok, Facebook will now offer a slider at the bottom of the screen, letting you quickly scrub through a video. The update also brings some of the same features streamers like Netflix offer in their apps, such as the option to jump forward or backward by 10 seconds. Meta claims that you will now get "more relevant video recommendations" of all lengths on the video tab and in your feed. The company will also be increasing the number of Reels shown on Facebook.

The video player is rolling out now to Android and iOS users in the United States and Canada, with the new controls launching in the next few weeks. The entire update should be available globally in the coming months.

This article originally appeared on Engadget at https://www.engadget.com/facebook-finally-adds-video-controls-like-a-slide-bar-163014443.html?src=rss

Our favorite cheap smartphone is on sale for $250 right now

You don't need to shell out a four-figure sum to find a great smartphone. In fact, you don't even need to spend half of that to snap up one that covers all of the basics and then some. At its regular price of $300, the OnePlus Nord N30 5G was already our pick for the best cheap phone around. It's currently on sale for $250 ($50 off), which makes it an even better deal. That's close to a record-low price. The discount is part of a broader sale on OnePlus phones and earbuds.

The OnePlus Nord N30 5G offers great value however you slice it. The phone has a relatively zippy Snapdragon 695 5G processor, along with 8GB of RAM and 128GB of storage, which is expandable with a microSD card.

You'll get a 16MP front-facing camera and, on the rear, 108MP main and 2MP macro lenses. The 5,000mAh battery should last you a day of moderate use, while OnePlus says the 50W fast charging support will top it up from a one-percent charge to 80 percent in 30 minutes. The OnePlus Nord N30 5G also has a 6.7-inch, 120Hz IPS display that's great for gaming.
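Those charging figures hang together, as a quick back-of-the-envelope check shows (the 3.85V nominal cell voltage below is a typical Li-ion figure, assumed here because OnePlus doesn't publish it):

```python
capacity_ah = 5.0                 # 5,000mAh battery
charged_fraction = 0.80 - 0.01    # one percent to 80 percent
hours = 0.5                       # 30 minutes
nominal_v = 3.85                  # assumed typical Li-ion nominal voltage

avg_current_a = capacity_ah * charged_fraction / hours  # 7.9 A
avg_power_w = avg_current_a * nominal_v                 # about 30 W

# An average draw of roughly 30W sits comfortably within the 50W peak
# rating, since charging power tapers off as the battery fills.
```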

On the downside, there's no IP rating for dust or water resistance. And while the handset runs on Oxygen OS 13.1 (which is based on Android 13), OnePlus has only committed to bringing one major Android update to the N30, along with three years of security support. That's a pity for those looking for something that'll stay up to date for a few years without breaking the bank, but that level of Android support is typical for budget phones.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/our-favorite-cheap-smartphone-is-on-sale-for-250-right-now-161336458.html?src=rss

Stability AI’s audio generator can now crank out 3 minute ‘songs’

Stability AI just unveiled Stable Audio 2.0, an upgraded version of its music-generation platform. The system lets users create up to three minutes of audio from a text prompt. That’s around the length of an actual song, and it’ll also whip up an intro, a full chord progression and an outro.

First, the good news. Three minutes is huge. The previous version of the software maxed out at 90 seconds. Just imagine the fake birthday song you could make in the style of that one Rob Thomas/Santana track. Another boon? The tool is free and publicly available through the company’s website, so have at it.

It primarily works via text prompt, but there’s an option to upload an audio clip. The system will analyze the clip and produce something similar. All uploaded audio must be copyright-free, so this isn’t for mimicking something that already exists. Rather, it could be useful for, say, humming a drum part or extending a 20-second clip into something longer.

Now, the bad news. This is still AI-generated music. It’s cool as a conversation piece and as an emblem of a possible future that’s great for tinkerers and bad for musicians, but that’s about it. The songs can actually sound nifty, at first, until the seams start showing. Then things get a bit creepy.

For instance, the system loves adding vocals, but not in any known human language. I guess it’s whatever language makes up the text in AI-generated images. The vocals sometimes sound like actual people, and other times like Gregorian chanters filtered through outer space. It’s smack dab in the middle of the uncanny valley. The Verge called the vocals “soulless and weird,” comparing them to whale sounds. That tracks.

Stable Audio 2.0 makes the same weird little mistakes that all of these systems make, no matter the output type. Parts can vanish into thin air, replaced with something else. Sometimes melodic elements will double out of nowhere, like an audio version of those extra fingers in AI-generated images.

There’s also the, well, boring-ness of it all. This is music in name only. Without a human connection, what’s the point? I listen to music to get inside the head of another person or group of people. There’s no head to get inside of here, despite constant proclamations that artificial general intelligence (AGI) is only months away.

So, this tech is an absolute gift for those making silly birthday videos or bank hold music. For everyone else? Shrug. One thing I can say from personal experience: It’s pretty fast. The system concocted an absolutely terrifying big band song about my cat in around a minute. 

This article originally appeared on Engadget at https://www.engadget.com/stability-ais-audio-generator-can-now-crank-out-3-minute-songs-160620135.html?src=rss

OnePlus rolls out its own version of Google’s Magic Eraser

OnePlus is the latest company to hop on the AI train. The phone manufacturer is rolling out a new photo editing tool called AI Eraser, which lets users remove extraneous objects from their photos. The new feature will be available on a range of OnePlus smartphones, including the OnePlus 12 and 12R, OnePlus 11 and OnePlus Open.

To use the OnePlus AI Eraser, a person first has to highlight the parts of the image that need removing. These could be random people or a dirty trash can, but they can also be "imperfections" in the photo. Then, AI analyzes that area and creates a background that OnePlus claims will blend into the existing image. If it sounds familiar, it works basically the same as Adobe's Generative Fill and Google's Magic Eraser tools.  
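OnePlus hasn't documented how its model works, but the mask-then-fill workflow described above can be illustrated with a deliberately naive, non-AI stand-in: diffusion inpainting, where masked pixels are filled by averaging in their surroundings.

```python
def inpaint(image, mask, iterations=200):
    """Toy diffusion inpainting: masked pixels repeatedly take the average
    of their neighbors, so the surrounding background bleeds into the hole.
    (A stand-in for the generative fill OnePlus actually uses.)"""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    neighbors = [img[ny][nx]
                                 for ny, nx in ((y - 1, x), (y + 1, x),
                                                (y, x - 1), (y, x + 1))
                                 if 0 <= ny < h and 0 <= nx < w]
                    img[y][x] = sum(neighbors) / len(neighbors)
    return img

# A flat gray background with one bright "object" pixel marked for removal:
image = [[100] * 5 for _ in range(5)]
image[2][2] = 255
mask = [[0] * 5 for _ in range(5)]
mask[2][2] = 1
result = inpaint(image, mask)   # the masked pixel converges to ~100
```

Generative models go further, synthesizing plausible texture rather than just smearing neighboring pixels, but the user-facing workflow is the same: select a region, and the algorithm fills it to blend with the rest of the image.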

However, this is a new venture for OnePlus, which uses its proprietary LLM to power the AI Eraser. "As OnePlus' first feature based on generative AI technology, AI Eraser represents the first step in our vision to liberate user creativity through AI and revolutionize the future of photo editing, empowering users to create remarkable photos with just a few touches," Kinder Liu, president and COO of OnePlus, said in a statement. "This year, we plan to introduce more AI features, and we look forward to their upcoming availability."

This article originally appeared on Engadget at https://www.engadget.com/oneplus-rolls-out-its-own-version-of-googles-magic-eraser-151731265.html?src=rss
