Microsoft may have finally made quantum computing useful

The dream of quantum computing has always been exciting: What if we could build a machine working at the quantum level that could tackle complex calculations exponentially faster than a computer limited by classical physics? But despite iterative quantum computing hardware announcements from IBM, Google and others, these machines still aren't being used for any practical purposes. That might change with today's announcement from Microsoft and Quantinuum, who say they've developed the most error-free quantum computing system yet.

While classical computers and electronics rely on binary bits as their basic unit of information (they can be either on or off), quantum computers work with qubits, which can exist in a superposition of both states at once. The trouble with qubits is that they're prone to error, which is the main reason today's quantum computers (known as Noisy Intermediate-Scale Quantum, or NISQ, computers) are used only for research and experimentation.

Microsoft's solution was to group physical qubits into virtual qubits, which allows it to apply error diagnostics and correction without destroying the encoded information, and to run it all on Quantinuum's hardware. The result was a logical error rate 800 times lower than that of the underlying physical qubits alone. Microsoft claims it was able to run more than 14,000 experiments without a single error.
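To get an intuition for why bundling many noisy physical units into one protected logical unit helps, here is a minimal, purely classical sketch in Python. It is not Microsoft's qubit-virtualization scheme (real quantum error correction cannot simply copy qubit states and must diagnose errors without reading out the encoded data); it only illustrates how redundancy plus majority-vote decoding can turn a per-unit error rate of a few percent into a far smaller logical error rate.

# Toy classical analogy for logical qubits: encode one bit redundantly across
# several noisy "physical" bits, then correct by majority vote. This is NOT
# the Microsoft/Quantinuum scheme; it only shows why grouping physical units
# into a logical one can drive the effective error rate down sharply.
import random

def noisy_copy(bit, error_rate):
    """Flip the bit with probability error_rate, like a noisy physical unit."""
    return bit ^ 1 if random.random() < error_rate else bit

def logical_readout(bit, n_physical=7, error_rate=0.05):
    """Encode one logical bit across n_physical noisy copies; decode by majority vote."""
    copies = [noisy_copy(bit, error_rate) for _ in range(n_physical)]
    return 1 if sum(copies) > n_physical / 2 else 0

trials = 100_000
physical_errors = sum(noisy_copy(0, 0.05) != 0 for _ in range(trials))
logical_errors = sum(logical_readout(0) != 0 for _ in range(trials))
print(f"physical error rate ~ {physical_errors / trials:.5f}")  # around 0.05
print(f"logical error rate  ~ {logical_errors / trials:.5f}")   # far lower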

According to Jason Zander, EVP of Microsoft's Strategic Missions and Technologies division, this achievement could finally bring us to "Level 2 Resilient" quantum computing, which would be reliable enough for practical applications.

"The task at hand for the entire quantum ecosystem is to increase the fidelity of qubits and enable fault-tolerant quantum computing so that we can use a quantum machine to unlock solutions to previously intractable problems," Zander wrote in a blog post today. "In short, we need to transition to reliable logical qubits — created by combining multiple physical qubits together into logical ones to protect against noise and sustain a long (i.e., resilient) computation."

Microsoft's announcement is a "strong result," according to Aram Harrow, a professor of physics at MIT focusing on quantum information and computing. "The Quantinuum system has impressive error rates and control, so it was plausible that they could do an experiment like this, but it's encouraging to see that it worked," he said in an e-mail to Engadget. "Hopefully they'll be able to keep maintaining or even improving the error rate as they scale up."

Microsoft Quantum Computing
Microsoft

Researchers will be able to get a taste of Microsoft's more reliable quantum computing in the next few months via Azure Quantum Elements, where it will be available as a private preview. The goal is to push even further, to Level 3 quantum supercomputing, which could theoretically tackle incredibly complex problems like climate change and exotic drug research. It's unclear how long it'll take to actually reach that point, but for now, at least, we're moving one step closer toward practical quantum computing.

"Getting to a large-scale fault-tolerant quantum computer is still going to be a long road," Professor Harrow wrote. "This is an important step for this hardware platform. Along with the progress on neutral atoms, it means that the cold atom platforms are doing very well relative to their superconducting qubit competitors."

This article originally appeared on Engadget at https://www.engadget.com/microsoft-may-have-finally-made-quantum-computing-useful-164501302.html?src=rss

This camera captures 156.3 trillion frames per second

Scientists have created a blazing-fast scientific camera that shoots images at an encoding rate of 156.3 terahertz (THz) to individual pixels — equivalent to 156.3 trillion frames per second. Dubbed SCARF (swept-coded aperture real-time femtophotography), the research-grade camera could lead to breakthroughs in fields studying micro-events that come and go too quickly for today’s most expensive scientific sensors.

SCARF has successfully captured ultrafast events like absorption in a semiconductor and the demagnetization of a metal alloy. The research could open new frontiers in areas as diverse as shock wave mechanics and the development of more effective medicines.

Leading the research team was Professor Jinyang Liang of Canada’s Institut national de la recherche scientifique (INRS). He’s a globally recognized pioneer in ultrafast photography who built on his breakthroughs from a separate study six years ago. The current research was published in Nature, summarized in a press release from INRS and first reported on by Science Daily.

Professor Liang and company tailored their research as a fresh take on ultrafast cameras. Typically, these systems use a sequential approach: capture frames one at a time and piece them together to observe the objects in motion. But that approach has limitations. “For example, phenomena such as femtosecond laser ablation, shock-wave interaction with living cells, and optical chaos cannot be studied this way,” Liang said.

Components of a research-grade camera spread in a row on a scientific table.
SCARF
Institut national de la recherche scientifique

The new camera builds on Liang’s previous research to upend traditional ultrafast camera logic. “SCARF overcomes these challenges,” INRS communication officer Julie Robert wrote in a statement. “Its imaging modality enables ultrafast sweeping of a static coded aperture while not shearing the ultrafast phenomenon. This provides full-sequence encoding rates of up to 156.3 THz to individual pixels on a camera with a charge-coupled device (CCD). These results can be obtained in a single shot at tunable frame rates and spatial scales in both reflection and transmission modes.”

In extremely simplified terms, that means the camera uses a computational imaging modality to capture spatial information by letting light enter its sensor at slightly different times. Not having to process the spatial data in real time is part of what frees the camera to capture those extremely quick “chirped” laser pulses at up to 156.3 trillion times per second. The images’ raw data can then be processed by a computer algorithm that decodes the time-staggered inputs, transforming each of the trillions of frames into a complete picture.
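As a rough illustration of that "encode with known timing, decode in software" idea, here is a hypothetical sketch in Python. It is not SCARF's actual reconstruction algorithm; it just shows how samples taken at known, staggered instants can be reassembled into an ordered time sequence after the fact.

# Heavily simplified, hypothetical sketch of computational time-staggered imaging:
# each pixel samples a fast-changing signal at a slightly different, known time
# offset, and a decoding step reassembles the staggered samples into an ordered
# time sequence. This is not SCARF's reconstruction algorithm, only an analogy.
import numpy as np

n_pixels = 16
times = np.linspace(0.0, 1.0, n_pixels)        # the ultrafast event's timeline
event = np.sin(2 * np.pi * 3 * times)          # a fast-evolving "scene" value over time

rng = np.random.default_rng(0)
sweep_order = rng.permutation(n_pixels)        # known, per-pixel sampling order (the "code")

# Acquisition: each pixel records the scene at its own staggered instant.
raw_measurements = event[sweep_order]

# Decoding: because the sweep pattern is known, software puts samples back in time order.
reconstructed = np.empty_like(event)
reconstructed[sweep_order] = raw_measurements

print("max reconstruction error:", np.max(np.abs(reconstructed - event)))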

Remarkably, SCARF did all of this “using off-the-shelf and passive optical components,” as the paper describes. The team characterizes SCARF as low-cost, with low power consumption and high measurement quality compared to existing techniques.

Although SCARF is focused more on research than consumers, the team is already working with two companies, Axis Photonique and Few-Cycle, to develop commercial versions, presumably for peers at other research universities and scientific institutions.

For a more technical explanation of the camera and its potential applications, you can view the full paper in Nature.

This article originally appeared on Engadget at https://www.engadget.com/this-camera-captures-1563-trillion-frames-per-second-184651322.html?src=rss

Friends don’t let friends use an AI STI test

Picture the scene: Your date has gone well and you and your partner might sleep together. Like any safe adult, you assume there will be a conversation about STI status and the use of protection. Now imagine how you would feel if they asked to take a photo of your penis and upload it to a website you’ve never heard of. That’s the future of intimacy, as imagined by Calmara, a new service launched by “men’s health” startup HeHealth.

Banner image from the top of the HeHealth website describing Calmara.ai as “Your intimate bestie for unprotected sex.”
HeHealth Website

Its press release suggests users take a picture of their partner’s penis so it can be run through a deep learning model that checks for visual signs of sexually transmitted infections. And while the website suggests users should wear protection, a banner atop the HeHealth site describes the app as “Your intimate bestie for unprotected sex.” Mixed messages aside, you may notice some major issues with the pitch: that it only covers infections that present visually, and that it’s only designed to work with penises.

But even if that use case applies, you might not trust its conclusions once you’ve looked at the data. The Calmara website claims its scans are up to 90 percent accurate, saying its AI has been “battle-tested by over 40,000 users.” That figure doesn’t match its press release, which says accuracy reaches 94.4 percent (a figure cited in this NSFW preprint paper submitted a week ago), while its FAQ says the accuracy ranges “from 65 percent to 96 percent across various conditions.” We’ve reached out to the company to learn more about the apparent discrepancy.

Image of the Calmara website showing its claim of up to 90 percent accuracy.
Calmara

It’s not impossible for models to categorize visual information — I reported on how systems like these look at images of cells to aid drug discovery. But there are plenty of reasons why visual information isn’t going to be reliable enough for an STI test. After all, plenty of conditions don’t have visual symptoms, and carriers can often be asymptomatic long after infection. The company admits as much in its FAQ, saying that the app is a “first line of defense, not a full-on fortress.” Not to mention that other factors, like the “lighting, the particular health quirks you’re scouting for and a rainbow of skin tones might tweak those [accuracy] numbers a bit.” Even more alarming, the unpublished paper (which is riddled with typos) admits that a full 40 percent of its training dataset is composed of "augmented" images, for instance "extracting specific visually recognizable disease patterns from the existing clinical image dataset and layering those patterns on top of images of health (sic) penises."

Image from the Calmara FAQ highlighting the variability of its tests.
Calmara

The Calmara website’s disclaimer says that its tools are for the purpose of “promoting and supporting general wellness and a healthy lifestyle and are not to be used to diagnose, cure, treat, manage or prevent any disease or condition.” Of course, if it really were intended as a general wellness tool, it probably wouldn’t describe itself as “Your intimate bestie for unprotected sex,” would it?

It doesn’t help that this is a system asking users to send pictures of their or their partner’s genitalia. Issues around consent and — as writer Ella Dawson raised on Bluesky — age verification don’t seem to have been considered. The company’s promises that the data is locked in a “digital stronghold” lack specifics about its security approach or how the data it obtains may be shared. But that hasn’t stopped the company from suggesting that it could, in the future, be integrated “directly into dating apps.”

Fundamentally, there are so many red flags, potential vectors for abuse and ways this could give users a false sense of confidence that nobody should use it.

This article originally appeared on Engadget at https://www.engadget.com/friends-dont-let-friends-use-an-ai-sti-test-162354796.html?src=rss

Moon mining startup Interlune wants to start digging for helium-3 by 2030

A budding startup called Interlune is trying to become the first private company to mine the moon’s natural resources and sell them back on Earth. Interlune will initially focus on helium-3 — a helium isotope created by the sun through the process of fusion — which is abundant on the moon. In an interview with Ars Technica, Rob Meyerson, one of Interlune’s founders and former Blue Origin president, said the company hopes to fly its harvester with one of the upcoming commercial moon missions backed by NASA. The plan is to have a pilot plant on the moon by 2028 and begin operations by 2030, Meyerson said.

Interlune announced this week that it’s raised $18 million in funding, including $15 million in its most recent round led by Seven Seven Six, the venture firm started by Reddit co-founder Alexis Ohanian. The resource it’s targeting, helium-3, could be used on Earth for applications like quantum computing, medical imaging and, perhaps some day down the line, as fuel for fusion reactors. Helium-3 is carried to the moon by the solar wind and is thought to remain trapped in the lunar soil, whereas when it heads toward Earth, it’s deflected by the magnetosphere.

Interlune aims to excavate huge amounts of the lunar soil (or regolith), process it and extract the helium-3 gas, which it would then ship back to Earth. Alongside its proprietary lunar harvester, Interlune is planning a robotic lander mission to assess the concentration of helium-3 at the selected location on the surface. 

A graphic showing how helium-3 is produced by the sun, travels to the moon and is deflected by Earth's magnetosphere
Interlune

“For the first time in history,” Meyerson said in a statement, “harvesting natural resources from the Moon is technologically and economically feasible.” The founding team includes Meyerson and former Blue Origin Chief Architect Gary Lai, Apollo 17 astronaut Harrison H. Schmitt, former Rocket Lab exec Indra Hornsby and James Antifaev, who worked for Alphabet’s high-altitude balloon project, Loon. 

This article originally appeared on Engadget at https://www.engadget.com/moon-mining-startup-interlune-wants-to-start-digging-for-helium-3-by-2030-152216803.html?src=rss

SpaceX’s third Starship test launch takes off successfully

SpaceX hoped the third time would be the charm as it attempted another test of its Starship rocket. This third launch did indeed go well, with Starship successfully lifting off at 9:25AM ET. Shortly after launch, it completed the hot-staging separation from the Super Heavy booster and ignited its second-stage Raptor engines. The ship then coasted, with a planned re-light of the Raptor engines about 40 minutes after takeoff. The Super Heavy booster, meanwhile, went into a semi-controlled descent; its engines didn't fully re-ignite as planned prior to splashdown. We should hear more about what worked and didn't work in that phase of testing once everything is finished.

While SpaceX said that both the booster and Starship itself were going to return to Earth at "terminal velocity," thus making any recovery of them impossible, it looks like Starship itself didn't make it to splashdown. Based on the initial data, it looks like Starship broke up during re-entry. As with the booster, we should hear more about the specifics behind the ship's ultimate fate soon.

Before breakup, though, we got to see some dramatic footage of Starship beginning reentry:

The previous two efforts ended in failure, though Starship did reach space on the second go-round. A 110-minute launch window for the latest attempt opened at 8AM ET, and a livestream covering the launch kicked off at about 8:50AM ET on X.

The Federal Aviation Administration (FAA) authorized the SpaceX Starship Super Heavy Orbital Flight Test 3 on Wednesday afternoon. The agency said in a statement to Engadget that SpaceX "met all safety, environmental, policy and financial responsibility requirements." 

The FAA grounded Starship for several weeks before the second test flight until the company took 63 "corrective actions." The first launch caused a fire in a state park and led to a lawsuit from environmental groups.

Along with building on the previous tests, SpaceX had a number of "ambitious" goals in mind for this launch. The company aimed to carry out the first re-light of a Raptor engine in space, along with ensuring the successful ascent burn of both stages, opening and closing the payload door and conducting a controlled reentry. The spacecraft flew on a new trajectory that targeted a splashdown in the Indian Ocean. SpaceX said the updated flight path afforded it the chance to try out new things like engine burns in space while prioritizing public safety.

This article originally appeared on Engadget at https://www.engadget.com/watch-spacexs-third-starship-test-launch-here-set-for-takeoff-at-925am-et-125513011.html?src=rss

Japan’s Space One rocket launch attempt ends in a fiery explosion

A startup called Space One launched a rocket in hopes of becoming the first private entity in Japan to put a satellite in orbit. Unfortunately, its attempt ended in a fiery explosion mere seconds after liftoff at 11AM local time. Its 60-foot-long Kairos rocket launched from the company's Space Port Kii in Wakayama, a prefecture south of Osaka in Japan's Kansai region. Space One director Mamoru Endo told reporters at a conference that the rocket's automated system detected an anomaly five seconds after liftoff and triggered its self-destruct function. The company has yet to figure out what that anomaly was and is investigating the incident for answers. 

Kairos was carrying a payload for the Cabinet Satellite Intelligence Center, which collects and analyzes imagery for the Japanese government. That satellite was supposed to be an alternative to an existing Japanese satellite that monitors military facilities in, and rocket launches from, North Korea. Masakazu Toyoda, the company's president, said during the conference that Space One is "prepared to take up the next challenge." He also emphasized how common failed launches are in space travel. And that is true — SpaceX, for instance, lost several Starship vehicles over the past few years when they blew up during testing. 

Space One, backed by Canon and aerospace manufacturer IHI, eventually hopes to offer satellite launch services using small rockets, which it says "offer greater scheduling flexibility than large ones." It's also aiming to provide the "world's shortest lead time from contractual engagement to launch, as well as the world's most frequent launching schedule" while also minimizing the costs of putting satellites into orbit. Since the company must be able to stage a successful launch before customers come knocking on its doors, it will most likely announce its next attempt in the near future. 

Last year, Japanese company ispace also failed to become the first private company to land on the moon when it lost contact with its Hakuto-R lander. But the country's space agency, JAXA, is doing better than its private counterparts: Its SLIM lunar lander successfully touched down in January and is expected to resume its operations in late March after the lunar night is over. 

This article originally appeared on Engadget at https://www.engadget.com/japans-space-one-rocket-launch-attempt-ends-in-a-fiery-explosion-104937369.html?src=rss

This luxury handbag is made from the material NASA uses to collect comet dust

Space and fashion lovers have a crossover accessory right now, and it's not just because there are some moons and stars dotted across it. Coperni, a French luxury brand, has unveiled the Air Swipe Bag, made entirely of NASA's nanomaterial silica aerogel, Fast Company reports. Scientist Steve Jones first created the substance for NASA's 1999 Stardust mission, which brought samples back from the Wild 2 comet.

The Air Swipe Bag weighs only 1.1 ounces, with just 0.2 percent of its matter actually tangible. The rest is air that moves through the aerogel's trillions of channels. Aerogel is renowned for being lightweight, taking the title of world's lightest solid in the 1990s, with a later version breaking that record. NASA previously dubbed the substance "solid smoke," and one look at the bag shows how true that statement is. Coperni's Instagram post even drew one user comment: "This looks like my bong when it's filled with smoke and I'm obsessed."

While aerogel is just being used to create a small bag in this case, it's one sturdy accessory. The substance can hold 4,000 times its own weight (far more than this purse can fit) and withstand temperatures up to 2,200 degrees Fahrenheit. It's no surprise, then, that when not being made into bags, aerogel has been used for tasks such as insulating Mars rovers.
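For a rough sense of scale, if that 4,000-times figure applied to the finished 1.1-ounce bag, it would work out to about 4,400 ounces, or roughly 275 pounds of support, though Coperni hasn't published load ratings for the accessory itself.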

The Air Swipe Bag isn't listed for sale on Coperni's website yet, but if you want an accessory this powerful, it's likely going to cost you. Space travel and fashion are two things that never come cheap. 

This article originally appeared on Engadget at https://www.engadget.com/this-luxury-handbag-is-made-from-the-material-nasa-uses-to-collect-comet-dust-135151909.html?src=rss

This is what it looks like to reenter Earth’s atmosphere from a space capsule’s POV

Incredible footage released by Varda Space Industries gives us a first-person view of a space capsule’s return trip to Earth, from the moment it separates from its carrier satellite in orbit all the way through its fiery reentry and bumpy arrival at the surface. Varda’s W-1 capsule landed at the Utah Test and Training Range, a military site, on February 21 in a first for a commercial company. It spent roughly eight months leading up to that in low Earth orbit, stuck in regulatory limbo while the company waited for the government approvals it needed to land on US soil, according to Ars Technica.

“Here's a video of our capsule ripping through the atmosphere at mach 25, no renders, raw footage,” the company posted on X alongside clips from reentry. Varda also shared a 28-minute video of W-1’s full journey home from LEO on YouTube.

Varda, which worked with Rocket Lab for the mission, is trying to develop mini-labs that can produce pharmaceuticals in orbit — in this case, the HIV drug ritonavir. Its W-1 capsule was attached to Rocket Lab’s Photon satellite “bus,” which the company said ahead of launch would provide power, communications and attitude control for the capsule. Photon successfully brought the capsule to where it needed to be for last week’s reentry, then itself burned up in Earth’s atmosphere, SpaceNews reported. Now that the capsule has returned, Ars Technica reports that the ritonavir crystals grown in orbit will be analyzed by the Indiana-based pharmaceutical company Improved Pharma.

This article originally appeared on Engadget at https://www.engadget.com/this-is-what-it-looks-like-to-reenter-earths-atmosphere-from-a-space-capsules-pov-211120769.html?src=rss