NASA and IBM have teamed up to build an AI foundation model for weather and climate applications. They’re combining their respective expertise in Earth science and AI for the model, which they say should offer “significant advantages over existing technology.”
Current AI models such as GraphCast and FourCastNet are already generating weather forecasts more quickly than traditional meteorological models. However, IBM notes those are AI emulators rather than foundation models. As the name suggests, foundation models are the base technologies that power generative AI applications. AI emulators can make weather predictions based on sets of training data, but they don’t have applications beyond that. Nor can they encode the physics at the core of weather forecasting, IBM says.
NASA and IBM have several goals for the foundation model. Compared with current models, they hope it will offer expanded accessibility, faster inference times and greater diversity of data. Another key aim is to improve forecasting accuracy for other climate applications. The expected capabilities of the model include predicting meteorological phenomena, inferring high-resolution information from low-resolution data and "identifying conditions conducive to everything from airplane turbulence to wildfires."
This follows another foundation model that NASA and IBM deployed in May. It harnesses data from NASA satellites for geospatial intelligence, and it's the largest geospatial model on the open-source AI platform Hugging Face, according to IBM. So far, this model has been used to track and visualize tree planting and growing activities in water tower areas (forest landscapes that retain water) in Kenya. The aim is to plant more trees and tackle water scarcity issues. The model is also being used to analyze urban heat islands in the United Arab Emirates.
This article originally appeared on Engadget at https://www.engadget.com/nasa-and-ibm-are-building-an-ai-for-weather-and-climate-applications-050141545.html?src=rss
Do black holes, like dying old soldiers, simply fade away? Do they pop like hyperdimensional balloons? Maybe they do, or maybe they pass through a cosmic Rubicon, effectively reversing their natures and becoming inverse anomalies that cannot be entered through their event horizons but which continuously expel energy and matter back into the universe.
In his latest book, White Holes, physicist and philosopher Carlo Rovelli focuses his attention and considerable expertise on the mysterious space phenomena, diving past the event horizon to explore their theoretical inner workings and posit what might be at the bottom of those infinitesimally tiny, infinitely fascinating gravitational points. In this week's Hitting the Books excerpt, Rovelli discusses a scientific schism splitting the astrophysics community as to where all of the information — which, from our current understanding of the rules of our universe, cannot be destroyed — goes once it is trapped within an inescapable black hole.
In 1974, Stephen Hawking made an unexpected theoretical discovery: black holes must emit heat. This, too, is a quantum tunnel effect, but a simpler one than the bounce of a Planck star: photons trapped inside the horizon escape thanks to the pass that quantum physics provides to everything. They “tunnel” beneath the horizon.
So black holes emit heat, like a stove, and Hawking computed their temperature. Radiated heat carries away energy. As it loses energy, the black hole gradually loses mass (mass is energy), becoming ever lighter and smaller. Its horizon shrinks. In the jargon we say that the black hole “evaporates.”
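For context, the temperature Hawking computed is a standard result (not spelled out in this excerpt) and is inversely proportional to the black hole's mass:

$$ T_\mathrm{H} = \frac{\hbar c^{3}}{8 \pi G M k_\mathrm{B}} $$

So as evaporation carries mass away, the hole gets hotter and radiates even faster, which is why the shrinking accelerates toward the end.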
Heat emission is the most characteristic of the irreversible processes: the processes that occur in one time direction and cannot be reversed. A stove emits heat and warms a cold room. Have you ever seen the walls of a cold room emit heat and heat up a warm stove? When heat is produced, the process is irreversible. In fact, whenever the process is irreversible, heat is produced (or something analogous). Heat is the mark of irreversibility. Heat distinguishes past from future.
There is therefore at least one clearly irreversible aspect to the life of a black hole: the gradual shrinking of its horizon.
But, careful: the shrinking of the horizon does not mean that the interior of the black hole becomes smaller. The interior largely remains what it is, and the interior volume keeps growing. It is only the horizon that shrinks. This is a subtle point that confuses many. Hawking radiation is a phenomenon that regards mainly the horizon, not the deep interior of the hole. Therefore, a very old black hole turns out to have a peculiar geometry: an enormous interior (that continues to grow) and a minuscule (because it has evaporated) horizon that encloses it. An old black hole is like a glass bottle in the hands of a skillful Murano glassblower who succeeds in making the volume of the bottle increase as its neck becomes narrower.
At the moment of the leap from black to white, a black hole can therefore have an extremely small horizon and a vast interior. A tiny shell containing vast spaces, as in a fable.
In fables, we come across small huts that, when entered, turn out to contain hundreds of vast rooms. This seems impossible, the stuff of fairy tales. But it is not so. A vast space enclosed in a small sphere is concretely possible.
If this seems bizarre to us, it is only because we became habituated to the idea that the geometry of space is simple: it is the one we studied at school, the geometry of Euclid. But it is not so in the real world. The geometry of space is distorted by gravity. The distortion permits a gigantic volume to be enclosed within a tiny sphere. The gravity of a Planck star generates such a huge distortion.
An ant that has always lived on a large, flat plaza will be amazed when it discovers that through a small hole it has access to a large underground garage. Same for us with a black hole. What the amazement teaches is that we should not have blind confidence in habitual ideas: the world is stranger and more varied than we imagine.
The existence of large volumes within small horizons has also generated confusion in the world of science. The scientific community has split and is quarreling about the topic. In the rest of this section, I tell you about this dispute. It is more technical than the rest — skip it if you like — but it is a picture of a lively, ongoing scientific debate.
The disagreement concerns how much information you can cram into an entity with a large volume but a small surface. One part of the scientific community is convinced that a black hole with a small horizon can contain only a small amount of information. Another disagrees.
What does it mean to “contain information”?
More or less this: Are there more things in a box containing five large and heavy balls, or in a box that contains twenty small marbles? The answer depends on what you mean by “more things.” The five balls are bigger and weigh more, so the first box contains more matter, more substance, more energy, more stuff. In this sense there are “more things” in the box of balls.
But the number of marbles is greater than the number of balls. In this sense, there are “more things,” more details, in the box of marbles. If we wanted to send signals, by giving a single color to each marble or each ball, we could send more signals, more colors, more information, with the marbles, because there are more of them. More precisely: it takes more information to describe the marbles than it does to describe the balls, because there are more of them. In technical terms, the box of balls contains more energy, whereas the box of marbles contains more information.
An old black hole, considerably evaporated, has little energy, because the energy has been carried away via the Hawking radiation. Can it still contain much information, after much of its energy is gone? Here is the brawl.
Some of my colleagues convinced themselves that it is not possible to cram a lot of information beneath a small surface. That is, they became convinced that when most of the energy has gone and the horizon has become minuscule, only a little information can remain inside.
Another part of the scientific community (to which I belong) is convinced of the contrary. The information in a black hole—even a greatly evaporated one—can still be large. Each side is convinced that the other has gone astray.
Disagreements of this kind are common in the history of science; one may say that they are the salt of the discipline. They can last a long time. Scientists split, quarrel, scream, wrangle, scuffle, jump at each other’s throats. Then, gradually, clarity emerges. Some end up being right, others end up being wrong.
At the end of the nineteenth century, for instance, the world of physics was divided into two fierce factions. One of these followed Mach in thinking that atoms were just convenient mathematical fictions; the other followed Boltzmann in believing that atoms exist for real. The arguments were ferocious. Ernst Mach was a towering figure, but it was Boltzmann who turned out to be right. Today, we even see atoms through a microscope.
I think that my colleagues who are convinced that a small horizon can contain only a small amount of information have made a serious mistake, even if at first sight their arguments seem convincing. Let’s look at these.
The first argument is that it is possible to compute how many elementary components (how many molecules, for example) form an object, starting from the relation between its energy and its temperature. We know the energy of a black hole (it is its mass) and its temperature (computed by Hawking), so we can do the math. The result indicates that the smaller the horizon, the fewer its elementary components.
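Made quantitative (this is the standard Bekenstein–Hawking formula, not something quoted from the excerpt), the argument runs through the black hole's entropy, which grows with the area A of the horizon rather than with the volume inside:

$$ S_\mathrm{BH} = \frac{k_\mathrm{B}\, c^{3} A}{4 G \hbar} $$

A shrinking horizon therefore means a shrinking entropy, which the "dogma" reads as a shrinking number of elementary components.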
The second argument is that there are explicit calculations that allow us to count these elementary components directly, using both of the most studied theories of quantum gravity—string theory and loop theory. The two archrival theories completed this computation within months of each other in 1996. For both, the number of elementary components becomes small when the horizon is small.
These seem like strong arguments. On the basis of these arguments, many physicists have accepted a “dogma” (they call it so themselves): the number of elementary components contained in a small surface is necessarily small. Within a small horizon there can only be little information. If the evidence for this “dogma” is so strong, where does the error lie?
It lies in the fact that both arguments refer only to the components of the black hole that can be detected from the outside, as long as the black hole remains what it is. And these are only the components residing on the horizon. Both arguments, in other words, ignore that there can be components in the large interior volume. These arguments are formulated from the perspective of someone who remains far from the black hole, does not see the inside, and assumes that the black hole will remain as it is forever. If the black hole stays this way forever—remember—those who are far from it will see only what is outside or what is right on the horizon. It is as if for them the interior does not exist. For them.
But the interior does exist! And not only for those (like us) who dare to enter, but also for those who simply have the patience to wait for the black horizon to become white, allowing what was trapped inside to come out. In other words, to imagine that the calculations of the number of components of a black hole given by string theory or loop theory are complete is to have failed to take on board Finkelstein’s 1958 article. The description of a black hole from the outside is incomplete.
The loop quantum gravity calculation is revealing: the number of components is precisely computed by counting the number of quanta of space on the horizon. But the string theory calculation, on close inspection, does the same: it assumes that the black hole is stationary, and is based on what is seen from afar. It neglects, by hypothesis, what is inside and what will be seen from afar after the hole has finished evaporating — when it is no longer stationary.
I think that certain of my colleagues err out of impatience (they want everything resolved before the end of evaporation, where quantum gravity becomes inevitable) and because they forget to take into account what is beyond that which can be immediately seen — two mistakes we all frequently make in life.
Adherents to the dogma find themselves with a problem. They call it “the black hole information paradox.” They are convinced that inside an evaporated black hole there is no longer any information. Now, everything that falls into a black hole carries information. So a large amount of information can enter the hole. Information cannot vanish. Where does it go?
To solve the paradox, the devotees of the dogma imagine that information escapes the hole in mysterious and baroque ways, perhaps in the folds of the Hawking radiation, like Ulysses and his companions escaping from the cave of the cyclops by hiding beneath sheep. Or they speculate that the interior of a black hole is connected to the outside by hypothetical invisible canals . . . Basically, they are clutching at straws—looking, like all dogmatists in difficulty, for abstruse ways of saving the dogma.
But the information that enters the horizon does not escape by some arcane, magical means. It simply comes out after the horizon has been transformed from a black horizon into a white horizon.
In his final years, Stephen Hawking used to remark that there is no need to be afraid of the black holes of life: sooner or later, there will be a way out of them. There is — via the child white hole.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-white-holes-carlo-rovelli-riverhead-153058062.html?src=rss
The James Webb telescope is back with some more gorgeous images. This time, the telescope eyed the center of the Milky Way galaxy, shining a light on the densest part of our surrounding environs in “unprecedented detail.” Specifically, the images are sourced from a star-forming region called Sagittarius C, or Sgr C for short.
This area is about 300 light-years from the galaxy’s supermassive black hole, Sagittarius A*, and over 25,000 light-years from a little blue rock called Earth. All told, the region boasts over 500,000 stars and various clusters of protostars, which are stars that are still forming and gaining mass. The end result? A stunning cloud of chaos, especially when compared to our region of space, which is decidedly sparse in comparison.
As a matter of fact, the galactic center is “the most extreme environment” in the Milky Way, according to University of Virginia professor Jonathan Tan, who assisted the observation team. There has never been any data on this region with this “level of resolution and sensitivity” until now, thanks to the power of the Webb telescope.
At the center of everything is a massive protostar more than 30 times the mass of our sun. This actually makes the area seem less populated than it is, as the protostar blocks light from the stars behind it; not even Webb can see all of the stars in the region. So what you’re looking at is a conservative estimate of just how crowded the area is. It’s like the Times Square of space, only without a Guy Fieri restaurant (for now).
Image credit: NASA, ESA, CSA, STScI, and S. Crowe (University of Virginia).
The data provided by these images will allow researchers to put current theories of star formation to “their most rigorous test.” To that end, Webb’s NIRCam (Near-Infrared Camera) instrument captured large-scale emission imagery from ionized hydrogen, the blue on the lower side of the image. This is likely the result of young and massive stars releasing energetic photons, but the vast size of the region came as a surprise to researchers, warranting further study.
The observation team’s principal investigator, Samuel Crowe, said that the research enabled by these and forthcoming images will allow scientists to understand the nature of massive stars, which is akin to “learning the origin story of much of the universe.”
This article originally appeared on Engadget at https://www.engadget.com/webb-telescope-images-show-an-unprecedented-and-chaotic-view-of-the-center-of-our-galaxy-185912370.html?src=rss
SpaceX's second test flight of its Starship spacecraft — which it hopes will one day ferry humans to the moon and Mars — ended in an explosion Saturday morning minutes after taking off from the company's spaceport in Boca Chica, Texas. Starship launched just after 8AM ET atop a Super Heavy rocket, the largest rocket in the world.
Moments after completing stage separation, when the Super Heavy booster detached itself from Starship, the rocket's first stage exploded. Starship, however, continued on for several more minutes, surpassing the flight time of its predecessor. A faint explosion could be seen in the livestream around the 8-minute mark, and hosts confirmed soon after that they'd lost contact with the craft.
Unlike in its first test, which came to an end about 24 miles above Earth's surface, Starship was able to reach space this time around. At the time of its explosion, the livestream's tracker clocked it at an altitude of about 92 miles.
Today’s flight was also SpaceX’s first attempt at its new separation technique called “hot staging,” in which it fired up Starship’s engines before the craft detached from the still-firing first stage. The maneuver was completed before Super Heavy exploded, by which point Starship was already well clear. SpaceX will now have to figure out tweaks to its booster to help it withstand future hot-staging attempts.
But, as with the last test that ended in an explosion, SpaceX is still billing it all as a success. Kate Tice, one of the livestream's hosts and a quality engineering manager for SpaceX, said it was “an incredibly successful day, even though we did have a RUD — or rapid unscheduled disassembly — of both the Super Heavy booster and the ship. We got so much data and that will all help to improve for our next flight.”
This article originally appeared on Engadget at https://www.engadget.com/spacex-loses-another-starship-after-rocket-explodes-during-test-flight-143503845.html?src=rss
At 8AM ET today, SpaceX will open a 20-minute launch window for Starship's second-ever fully integrated test flight. If everything goes well during the pre-flight procedures, and if the weather cooperates, then we'll see the company's spacecraft make another attempt to reach space. SpaceX completed Starship's first fully integrated launch in April. While it was considered a success, the company wasn't able to meet all its objectives and had to intentionally blow up the spacecraft after its two stages failed to separate.
As a result of that incident, the Federal Aviation Administration (FAA) had grounded Starship while authorities conducted an investigation. They found that the explosion scattered debris across 385 acres of land, caused pulverized concrete to rain down on areas up to 6.5 miles northwest of the pad site, and started a wildfire at Boca Chica State Park. The FAA required SpaceX to make 63 corrective actions before it could give the company clearance to fly its reusable spacecraft again.
SpaceX said that this flight will debut several changes it implemented due to what happened during Starship's first test flight. They include a new hot-stage separation system, a new electronic Thrust Vector Control (TVC) system for Super Heavy Raptor engines, reinforcements to the pad foundation and a water-cooled steel flame deflector.
The company's live broadcast of the launch starts at 7:24AM ET on its website and on X. If Starship's stages can successfully separate this time around, its upper stage will fly across the planet before splashing down off a Hawaiian coast.
This article originally appeared on Engadget at https://www.engadget.com/watch-spacexs-starship-lift-off-for-its-second-fully-integrated-test-flight-121559318.html?src=rss
MIT researchers developed an ingestible capsule that can monitor vital signs, including heart rate and breathing patterns, from within a patient’s GI tract. The scientists say the novel device could also be used to detect signs of respiratory depression during an opioid overdose. Giovanni Traverso, an associate professor of mechanical engineering at MIT who has been working on developing a range of ingestible sensors, told Engadget that the device will be especially useful for sleep studies.
Conventionally, sleep studies require patients to be hooked up to a number of sensors and devices. In labs and in at-home studies, sensors can be attached to a patient’s scalp, temples, chest and legs with wires. A patient may also wear a nasal cannula, chest belt and pulse oximeter that can connect to a portable monitor. “As you can imagine, trying to sleep with all of this machinery can be challenging,” Traverso told Engadget.
Image credit: MIT
This trial, which used a capsule made by Celero Systems, a start-up led by MIT and Harvard researchers, marks the first time ingestible sensor technology was tested in humans. Aside from the start-up and MIT, the research was spearheaded by experts at West Virginia University and other hospital affiliates.
The capsule contains two small batteries and a wireless antenna that transmits data. The ingestible sensor, which is the size of a vitamin capsule, traveled through the gastrointestinal tract, and researchers collected signals from the device while it was in the stomach. The participants stayed at a sleep lab overnight while the device recorded respiration, heart rate, temperature and gastric motility. The sensor was also able to detect sleep apnea in one of the patients during the trial. The findings suggest that the ingestible was able to measure health metrics on par with medical-grade diagnostic equipment at the sleep center. Traditionally, patients who need to be diagnosed with specific sleep disorders are required to stay overnight at a sleep lab, where they get hooked up to an array of sensors and devices. Ingestible sensor technology could eliminate the need for that.
Importantly, MIT says there were no adverse effects reported due to capsule ingestion. The capsule typically passes through a patient within a day or so, though that short internal shelf life may also limit how effective it could be as a monitoring device. Traverso told Engadget that he aims for a future version of the capsule from Celero, which he co-founded, to contain a mechanism that will allow it to sit in a patient’s stomach for a week.
Dr. Ali Rezai, the executive chair of the West Virginia University Rockefeller Neuroscience Institute, said the device has huge potential to create a new pathway for providers to identify, from a patient's vitals, when that patient is overdosing. In the future, researchers even anticipate that such devices could carry drugs internally: overdose reversal agents, such as nalmefene, could be slowly administered if the sensor records that a person’s breathing rate has slowed or stopped. More data from the studies will be made available in the coming months.
This article originally appeared on Engadget at https://www.engadget.com/mit-tests-new-ingestible-sensor-that-record-your-breathing-through-your-intestines-224823353.html?src=rss
SpaceX aims to send Starship to space for its second test flight on November 17, now that the Federal Aviation Administration (FAA) has given it the clearance to do so. The company completed its next-generation spacecraft's first fully integrated launch in April, but it wasn't able to meet all its objectives, including having its upper stage fly across our planet before re-entering the atmosphere and splashing down in the ocean near Hawaii. SpaceX had to intentionally blow up the vehicle in the sky after an onboard fire had prevented its two stages from separating.
According to federal agencies, debris from the rocket explosion was found across 385 acres of land on SpaceX's facility and at Boca Chica State Park. It caused a wildfire to break out on 3.5 acres of state park land and led to a "plume cloud of pulverized concrete that deposited material up to 6.5 miles northwest of the pad site." The FAA grounded Starship until SpaceX took dozens of corrective actions, including a vehicle redesign to prevent leaks and fires. As Space notes, the agency finished its safety review in September, but it still had to work with the US Fish and Wildlife Service (USFWS) to finish an updated environmental review of the spacecraft.
For now, the FAA has given SpaceX the license to fly Starship for one flight. The company will open the spacecraft's two-hour launch window at 8AM EST on November 17, and if all goes well, Starship will fly across the planet and splash down off a Hawaiian coast as planned. Starship, of course, has to keep acing test flights before it can go into service. The fully reusable spacecraft represents SpaceX's future, since the company plans to use it for missions to geosynchronous orbit, the moon and Mars.
This article originally appeared on Engadget at https://www.engadget.com/spacex-prepares-for-starships-second-test-flight-after-securing-faa-clearance-035159364.html?src=rss
Synex Medical, a Toronto-based biotech research firm backed by Sam Altman (the CEO of OpenAI), has developed a tool that can measure your blood glucose levels without a finger prick. It uses a combination of low-field magnets and low-frequency radio waves to directly measure blood sugar levels non-invasively when a user inserts a finger into the device.
The tool uses magnetic resonance spectroscopy (MRS), which is similar to an MRI. Jamie Near, an associate professor at the University of Toronto who specializes in MRS research, told Engadget that “[an] MRI uses magnetic fields to make images of the distribution of hydrogen protons in water that is abundant in our body tissues. In MRS, the same basic principles are used to detect other chemicals that contain hydrogen.” When a user’s fingertip is placed inside the magnetic field, the frequency of a specific molecule, in this case glucose, is measured in parts per million. While the focus was on glucose for this project, MRS could be used to measure other metabolites, according to Synex, including lactate, ketones and amino acids.
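That “parts per million” figure is the standard NMR chemical shift, which expresses a molecule’s resonance frequency relative to a reference frequency so the value doesn’t depend on the strength of the magnet (how Synex processes those readings beyond this is not something the company has detailed publicly):

$$ \delta = 10^{6} \times \frac{\nu_\text{sample} - \nu_\text{ref}}{\nu_\text{ref}} $$

Glucose protons appear at characteristic shifts, and the strength of their signal scales with how much glucose is present.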
Synex Medical
Matthew Rosen, a Harvard physicist whose research spans from fundamental physics to bioimaging in the field of MRI, told Engadget that he thinks the device is “clever” and “a great idea.” Magnetic resonance is a common technique for the chemical analysis of compounds; however, traditional resonance systems operate at high magnetic fields and are very expensive.
Synex found a way to get clear readings from low magnetic fields. “They’ve overcome the challenges really by developing a method that has high sensitivity and high specificity,” Rosen says. “Honestly, I have been doing magnetic resonance for thirty years. I never thought people could do glucose with a benchtop machine… you could do it with a big machine no problem.”
Professor Andre Simpson, a researcher and center director at the University of Toronto, also told Engadget that he thinks Synex’s device is the “real deal.” “MRI machines can fit an entire human body and have been used to target molecule concentrations in the brain through localized spectroscopy,” he explained. “Synex has shrunk this technology to measure concentrations in a finger. I have reviewed their white paper and seen the instrument work.” Simpson said Synex’s ability to retrofit MRS technology into a small box is an engineering feat.
As of now, there are no commercially available devices that can measure blood glucose non-invasively. While there are continuous glucose monitors on the market that use microneedles, which are minimally invasive, there is still a risk of infection.
But there is competition in the space for no-prick diagnostic tools. Know Labs is trying to get approval for a portable glucose monitor that relies on a custom-made Bio-RFID sensing technology, which uses radio waves to detect blood glucose levels in the palm of your hand. When the Know Labs device was tested against a Dexcom G6 continuous glucose monitor in a study, readings of blood glucose levels using its palm sensor technology were “within threshold” only 46 percent of the time. While the readings are technically in accordance with FDA accuracy limits for a new blood glucose monitor, Know Labs is still working out kinks through scientific research before it can begin FDA clinical trials.
Another start-up, German company DiaMonTech, is currently developing a pocket-sized diagnostic device that is still being tested and fine-tuned to measure glucose through “photothermal detection.” It uses mid-infrared lasers that essentially scan the tissue fluid at the fingertip to detect glucose molecules. CNBC and Bloomberg reported that even Apple has been “quietly developing” a sensor that can check your blood sugar levels through its wearables, though the company has never confirmed it. Ben Nashman, founder and CEO of Synex, told Engadget that the company would eventually like to develop a wearable, but further miniaturization is needed before it can bring a commercial product to market.
Rosen says he isn't sure how the sensor technology can be retrofitted for smartwatches or wearables just yet. But he can imagine a world where these tools complement blood-based diagnostics. “Is it good enough for clinical use? I have to leave that for what clinicians have to say.”
Update, November 16 2023, 10:59 AM ET: This story has been updated to clarify that a comment from the company was made by the CEO of Synex and not a company representative.
This article originally appeared on Engadget at https://www.engadget.com/researchers-use-magnetic-fields-for-non-invasive-blood-glucose-monitoring-215052628.html?src=rss
NASA’s Mars exploration robots will be on their own for the next two weeks while the space agency waits out a natural phenomenon that will prevent normal communications. Mars and Earth have reached positions in their orbits that put them on opposite sides of the sun, in an alignment known as solar conjunction. During this time, NASA says it’s risky to try and send commands to its instruments on Mars because interference from the sun could have a detrimental effect.
To prevent any issues, NASA is taking a planned break from giving orders until the planets move into more suitable positions. The pause started on Saturday and will go on until November 25. A Mars solar conjunction occurs every two years, and while the rovers will be able to send basic health updates home throughout most of the period, they’ll go completely silent for the two days when the sun blocks Mars entirely.
That means the Perseverance and Curiosity rovers, the Ingenuity helicopter, the Mars Reconnaissance Orbiter, and the Odyssey and MAVEN orbiters will be left to their own devices for a little while. Their onboard instruments will continue to gather data for their respective missions, but won’t send this information back to Earth until the blackout ends.
This article originally appeared on Engadget at https://www.engadget.com/nasa-cant-talk-to-its-mars-robots-for-two-weeks-because-the-sun-is-in-the-way-213022922.html?src=rss
One of the major benefits of certain artificial intelligence models is that they can speed up menial or time-consuming tasks, not just whip up terrible "art" based on a brief text input. University of Leeds researchers have unveiled a neural network that they claim can map the outline of a large iceberg in just 0.01 seconds.
Scientists are able to track the locations of large icebergs manually. After all, one that was included in this study was the size of Singapore when it broke off from Antarctica a decade ago. But it's not feasible to manually track changes in icebergs' area and thickness — or how much water and nutrients they're releasing into seas.
"Giant icebergs are important components of the Antarctic environment," Anne Braakmann-Folgmann, lead author of a paper on the neural network, told the European Space Agency. "They impact ocean physics, chemistry, biology and, of course, maritime operations. Therefore, it is crucial to locate icebergs and monitor their extent, to quantify how much meltwater they release into the ocean.”
Until now, manual mapping has proven to be more accurate than automated approaches, but it can take a human analyst several minutes to outline a single iceberg. That can rapidly become a time- and labor-intensive process when multiple icebergs are concerned.
The researchers trained an algorithm called U-net using imagery captured by the ESA's Copernicus Sentinel-1 Earth-monitoring satellites. The algorithm was tested on seven icebergs. The smallest had an area roughly the same as Bern, Switzerland, and the largest had approximately the same area as Hong Kong.
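The study itself doesn't ship code, but for readers curious what a U-net-style segmentation setup looks like in practice, here is a minimal, hypothetical PyTorch sketch: a single-channel input standing in for Sentinel-1 radar backscatter, one encoder/decoder level with a skip connection, and a per-pixel binary loss for iceberg versus background. The layer sizes, patch dimensions and random training data below are illustrative stand-ins, not the researchers' actual model or dataset.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    # Minimal U-Net: one downsampling step, one upsampling step, one skip connection.
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)     # single SAR backscatter channel in
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)    # 32 channels = 16 from the skip + 16 upsampled
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel iceberg-vs-background logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        d = self.dec(torch.cat([e, u], dim=1))
        return self.head(d)              # raw logits; apply sigmoid for probabilities

# One hypothetical training step on (image, mask) patches cropped from radar scenes.
model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(4, 1, 64, 64)                   # stand-in for Sentinel-1 patches
masks = torch.randint(0, 2, (4, 1, 64, 64)).float()  # stand-in iceberg outlines

loss = loss_fn(model(images), masks)
loss.backward()
optimizer.step()

The real network would be deeper and trained on labeled Sentinel-1 patches, with manual outlines like those mentioned above serving as the ground truth against which accuracy is measured.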
With 99 percent accuracy, the new model is said to surpass previous attempts at automation, which often struggled to distinguish icebergs from sea ice and other features. It's also 10,000 times faster than humans at mapping icebergs.
"Being able to map iceberg extent automatically with enhanced speed and accuracy will enable us to observe changes in iceberg area for several giant icebergs more easily and paves the way for an operational application," Dr. Braakmann-Folgmann said.
This article originally appeared on Engadget at https://www.engadget.com/a-neural-network-can-map-large-icebergs-10000-times-faster-than-humans-212855550.html?src=rss