A neural network can map large icebergs 10,000 times faster than humans

One of the major benefits of certain artificial intelligence models is that they can speed up menial or time-consuming tasks — and not just to whip up terrible "art" based on a brief text input. University of Leeds researchers have unveiled a neural network that they claim can map the outline of a large iceberg in just 0.01 seconds.

Scientists are able to track the locations of large icebergs manually. After all, one that was included in this study was the size of Singapore when it broke off from Antarctica a decade ago. But it's not feasible to manually track changes in icebergs' area and thickness — or how much water and nutrients they're releasing into seas.

"Giant icebergs are important components of the Antarctic environment," Anne Braakmann-Folgmann, lead author of a paper on the neural network, told the European Space Agency. "They impact ocean physics, chemistry, biology and, of course, maritime operations. Therefore, it is crucial to locate icebergs and monitor their extent, to quantify how much meltwater they release into the ocean.”

Until now, manual mapping has proven more accurate than automated approaches, but it can take a human analyst several minutes to outline a single iceberg. That quickly becomes a time- and labor-intensive process when multiple icebergs are involved.

The researchers trained an algorithm called U-Net on imagery captured by the ESA's Copernicus Sentinel-1 Earth-monitoring satellites. The algorithm was tested on seven icebergs; the smallest had roughly the same area as Bern, Switzerland, and the largest approximately the same area as Hong Kong.

With 99 percent accuracy, the new model is said to surpass previous attempts at automation, which often struggled to distinguish icebergs from sea ice and other nearby features. It's also 10,000 times faster than humans at mapping icebergs.
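Conceptually, this is standard semantic segmentation: a U-Net takes a Sentinel-1 radar tile as input and predicts, pixel by pixel, whether each spot belongs to an iceberg. Here is a minimal, illustrative sketch of that kind of setup in Python, using PyTorch and the segmentation_models_pytorch package as stand-ins rather than the Leeds team's actual pipeline:

```python
# Illustrative sketch only: per-pixel iceberg segmentation with a U-Net,
# in the spirit of the Leeds approach but not the team's actual code.
import torch
import segmentation_models_pytorch as smp

# One input channel (SAR backscatter), one output channel (iceberg vs. background).
model = smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=1, classes=1)
loss_fn = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(tiles, masks):
    """tiles: (B, 1, H, W) Sentinel-1 tiles; masks: (B, 1, H, W) manually drawn outlines."""
    optimizer.zero_grad()
    loss = loss_fn(model(tiles), masks)
    loss.backward()
    optimizer.step()
    return loss.item()

# Inference is a single forward pass, which is where the ~0.01-second mapping time comes from.
with torch.no_grad():
    outline = torch.sigmoid(model(torch.randn(1, 1, 256, 256))) > 0.5
```

The sketch only shows the shape of the training and inference loop; the reported 99 percent accuracy comes from comparing the network's outlines against ones drawn by hand.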

"Being able to map iceberg extent automatically with enhanced speed and accuracy will enable us to observe changes in iceberg area for several giant icebergs more easily and paves the way for an operational application," Dr. Braakmann-Folgmann said.

This article originally appeared on Engadget at https://www.engadget.com/a-neural-network-can-map-large-icebergs-10000-times-faster-than-humans-212855550.html?src=rss

ESA releases stunning first images from Euclid, its ‘dark universe detective’

The European Space Agency (ESA) has released the first images from its Euclid space telescope — a spacecraft peering 10 billion years into the past to create the largest 3D map of the universe yet. From the distinctive Horsehead Nebula (pictured above) to a “hidden” spiral galaxy that looks much like the Milky Way, Euclid is giving us the clearest look yet at both known and previously unseen objects speckling enormous swathes of the sky.

Euclid is investigating the “dark” universe, searching for signs of how dark energy and dark matter have influenced the evolution of the cosmos. It’ll observe one-third of the sky over the next six years, studying billions of galaxies with its 4-foot-wide telescope, visible-wavelength camera and near-infrared camera/spectrometer. Euclid launched in July 2023, and while its official science mission doesn't start until early 2024, it’s already blowing scientists away with its early observations.

Perseus cluster of galaxies as seen by the Euclid spacecraft
ESA

Euclid’s observation of the Perseus Cluster (above), which sits 240 million light-years away, is the most detailed ever, showing not just the 1,000 galaxies in the cluster itself, but roughly 100,000 others that lie farther away, according to ESA. The space telescope also caught a look at a Milky-Way-like spiral galaxy dubbed IC 342 (below), or the “Hidden Galaxy,” nicknamed as such because it sits behind the crowded plane of our own galaxy and is normally hard to see clearly.

Euclid spacecraft's view of the spiral galaxy IC 342
ESA

Euclid is able to observe huge portions of the sky, and it's the only telescope in operation able to image certain objects like globular clusters in their entirety in just one shot, according to ESA. Globular clusters like NGC 6397, pictured below, contain hundreds of thousands of gravity-bound stars. Euclid's observation of the cluster is unmatched in its level of detail, ESA says.

The spacecraft is able to see objects that have been too faint for others to observe. Its detailed observation of the well-known Horsehead Nebula, a stellar nursery in the Orion constellation, for example, could reveal young stars and planets that have previously gone undetected.

Euclid spacecraft's view of the Globular cluster NGC 6397
ESA
Euclid spacecraft's view of the irregular galaxy NGC 6822
ESA

Euclid also observed the dwarf galaxy NGC 6822 (pictured above), which sits just 1.6 million light-years away. This small, ancient galaxy could hold clues about how galaxies like our own came to be. It's only the beginning for Euclid, but it's already helping to unlock more information about the objects in our surrounding universe, both near and far.

“We have never seen astronomical images like this before, containing so much detail,” said René Laureijs, ESA’s Euclid Project Scientist, of the first batch of images. “They are even more beautiful and sharp than we could have hoped for, showing us many previously unseen features in well-known areas of the nearby universe.”

This article originally appeared on Engadget at https://www.engadget.com/esa-releases-stunning-first-images-from-euclid-its-dark-universe-detective-203948971.html?src=rss

NASA discovered that an asteroid named Dinky actually has its own moon

NASA’s Lucy spacecraft, launched in 2021 to explore the Trojan asteroids trapped near Jupiter, has made an interesting discovery. The spacecraft found an asteroid, nicknamed Dinky, that actually has a smaller asteroid orbiting it, as originally reported by Scientific American. That’s right: it’s basically an asteroid with its own little moon. It’s an ouroboros of cosmic curiosity.

The technical term here is a binary asteroid pair. Dinky, whose real name is Dinkinesh, was imaged by Lucy during a quick flyby, and that’s when the spacecraft spotted the smaller “moon” orbiting it.

“A binary was certainly a possibility,” Jessica Sunshine, a planetary scientist at the University of Maryland, told Scientific American. “But it was not expected, and it’s really cool.”

As a matter of fact, the flyby itself wasn’t supposed to find anything of note. It was simply a trial run for the team to hone its skills before investigating the aforementioned Trojan asteroids, which orbit the sun ahead of and behind Jupiter. The team wanted to make sure Lucy’s instruments could lock onto a space rock even when both objects were moving extremely fast. Guess what? It worked. Hal Levison, a planetary scientist at the Southwest Research Institute and principal investigator of the Lucy mission, said that the test was “amazingly successful.”

As for Dinky and its, uh, even dinkier satellite, NASA scientists still have a long way to go with their investigation, as only about one third of the relevant data has been beamed down to Earth. NASA has released a series of images showing Dinky and its pseudo-moon, but no other data as of yet.

Even just from these images, however, you can tell a lot about these two celestial bodies. There’s a visible equatorial ridge on the main body of Dinky, aka Dinkinesh, and a secondary ridgeline branching off from it. The parent asteroid is covered in craters, likely the result of numerous hits by other asteroids. Levison says that there are more images to come of the secondary satellite and that these pictures suggest the junior asteroid has some “interesting” stuff going on. He goes on to say that its shape is “really bizarre.”

Binary asteroid pairs are not rare; researchers have found that around 15 percent of near-Earth asteroids boast a cute lil orbital companion. NASA and affiliated researchers are still waiting for more data on the pair, including color images and spectroscopy that should shed more light on the two asteroids. Levison says “there’s a lot of cool stuff to come.”

In the meantime, Lucy will continue on its original mission to investigate those mysterious Trojan asteroids near Jupiter. Its next asteroid flyby is scheduled for 2025.

This article originally appeared on Engadget at https://www.engadget.com/nasa-discovered-that-an-asteroid-named-dinky-actually-has-its-own-moon-173028204.html?src=rss

A commercial spaceplane capable of orbital flight is ready for NASA testing

NASA will soon start testing what’s dubbed the world’s first commercial spaceplane capable of orbital flight, a vehicle that will eventually be used to resupply the International Space Station. The agency is set to take delivery of Sierra Space’s first Dream Chaser, which should provide an alternative to SpaceX spacecraft for trips to the ISS.

In the coming weeks, the spaceplane (which is currently at Sierra Space’s facility in Colorado) will make its way to a NASA test site in Ohio. The agency will put the vehicle, which has been named Tenacity, through its paces for between one and three months. According to Ars Technica, NASA will conduct vibration, acoustic and temperature tests to ensure Tenacity can survive the rigors of a rocket launch. NASA engineers, along with government and contractor teams, are running tests to make sure it's safe for Tenacity to approach the ISS.

All going well, Tenacity is scheduled to make its first trip to space in April on the second flight of United Launch Alliance's Vulcan rocket. The rocket has yet to make its own first test flight, which is currently expected to happen in December. However, given how things tend to go with spaceflight, delays are always a possibility on both fronts.

The spaceplane has foldable wings, which allow it to fit inside the rocket’s payload fairing. On its first mission, Tenacity is scheduled to stay at the ISS for 45 days. Afterward, it will return to Earth at the former space shuttle landing strip at the Kennedy Space Center in Florida rather than dropping into the ocean as many spacecraft do. Sierra says the spacecraft is capable of landing on any compatible commercial runway.

“Plunging into the ocean is awful,” Sierra Space CEO Tom Vice told Ars Technica. “Landing on a runway is really nice.” The company claims Dream Chaser can bring cargo back to Earth at less than 1.5 Gs, which is important for protecting sensitive payloads. The spaceplane will be capable of taking up to 12,000 pounds of cargo to the ISS and bringing around 4,000 pounds back to terra firma. Sierra plans for its Dream Chaser fleet to eventually be capable of taking humans to low-Earth orbit too.

As things stand, SpaceX is the only company that operates fully certified spacecraft for NASA missions. Boeing also won a contract to develop a capsule for NASA back in 2014, but Starliner has yet to transport any astronauts to the ISS. Sierra Nevada (from which Sierra Space was spun out in 2021) previously competed with those businesses for NASA commercial crew program contracts, but it lost out. However, after the company retooled Dream Chaser to focus on cargo operations for the time being, NASA chose Sierra to join its stable of cargo transportation providers in 2016.

Dream Chaser's first trip to the ISS has been a long time coming. It was originally planned for 2019 but the project was beset by delays. COVID-19 compounded those, as it constricted supply chains for key parts that Sierra Space needed before the company brought more of its construction work in house. The company is now aiming to have a second, human-rated version of Dream Chaser ready for the 2026 timeframe.

NASA has long been interested in using spaceplanes, dating back to the agency's early days, and it seems closer than ever to being able to use such vehicles. Virgin Galactic (which just carried out its fifth commercial flight on Thursday) uses spaceplanes for tourist and research flights, but its vehicle is only capable of suborbital operations. With Dream Chaser, Sierra has loftier goals.

This article originally appeared on Engadget at https://www.engadget.com/a-commercial-spaceplane-capable-of-orbital-flight-is-ready-for-nasa-testing-185542776.html?src=rss

NASA is launching a free streaming service with live shows and original series

NASA has announced a new streaming service called NASA+ that’s set to hit most major platforms next week. It’ll be completely free, with no subscription requirements, and you won’t be forced to sit through ads. NASA+ will be available starting November 8.

The space agency previously teased the release of its upcoming streaming service over the summer as it more broadly revamped its digital presence. At the time, it said NASA+ would be available on the NASA iOS and Android apps, and streaming players including Roku, Apple TV and Fire TV. You’ll also be able to watch it on the web. 

There aren’t too many details out just yet about the content itself, but NASA says its family-friendly programming “embeds you into our missions” with live coverage and original video series. NASA already has its own broadcast network, NASA TV, and the new streaming service seems to be an expansion of that. But we’ll know more when it officially launches next Wednesday.

This article originally appeared on Engadget at https://www.engadget.com/nasa-is-launching-a-free-streaming-service-with-live-shows-and-original-series-150128180.html?src=rss

HTC is sending VR headsets to the ISS to help cheer up lonely astronauts

Whether it's for a tour of the International Space Station (ISS) or a battle with Darth Vader, most VR enthusiasts are looking to get off this planet and into the great beyond. HTC, however, is sending VR headsets to the ISS to give lonely astronauts something to do besides staring into the star-riddled abyss.

The company partnered with XRHealth and engineering firm Nord Space to send HTC VIVE Focus 3 headsets to the ISS as part of an ongoing effort to improve the mental health of astronauts during long assignments on the station. The headsets are pre-loaded with software specifically designed to meet the mental health needs of literal space cadets, so they aren’t just for playing Walkabout Mini Golf during off hours (though that’s not a bad idea).

The headsets feature new camera tracking tech that was specially developed and adapted to work in microgravity, including eye-tracking sensors to better assess the mental health status of astronauts. These sensors are coupled with software intended to “maintain mental health while in orbit.” The headsets have also been optimized to stabilize alignment and, as such, reduce the chances of motion sickness. Can you imagine free-floating vomit in space?

Danish astronaut Andreas Mogensen will be the first ISS crew member to use the VR headset for preventative mental health care during his six-month mission as commander of the space station. HTC notes that astronauts are often isolated for “months and years at a time” while stationed in space. 

This leads to the question of internet connectivity. After all, Mogensen and his fellow astronauts would likely want to connect with family and friends while wearing their brand-new VR headsets. Playing Population: One by yourself is not exactly satisfying.

The internet used to be really slow on the ISS, with speeds resembling a dial-up connection to AOL in 1995. However, recent upgrades have boosted speeds on the station to around 600 megabits per second (Mbps). For comparison, the average download speed in the US is about 135 Mbps, so we’d actually be the bottleneck in this scenario, not the astronauts. The ISS connection should allow for even the most data-hungry VR applications.

These souped-up Vive Focus 3 headsets are heading up to the space station shortly, though there’s no arrival date yet. It’s worth noting that it took some massive feats of engineering to even get these headsets to work in microgravity, as so many aspects of a VR headset depend on normal Earth gravity.

This article originally appeared on Engadget at https://www.engadget.com/htc-is-sending-vr-headsets-to-the-iss-to-help-cheer-up-lonely-astronauts-120019661.html?src=rss

NYU is developing 3D streaming video tech with the help of its dance department

NYU is launching a project to spur the development of immersive 3D video for dance education — and perhaps other areas. Boosted by a $1.2 million four-year grant from the National Science Foundation, the undertaking will try to make Point-Cloud Video (PCV) tech viable for streaming.

A point cloud is a set of data points in a 3D space representing the surface of a subject or environment. NYU says Point-Cloud Video, which strings together point-cloud frames into a moving scene, has been under development for the last decade. However, it’s typically too data-intensive for practical purposes, requiring bandwidth far beyond the capabilities of today’s connected devices.
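A rough back-of-the-envelope estimate shows why. With assumed figures (these are our own illustrative numbers, not NYU's), a single dense capture already streams several gigabits per second uncompressed:

```python
# Rough, illustrative estimate of raw (uncompressed) point-cloud video bitrate.
# The point count, precision and frame rate are assumptions, not NYU's figures.
points_per_frame = 1_000_000              # a dense capture of one performer
bytes_per_point = 3 * 4 + 3 * 1           # x, y, z as 32-bit floats plus 8-bit RGB
frames_per_second = 30

bits_per_second = points_per_frame * bytes_per_point * 8 * frames_per_second
print(f"~{bits_per_second / 1e9:.1f} Gbit/s uncompressed")   # about 3.6 Gbit/s
```

That is far more than a typical home connection can carry, which is why compression, smarter delivery and more efficient decoding are the focus of the project.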

The researchers plan to address those obstacles by “reducing bandwidth consumption and delivery latency, and increasing power consumption efficiency so that PCVs can be streamed far more easily,” according to an NYU Engineering blog post published Monday. Project leader Yong Liu, an NYU electrical and computer engineering professor, believes modern breakthroughs make that possible. “With recent advances in the key enabling technologies, we are now at the verge of completing the puzzle of teleporting holograms of real-world humans, creatures and objects through the global Internet,” Liu wrote on Monday. 

ChatGPT maker OpenAI launched a model last year that can create 3D point clouds from text prompts. Engadget reached out to the project leader to clarify whether it or other generative AI tools are part of the process, and we’ll update this article if we hear back.

The team will test the technology with the NYU Tisch School of the Arts and the Mark Morris Dance Group’s Dance Center. Dancers from both organizations will perform on a volumetric capture stage. The team will stream their movements live and on-demand, offering educational content for aspiring dancers looking to study from high-level performers — and allowing engineers to test and tweak their PCV technology.

The researchers envision the work opening doors to more advanced VR and mixed reality streaming content. “The success of the proposed research will contribute towards wide deployment of high quality and robust PCV streaming systems that facilitate immersive augmented, virtual and mixed reality experience and create new opportunities in many domains, including education, business, healthcare and entertainment,” Liu said.

“Point-Cloud Video holds tremendous potential to transform a range of industries, and I’m excited that the research team at NYU Tandon prioritized dance education to reap those benefits early,” said Jelena Kovačević, NYU Tandon Dean.

This article originally appeared on Engadget at https://www.engadget.com/nyu-is-developing-3d-streaming-video-tech-with-the-help-of-its-dance-department-211947160.html?src=rss

What the evolution of our own brains can tell us about the future of AI

The explosive growth in artificial intelligence in recent years — crowned with the meteoric rise of generative AI chatbots like ChatGPT — has seen the technology take on many tasks that, formerly, only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right. 

In this week's Hitting the Books excerpt from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett examines this puzzling gap in computer competency by exploring the development of the organic machine AIs are modeled after: the human brain. 

Focusing on the five evolutionary "breakthroughs," found amid myriad genetic dead ends and unsuccessful offshoots, that led our species to our modern minds, Bennett also shows how the same advancements that took humanity eons to evolve can help guide the development of tomorrow's AI technologies. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can't quite get a grasp on the vagaries of human speech.

A Brief History of Intelligence by Max Bennett (book cover)
HarperCollins

Excerpted from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett. Published by Mariner Books. Copyright © 2023 by Max Bennett. All rights reserved.


Words Without Inner Worlds

GPT-3 is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT-3 can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases:

  • One plus one equals _____

  • Roses are red, violets are _____

You’ve seen similar sentences endless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT-3 impressive, however, is not that it just predicts the next word of a sequence it has seen a million times — that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT-3 can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can _____.

Could you predict that the next word was do? I’m guessing you could, even though you had never seen that exact sentence before. The point is that both GPT-3 and the neocortical areas for language seem to be engaging in prediction. Both can generalize past experiences, apply them to new sentences, and guess what comes next.
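For the curious, the training objective Bennett describes (predict the next word, then nudge the weights toward the right answer) looks roughly like the sketch below. This is our own toy illustration in Python, with a small recurrent network standing in for GPT-3's vastly larger transformer; it is not OpenAI's code:

```python
# Toy illustration of next-word prediction training, not OpenAI's code.
# A small recurrent network stands in for GPT-3's much larger transformer.
import torch
import torch.nn as nn

vocab_size, embed_dim = 50_000, 256
embed = nn.Embedding(vocab_size, embed_dim)
rnn = nn.LSTM(embed_dim, embed_dim, batch_first=True)
head = nn.Linear(embed_dim, vocab_size)

params = list(embed.parameters()) + list(rnn.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(token_ids):
    """token_ids: (batch, seq_len) integer word IDs from the training text."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]   # predict each next word
    hidden, _ = rnn(embed(inputs))
    logits = head(hidden)                                   # a score for every word in the vocabulary
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()                                         # nudge the weights toward the right answer
    optimizer.step()
    return loss.item()
```

Run over an astronomical number of sentences, a loop like this is what lets the model generalize to sequences it has never seen.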

GPT-3 and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if it is given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions:

  • If 3x + 1 = 3, then x equals _____

  • I am in my windowless basement, and I look toward the sky, and I see _____

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and _____

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally _____

Here something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question. In the other questions, you probably, even for only a split second, paused to visualize yourself in a basement looking upward, and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words.

We have, of course, already explored this phenomenon—it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world.

I gave the same four questions to GPT-3; here are its responses (responses of GPT-3 are bolded and underlined):

  • If 3x + 1 = 3, then x equals 1

  • I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy.

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and caught it. It was a lot of fun!

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean.

All four of these responses demonstrate that GPT-3, as of June 2022, lacked an understanding of even simple aspects of how the world works. If 3x + 1 = 3, then x equals 2/3, not 1. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you’d passed through Chicago one hour ago, you would not yet be at the coast. GPT-3’s answers lacked common sense.

What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that’s the point: Even a model trained on the entire corpus of the internet, running up millions of dollars in server costs — requiring acres of computers on some unknown server farm — still struggles to answer common sense questions, those presumably answerable by even a middle-school human.

Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question:

Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be?

1) Librarian

2) Construction worker

If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates—did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if 95 percent of librarians are meek and only 5 percent of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian.
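Bennett's arithmetic is easy to check. Using his illustrative figures (a 100-to-1 ratio of construction workers to librarians, with 95 percent of librarians and 5 percent of construction workers being meek), the base rate still wins:

```python
# Checking the base-rate argument with the book's illustrative numbers,
# not real labor statistics.
librarians, construction_workers = 1, 100                  # relative population sizes
meek_librarians = 0.95 * librarians                        # 0.95
meek_construction_workers = 0.05 * construction_workers    # 5.0

p_construction_given_meek = meek_construction_workers / (
    meek_construction_workers + meek_librarians
)
print(round(p_construction_given_meek, 2))   # ~0.84: a meek Tom is still most likely a construction worker
```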

The idea that the neocortex works by rendering an inner simulation and that this is how humans tend to reason about things explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can’t help but render an imagined scene of the robbery, and you can’t help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating — we fill in characters and scenes, often missing the true causal and statistical relationships between things.

It is with questions that require simulation where language in the human brain diverges from language in GPT-3. Math is a great example of this. The foundation of math begins with declarative labeling. You hold up two fingers or two stones or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one.

Humans don’t learn math the way GPT-3 learns math. Indeed, humans don’t learn language the way GPT-3 learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child’s already present inner simulation.

A human brain, but not GPT-3, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four.

You don’t even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world. But it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can’t see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast and accurate set of physical rules and attributes of the actual world.

To be fair, GPT-3 can, in fact, answer many math questions correctly. GPT-3 will be able to answer 1 + 1 =___ because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT-3 would. But when you think about why 1 + 1 = 2, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know that 1 + 1 = 2 in a way that GPT-3 does not.

The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both these systems is experiments pitting one system against the other. Consider the cognitive reflection test, designed to evaluate someone’s ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it):

Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly:

Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Here again, if you are like most people, your instinct is to say “One hundred minutes,” but if you think about it, you would realize the answer is still five minutes.
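Both answers fall out of a line or two of arithmetic once you slow down and set them up; here is a quick check using nothing beyond the numbers in the questions:

```python
# Bat and ball: the ball costs x and the bat costs x + 1.00,
# so 2x + 1.00 = 1.10 and x = 0.05.
ball = (1.10 - 1.00) / 2
print(f"The ball costs ${ball:.2f}")   # $0.05, not $0.10

# Widgets: 5 machines make 5 widgets in 5 minutes, so each machine makes
# one widget in 5 minutes; 100 machines make 100 widgets in those same 5 minutes.
minutes_per_widget_per_machine = 5
machines, widgets = 100, 100
print(minutes_per_widget_per_machine * widgets / machines, "minutes")   # 5.0 minutes
```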

And indeed, as of December 2022, GPT-3 got both of these questions wrong in exactly the same way people do: it answered ten cents to the first question and one hundred minutes to the second.

The point is that human brains have an automatic system for predicting words (one probably similar, at least in principle, to models like GPT-3) and an inner simulation. Much of what makes human language powerful is not the syntax of it, but its ability to give us the necessary information to render a simulation about it and, crucially, to use these sequences of words to render the same inner simulation as other humans around us.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-a-brief-history-of-intelligence-max-bennett-mariner-books-143058118.html?src=rss

NASA is launching a rocket on Sunday to study a 20,000-year-old supernova

A sounding rocket toting a special imaging and spectroscopy instrument will take a brief trip to space Sunday night to try to capture as much data as it can on a long-admired supernova remnant in the Cygnus constellation. Its target, a massive cloud of dust and gas known as the Cygnus Loop or the Veil Nebula, was created by the explosive death of a star an estimated 20,000 years ago — and it’s still expanding.

NASA plans to launch the mission at 11:35 PM ET on Sunday, October 29, from the White Sands Missile Range in New Mexico. The Integral Field Ultraviolet Spectroscopic Experiment, or INFUSE, will observe the Cygnus Loop for only a few minutes, capturing far-ultraviolet light from gases as hot as 90,000 to 540,000 degrees Fahrenheit. It’s expected to fly to an altitude of about 150 miles before parachuting back to Earth.

The Cygnus Loop sits about 2,600 light-years away, and was formed by the collapse of a star thought to be 20 times the size of our sun. Since the aftermath of the event is still playing out, with the cloud currently expanding at a rate of 930,000 miles per hour, it’s a good candidate for studying how supernovae affect the formation of new star systems. “Supernovae like the one that created the Cygnus Loop have a huge impact on how galaxies form,” said Brian Fleming, principal investigator for the INFUSE mission.

“INFUSE will observe how the supernova dumps energy into the Milky Way by catching light given off just as the blast wave crashes into pockets of cold gas floating around the galaxy,” Fleming said. Once INFUSE is back on the ground and its data has been collected, the team plans to fix it up and eventually launch it again.

This article originally appeared on Engadget at https://www.engadget.com/nasa-is-launching-a-rocket-on-sunday-to-study-a-20000-year-old-supernova-193009477.html?src=rss