This dazzling NASA image shows the biggest super star cluster in our galaxy

The James Webb Space Telescope continues to capture images of space that are clearer and more detailed than what we've seen before. One of the latest images it has taken is of a "super star cluster" called Westerlund 1, and it shows an abundant collection of heavenly bodies, shining brightly like gemstones. Super star clusters are young clusters of stars thousands of times bigger than our sun, all packed into a small area. Our galaxy produced many such clusters billions of years ago, but it doesn't churn out as many stars anymore, and only a few super star clusters still exist in the Milky Way. 

Westerlund 1 is the biggest remaining super star cluster in our galaxy, and it's also the closest to our planet. It's located 12,000 light-years away and packs stars with a combined mass of between 50,000 and 100,000 times that of our sun into a region just six light-years across. Those stars include yellow hypergiants that are around a million times brighter than our sun. Since the stars populating the cluster have comparatively short lives, scientists believe it's only around 3.5 to 5 million years old, which is pretty young on the cosmic scale. As such, it's a valuable source of data that could help us better understand how massive stars form and eventually die. We won't be around to see it, but the cluster is expected to produce 1,500 supernovae in less than 40 million years. 

Astronomers captured an image of the super star cluster as part of an ongoing survey of Westerlund 1 and another cluster called Westerlund 2 to study star formation and evolution. To take the image, they used Webb's Near-InfraRed Camera (NIRCam), which was also recently used to capture a gravitationally lensed supernova that could help shed light on how fast our universe is expanding. 

This article originally appeared on Engadget at https://www.engadget.com/science/space/this-dazzling-nasa-image-shows-the-biggest-super-star-cluster-in-our-galaxy-120053279.html?src=rss

A pair of DeepMind researchers have won the 2024 Nobel Prize in Chemistry

A day after recognizing former Google vice president and engineering fellow Geoffrey Hinton for his contributions to the field of physics, the Royal Swedish Academy of Sciences has honored a pair of current Google employees. On Wednesday, DeepMind CEO Demis Hassabis and senior research scientist John Jumper won half of the 2024 Nobel Prize in Chemistry, with the other half going to David Baker, a professor at the University of Washington.

If there’s a theme to the 2024 Nobel Prize in Chemistry, it’s proteins. Baker, Hassabis and Jumper all advanced our understanding of those essential building blocks of life that are responsible for functions both inside and outside the human body. The Nobel Committee cited Baker’s seminal work in computational protein design. Since 2003, Baker and his research team have been using amino acids and computers to design entirely new proteins. In turn, those designer proteins have contributed to the creation of pharmaceuticals, vaccines, nanomaterials and more.

As for Hassabis and Jumper, their work, and that of the entire DeepMind team, on AlphaFold 2 led to a generational breakthrough. Since the 1970s, scientists have been trying to find a way to predict a protein’s final, folded structure based solely on the amino acids that form its constituent parts. With AlphaFold 2, DeepMind created an AI algorithm that could do just that. Since 2020, the software has been able to successfully predict the structure of 200 million proteins, or nearly every one known to researchers.

“One of the discoveries being recognized this year concerns the construction of spectacular proteins. The other is about fulfilling a 50-year-old dream: predicting protein structures from their amino acid sequences,” said Heiner Linke, chair of the Nobel Committee for Chemistry. “Both of these discoveries open up vast possibilities.”

More broadly, the 2024 Nobel Prizes highlight the growing importance of artificial intelligence in modern science. Moving forward, it's safe to say advanced algorithms will be essential to future scientific discoveries and breakthroughs. 

This article originally appeared on Engadget at https://www.engadget.com/ai/a-pair-of-deepmind-researchers-have-won-the-2024-nobel-prize-in-chemistry-145056306.html?src=rss

The Morning After: Boring Company’s Vegas Loop plagued by lost drivers, trespassers and skateboarders

Elon Musk’s Boring Company pitched that its Vegas Loop, underground tunnels built below Las Vegas, would reduce gridlock in some of the busiest parts of the city, offering a new transport solution that isn’t a monorail. People are transported by ordinary Tesla vehicles in tunnels and terminals that are often difficult to get to. (At least, that was my experience earlier this year.)

It hasn’t been the transport game changer the company promised, though. A report from Fortune elaborated on what’s actually happening in those tunnels, saying there have been at least 67 trespassing reports since 2022 and 22 instances of other vehicles following Teslas into the tunnels and stations.

Boring’s monthly reports to the Las Vegas Convention and Visitors Authority also showed several instances of “property damage, theft, technical issues or injuries, near-misses and trespassing or intrusions.” Some curated highlights include a skateboarder who snuck into the tunnels through a passenger pickup station and two people spotted sleeping in one of the tunnel stations.

And yet (and yet!) county commissioners approved a plan last May to expand the tunnels to 65 miles and add 69 passenger stations.

— Mat Smith

The biggest stories you might have missed

The best deals from Amazon Prime Day 2024

SpaceX Crew-8 astronauts are leaving the ISS on October 13

The best projector for 2024


If you were intrigued by Meta’s continued VR experiments but put off by the price of the Quest 3, then the Quest 3S may be for you. It’s a slightly bulkier, slightly less sharp version of Meta’s last standalone VR headset, but starting at $300, it’s much less than the Quest 3’s $500 launch price. There are compromises with display resolution and lenses, but it packs the same powerful processor as the Quest 3, so it should run games and apps just as quickly. Expect our full review soon, but so far we’re impressed.

Continue reading.


The DJI Neo may be an inexpensive, beginner-friendly drone, but it still has powerful features, like subject tracking and quick shots. Surprisingly, this cheap $200 drone is arguably worth considering. Just be prepared for the noise it makes.

Continue reading.

Two scientists have been awarded the Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” John Hopfield, an emeritus professor at Princeton University, devised an associative memory that can store and reconstruct images and other patterns in data. Geoffrey Hinton, dubbed the Godfather of AI, pioneered a way to autonomously find properties in data, leading to the ability to identify picture elements.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/general/the-morning-after-boring-companys-vegas-loop-plagued-by-lost-drivers-trespassers-and-skateboarders-111742611.html?src=rss

SpaceX Crew-8 astronauts are leaving the ISS on October 13

NASA and SpaceX are looking to undock the Crew-8 mission vehicle from the ISS on October 13 at 3:05AM Eastern time. Crew-8's astronauts were originally scheduled to start making their way back to Earth on October 7, but since their spacecraft is going to splash down off the coast of Florida, NASA and SpaceX decided to push the departure back "due to weather conditions and potential impacts from Hurricane Milton." They will hold another briefing on the situation on October 11 and could delay the mission's return further for the safety of everyone involved. 

The Crew-8 mission launched on March 4 this year with four members: NASA astronauts Matthew Dominick, Michael Barratt and Jeanette Epps, as well as Roscosmos cosmonaut Alexander Grebenkin. They conducted several experiments while on the International Space Station, such as sequencing the DNA of antibiotic-resistant organisms they found on the ISS to look into how they adapted to the conditions out there. They also studied human brain organoids created with stem cells to look into Parkinson's disease and into how extended spaceflight affects the human brain. They printed human tissues, studied how microgravity affects drug manufacturing and worked with an Astrobee robot. NASA will most likely cover their flight back on a livestream.

While Crew-8 has yet to leave the space station, SpaceX's Crew-9 mission astronauts have been on board since September 29. That mission flew with only two crew members because it will be coming back home with NASA astronauts Butch Wilmore and Suni Williams, who originally flew to the ISS on the Boeing Starliner. NASA said Wilmore and Williams have already tried on and tested their SpaceX Intravehicular Activity spacesuits and have completed all the work required to fly back to Earth with the Crew-9 vehicle. 

This article originally appeared on Engadget at https://www.engadget.com/science/space/spacex-crew-8-astronauts-are-leaving-the-iss-on-october-13-133027531.html?src=rss

Machine learning pioneers, including the ‘Godfather of AI,’ are awarded the Nobel Prize in Physics

Two scientists have been awarded the Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” John Hopfield, an emeritus professor at Princeton University, devised an associative memory that's able to store and reconstruct images and other types of patterns in data. Geoffrey Hinton, who has been dubbed the "Godfather of AI," pioneered a way to autonomously find properties in data, leading to the ability to identify certain elements in pictures.

"This year’s physics laureates’ breakthroughs stand on the foundations of physical science. They have showed a completely new way for us to use computers to aid and to guide us to tackle many of the challenges our society face," the committee wrote on X. "Thanks to their work humanity now has a new item in its toolbox, which we can choose to use for good purposes. Machine learning based on artificial neural networks is currently revolutionizing science, engineering and daily life."

However, Hinton has grown concerned about machine learning and its potential impact on society. He was part of Google's deep-learning artificial intelligence team (Google Brain, which merged with DeepMind last year) for many years before resigning in May 2023 so he could "freely speak out about the risks of AI." At the time, he expressed concern about generative AI spurring a tsunami of misinformation and having the potential to wipe out jobs, along with the possibility of fully autonomous weapons emerging.

Although Hinton acknowledged that machine learning and AI will likely improve health care, he warned reporters that "it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us," according to The New York Times. That said, Hinton, a Turing Award winner and professor of computer science at the University of Toronto, was “flabbergasted” to learn that he had become a Nobel Prize laureate.

This article originally appeared on Engadget at https://www.engadget.com/ai/machine-learning-pioneers-including-the-godfather-of-ai-are-awarded-the-nobel-prize-in-physics-132124417.html?src=rss

NASA’s latest supernova image could tell us how fast the universe is expanding

The James Webb Space Telescope's Near-Infrared Camera (NIRCam) captured a curious sight in a region 3.6 billion light-years away from Earth: A supernova that appears three times, at three different periods during its explosion, in one image. More importantly, this image could help scientists better understand how fast the universe is expanding. 

A team of researchers chose to observe the galaxy cluster PLCK G165.7+67.0, also known as G165, for its high star formation rate, which also leads to a higher supernova rate. One image, which you can see above, captures what looks to be a streak of light with three distinct dots that appear brighter than the rest of it. As Dr. Brenda Frye from the University of Arizona explained, those dots correspond to an exploding white dwarf star. The supernova is gravitationally lensed: a cluster of galaxies between us and the star acts as a lens, bending its light into multiple images. Frye likened it to a trifold mirror that shows a different image of the person sitting in front of it. It is also the most distant Type Ia supernova, the kind that occurs in a binary star system, observed to date.

Because of that cluster of galaxies in front of the supernova, light from the explosion travelled three different paths, each with a different length. That means the Webb telescope was able to capture different periods of its explosion in one image: Early into the event, mid-way through and near the end of it. Trifold supernova images are special, Frye said, because the "time delays, supernova distance, and gravitational lensing properties yield a value for the Hubble constant or H0 (pronounced H-naught)." 

NASA describes the Hubble constant as the number that characterizes the present-day expansion rate of the universe, which, in turn, could tell us more about the universe's age and history. Scientists have yet to agree on its exact value, and the team is hoping that this supernova image could provide some clarity. "The supernova was named SN H0pe since it gives astronomers hope to better understand the universe's changing expansion rate," Frye said. 
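
To give a sense of how those time delays translate into a Hubble constant, here's a toy sketch (our illustration, not the team's actual analysis): for a fixed lens model, the predicted delay between the lensed images scales as 1/H0, so comparing a model-predicted delay computed at a fiducial H0 with the measured delay rescales the constant. The numbers below are hypothetical placeholders.

```python
# Toy illustration of time-delay cosmography; the numbers below are made up.
# For a fixed lens model and redshifts, the predicted delay between lensed
# images is proportional to the time-delay distance, which scales as 1/H0.
# So: delay_observed / delay_predicted(H0_fiducial) = H0_fiducial / H0_inferred.

H0_FIDUCIAL = 70.0            # km/s/Mpc assumed when the lens model was built
delay_predicted_days = 100.0  # model-predicted delay between two images (hypothetical)
delay_observed_days = 93.0    # measured delay between the same images (hypothetical)

# Rearranging the proportionality gives the inferred Hubble constant.
h0_inferred = H0_FIDUCIAL * delay_predicted_days / delay_observed_days
print(f"Inferred H0 = {h0_inferred:.1f} km/s/Mpc")  # ~75.3 with these inputs
```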

Wendy Freedman from the University of Chicago led a team in 2001 that found a value of 72 kilometers per second per megaparsec. Other teams put the Hubble constant between 69.8 and 74 kilometers per second per megaparsec. Meanwhile, this team reported a value of 75.4, plus 8.1 or minus 5.5. "Our team’s results are impactful: The Hubble constant value matches other measurements in the local universe, and is somewhat in tension with values obtained when the universe was young," Frye said. The supernova and the Hubble constant value derived from it need to be explored further, however, and the team expects future observations to "improve on the uncertainties" for a more accurate computation. 
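
For a rough sense of what those numbers mean in practice, Hubble's law says a galaxy's recession velocity is roughly v = H0 × d, and 1/H0 (the "Hubble time") sets the approximate timescale of the expansion. The back-of-the-envelope sketch below plugs in the values quoted above; it is our illustration, not part of the study.

```python
# Back-of-the-envelope Hubble's law arithmetic using the values quoted above.
KM_PER_MPC = 3.0857e19      # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def recession_velocity_km_s(h0: float, distance_mpc: float) -> float:
    """v = H0 * d: approximate recession velocity of a galaxy at distance d."""
    return h0 * distance_mpc

def hubble_time_gyr(h0: float) -> float:
    """1 / H0 in billions of years, a rough timescale for the expansion."""
    seconds = KM_PER_MPC / h0
    return seconds / SECONDS_PER_YEAR / 1e9

for h0 in (69.8, 72.0, 75.4):  # km/s/Mpc, values mentioned in the article
    print(f"H0 = {h0:5.1f}: a galaxy 100 Mpc away recedes at "
          f"{recession_velocity_km_s(h0, 100):,.0f} km/s; "
          f"Hubble time = {hubble_time_gyr(h0):.1f} billion years")
```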

This article originally appeared on Engadget at https://www.engadget.com/science/space/nasas-latest-supernova-image-could-tell-us-how-fast-the-universe-is-expanding-130005672.html?src=rss

Advanced AI chatbots are less likely to admit they don’t have all the answers

Researchers have spotted an apparent downside of smarter chatbots. Although AI models predictably become more accurate as they advance, they’re also more likely to (wrongly) answer questions beyond their capabilities rather than saying, “I don’t know.” And the humans prompting them are more likely to take their confident hallucinations at face value, creating a trickle-down effect of confident misinformation.

“They are answering almost everything these days,” José Hernández-Orallo, professor at the Universitat Politecnica de Valencia, Spain, told Nature. “And that means more correct, but also more incorrect.” Hernández-Orallo, the project lead, worked on the study with his colleagues at the Valencian Research Institute for Artificial Intelligence in Spain.

The team studied three LLM families, including OpenAI’s GPT series, Meta’s LLaMA and the open-source BLOOM. They tested early versions of each model and moved to larger, more advanced ones — but not today’s most advanced. For example, the team began with OpenAI’s relatively primitive GPT-3 ada model and tested iterations leading up to GPT-4, which arrived in March 2023. The four-month-old GPT-4o wasn’t included in the study, nor was the newer o1-preview. I’d be curious if the trend still holds with the latest models.

The researchers tested each model on thousands of questions about “arithmetic, anagrams, geography and science.” They also quizzed the AI models on their ability to transform information, such as alphabetizing a list. The team ranked their prompts by perceived difficulty.

The data showed that the chatbots’ portion of wrong answers (instead of avoiding questions altogether) rose as the models grew. So, the AI is a bit like a professor who, as he masters more subjects, increasingly believes he has the golden answers on all of them.
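
To make that metric concrete, here's a hypothetical sketch (made-up counts, not the researchers' code or data) of how one could tally each model's responses into correct, incorrect and avoided buckets and compare generations:

```python
# Hypothetical tally of response outcomes; the counts below are invented.
from collections import Counter

def answer_breakdown(labels: list[str]) -> dict[str, float]:
    """Return the fraction of correct, incorrect and avoided responses."""
    counts = Counter(labels)
    total = len(labels)
    return {k: counts.get(k, 0) / total for k in ("correct", "incorrect", "avoided")}

older_model = ["correct"] * 40 + ["incorrect"] * 20 + ["avoided"] * 40
newer_model = ["correct"] * 60 + ["incorrect"] * 35 + ["avoided"] * 5

print("older:", answer_breakdown(older_model))  # more avoidance, fewer wrong answers
print("newer:", answer_breakdown(newer_model))  # more correct, but also more wrong
```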

Further complicating things are the humans prompting the chatbots and reading their answers. The researchers tasked volunteers with rating the accuracy of the AI bots’ answers and found that they “incorrectly classified inaccurate answers as being accurate surprisingly often.” The share of wrong answers the volunteers mistakenly judged to be correct typically fell between 10 and 40 percent.

“Humans are not able to supervise these models,” concluded Hernández-Orallo.

The research team recommends AI developers begin boosting performance for easy questions and programming the chatbots to refuse to answer complex questions. “We need humans to understand: ‘I can use it in this area, and I shouldn’t use it in that area,’” Hernández-Orallo told Nature.

It’s a well-intended suggestion that could make sense in an ideal world. But fat chance AI companies oblige. Chatbots that more often say “I don’t know” would likely be perceived as less advanced or valuable, leading to less use — and less money for the companies making and selling them. So, instead, we get fine-print warnings that “ChatGPT can make mistakes” and “Gemini may display inaccurate info.”

That leaves it up to us to avoid believing and spreading hallucinated misinformation that could hurt ourselves or others. For accuracy, fact-check your damn chatbot’s answers, for crying out loud.

You can read the team’s full study in Nature.

This article originally appeared on Engadget at https://www.engadget.com/ai/advanced-ai-chatbots-are-less-likely-to-admit-they-dont-have-all-the-answers-172012958.html?src=rss

CTO Mira Murati is the latest leader to leave OpenAI

Mira Murati has departed OpenAI, where she had been the chief technology officer since 2018. In a note shared with the company and then posted publicly on X, Murati said that she is exiting "because I want to create the time and space to do my own exploration."

Murati gained additional visibility as a public face of the AI company when she briefly assumed CEO duties in November 2023 after the board of directors fired Sam Altman. Altman returned to the helm and Murati resumed work as CTO. However, her departure follows two other notable exits. Last month, president and co-founder Greg Brockman and co-founder John Schulman both announced that they would be stepping away from OpenAI. Brockman is taking a sabbatical and Schulman is moving to rival AI firm Anthropic.

Here is the full text of Murati's statement:

Hi all,

I have something to share with you. After much reflection, I have made the difficult decision to leave OpenAI.

My six-and-a-half years with the OpenAI team have been an extraordinary privilege. While I’ll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years.

There’s never an ideal time to step away from a place one cherishes, yet this moment feels right. Our recent releases of speech-to-speech and OpenAI o1 mark the beginning of a new era in interaction and intelligence - achievements made possible by your ingenuity and craftsmanship. We didn’t merely build smarter models, we fundamentally changed how AI systems learn and reason through complex problems.

We brought safety research from the theoretical realm into practical applications, creating models that are more robust, aligned, and steerable than ever before. Our work has made cutting-edge AI research intuitive and accessible, developing technology that adapts and evolves based on everyone’s input. This success is a testament to our outstanding teamwork, and it is because of your brilliance, your dedication, and your commitment that OpenAI stands at the pinnacle of AI innovation.

I’m stepping away because I want to create the time and space to do my own exploration. For now, my primary focus is doing everything in my power to ensure a smooth transition, maintaining the momentum we’ve built.

I will forever be grateful for the opportunity to build and work alongside this remarkable team. Together, we’ve pushed the boundaries of scientific understanding in our quest to improve human well-being. While I may no longer be in the trenches with you, I will still be rooting for you all.

With deep gratitude for the friendships forged, the triumphs achieved, and most importantly, the challenges overcome together.

Mira

In a post on X, Altman revealed that the company's Chief Research Officer, Bob McGrew, and VP of Research, Barret Zoph, are also leaving the company. He said they made their decisions "independently of each other and amicably," but that it made sense to "do this all at once" for a smooth handover. OpenAI's leadership will go through some changes as a result, with Mark Chen, the Head of Frontiers Research, being named Research SVP. Research Scientist Josh Achiam has been named Head of Mission Alignment, while Matt Knight, the Head of Security, is now the Chief Information Security Officer. 

Update, September 26, 2024, 7:03AM ET: This post has been updated to include information about the other staffers leaving OpenAI.

This article originally appeared on Engadget at https://www.engadget.com/ai/cto-mira-murati-is-the-latest-leader-to-leave-openai-200230104.html?src=rss


The Polaris Dawn crew is back on Earth after a historic mission

The Polaris Dawn crew safely returned to Earth early Sunday morning, bringing the historic privately funded mission to a close. The Dragon capsule carrying the mission’s four astronauts — Jared Isaacman, Scott “Kidd” Poteet, Sarah Gillis and Anna Menon — splashed down in the Gulf of Mexico around 3:30AM ET.

On Thursday, Isaacman and Gillis completed the first commercial spacewalk, each taking a turn to exit the craft and perform a series of spacesuit mobility tests. And with this mission, Gillis and Menon have now traveled farther from Earth than any women before. Polaris reached a peak altitude of about 870 miles, which is also the farthest any humans have ventured since the Apollo program. 

The crew also performed a number of science experiments, and was able to complete a 40-minute video call to Earth and send files in a major test for Starlink’s space communications capabilities. That included a video recorded during the mission of Gillis, an engineer and violinist, playing the violin in space. “A new era of commercial spaceflight dawns, with much more to come,” Polaris posted on X Sunday morning.

This article originally appeared on Engadget at https://www.engadget.com/science/space/the-polaris-dawn-crew-is-back-on-earth-after-a-historic-mission-142028997.html?src=rss