SpaceX Crew-8 astronauts are leaving the ISS on October 13

NASA and SpaceX are looking to undock the Crew-8 mission vehicle from the ISS on October 13 at 3:05AM Eastern time. Crew-8's astronauts were originally scheduled to begin their journey back to Earth on October 7, but because their spacecraft will splash down off the coast of Florida, NASA and SpaceX pushed the departure back "due to weather conditions and potential impacts from Hurricane Milton." They will hold another briefing on October 11 and could delay the mission's return further for the safety of everyone involved.

The Crew-8 mission launched on March 4 this year with four members: NASA astronauts Matthew Dominick, Michael Barratt and Jeanette Epps, as well as Roscosmos cosmonaut Alexander Grebenkin. They conducted several experiments aboard the International Space Station, such as sequencing the DNA of antibiotic-resistant organisms found on the ISS to study how they adapt to conditions in space. They also studied human brain organoids created with stem cells to investigate Parkinson's disease and how extended spaceflight affects the human brain. In addition, they printed human tissues, studied how microgravity affects drug manufacturing and worked with an Astrobee robot. NASA will most likely cover their flight back on a livestream.

While Crew-8 has yet to leave the space station, SpaceX's Crew-9 mission astronauts have been on board since September 29. That mission flew with only two crew members because it will return home with NASA astronauts Butch Wilmore and Suni Williams, who originally flew to the ISS on the Boeing Starliner. NASA said Wilmore and Williams have already tried on and tested their SpaceX Intravehicular Activity spacesuits and have completed all the work required to fly back to Earth with the Crew-9 vehicle.

Machine learning pioneers, including the ‘Godfather of AI,’ are awarded the Nobel Prize in Physics

Two scientists have been awarded the Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” John Hopfield, an emeritus professor at Princeton University, devised an associative memory that can store and reconstruct images and other types of patterns in data. Geoffrey Hinton, who has been dubbed the "Godfather of AI," pioneered a way to autonomously find properties in data, leading to the ability to identify certain elements in pictures.
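To give a flavor of Hopfield's contribution, below is a minimal, illustrative sketch of a Hopfield-style associative memory in Python. It's a toy simplification for illustration, not the laureates' actual work: it stores a binary pattern with Hebbian learning, then recovers it from a corrupted cue.

    # Toy Hopfield-style associative memory (illustrative only).
    import numpy as np

    def train(patterns: np.ndarray) -> np.ndarray:
        """Build Hebbian weights from rows of +/-1 patterns; zero the diagonal."""
        n = patterns.shape[1]
        w = patterns.T @ patterns / n
        np.fill_diagonal(w, 0.0)
        return w

    def recall(w: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
        """Repeatedly update the state so it settles into a stored pattern."""
        s = state.astype(float).copy()
        for _ in range(steps):
            s = np.sign(w @ s)
            s[s == 0] = 1.0
        return s

    stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
    w = train(stored)
    noisy = np.array([1, -1, 1, 1, 1, -1, 1, -1])  # one flipped bit
    print(recall(w, noisy))  # settles back to the stored pattern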

"This year’s physics laureates’ breakthroughs stand on the foundations of physical science. They have showed a completely new way for us to use computers to aid and to guide us to tackle many of the challenges our society face," the committee wrote on X. "Thanks to their work humanity now has a new item in its toolbox, which we can choose to use for good purposes. Machine learning based on artificial neural networks is currently revolutionizing science, engineering and daily life."

However, Hinton has grown concerned about machine learning and its potential impact on society. He was part of Google's deep-learning artificial intelligence team (Google Brain, which merged with DeepMind last year) for many years before resigning in May 2023 so he could "freely speak out about the risks of AI." At the time, he expressed concern about generative AI spurring a tsunami of misinformation and having the potential to wipe out jobs, along with the possibility of fully autonomous weapons emerging.

Although Hinton acknowledged that machine learning and AI will likely improve health care, he warned reporters that “it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us,” according to The New York Times. That said, Hinton, a Turing Award winner and professor of computer science at the University of Toronto, was “flabbergasted” to learn that he had become a Nobel Prize laureate.

NASA’s latest supernova image could tell us how fast the universe is expanding

The James Webb Space Telescope's Near-Infrared Camera (NIRCam) captured a curious sight in a region 3.6 billion light-years away from Earth: A supernova that appears three times, at three different periods during its explosion, in one image. More importantly, this image could help scientists better understand how fast the universe is expanding. 

A team of researchers chose to observe the galaxy cluster PLCK G165.7+67.0, also known as G165, for its high star formation rate, which also leads to a higher supernova rate. One image, which you can see above, captures what looks to be a streak of light with three distinct dots that appear brighter than the rest of it. As Dr. Brenda Frye from the University of Arizona explained, those dots correspond to an exploding white dwarf star. The supernova is also gravitationally lensed — that is, a cluster of galaxies between us and the star served as a lens, bending the supernova's light into multiple images. Frye likened it to a trifold mirror that shows a different image of the person sitting in front of it. Notably, it is the most distant Type Ia supernova (a supernova that occurs in a binary star system) observed to date.

Because of that cluster of galaxies in front of the supernova, light from the explosion traveled three different paths, each with a different length. That means the Webb telescope was able to capture different periods of its explosion in one image: early in the event, midway through and near the end of it. Trifold supernova images are special, Frye said, because the "time delays, supernova distance, and gravitational lensing properties yield a value for the Hubble constant or H0 (pronounced H-naught)."

NASA describes the Hubble constant as the number that characterizes the present-day expansion rate of the universe, which, in turn, could tell us more about the universe's age and history. Scientists have yet to agree on its exact value, and the team is hoping that this supernova image could provide some clarity. "The supernova was named SN H0pe since it gives astronomers hope to better understand the universe's changing expansion rate," Frye said. 

Wendy Freedman from the University of Chicago led a team in 2001 that found a value of 72. Other teams have put the Hubble constant between 69.8 and 74 kilometers per second per megaparsec. Meanwhile, this team reported a value of 75.4, plus 8.1 or minus 5.5. "Our team’s results are impactful: The Hubble constant value matches other measurements in the local universe, and is somewhat in tension with values obtained when the universe was young," Frye said. The supernova and the Hubble constant value derived from it need to be explored further, however, and the team expects future observations to "improve on the uncertainties" for a more accurate computation.
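For a rough sense of what those numbers imply, the reciprocal of the Hubble constant (the "Hubble time") gives a crude age scale for the universe. Here is a minimal Python sketch of that unit conversion, illustrative only, since a real age estimate also depends on the universe's matter and energy content:

    # Convert H0 (km/s/Mpc) into a rough "Hubble time" in billions of years.
    KM_PER_MPC = 3.0857e19      # kilometers in one megaparsec
    SECONDS_PER_YEAR = 3.156e7  # seconds in one year

    def hubble_time_gyr(h0_km_s_mpc: float) -> float:
        h0_per_second = h0_km_s_mpc / KM_PER_MPC
        return 1.0 / h0_per_second / SECONDS_PER_YEAR / 1e9

    for h0 in (69.8, 72.0, 75.4):  # values discussed above
        print(f"H0 = {h0}: ~{hubble_time_gyr(h0):.1f} billion years")

All three values land near 13 to 14 billion years, which is why even small disagreements over H0 matter for pinning down the universe's age.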

Advanced AI chatbots are less likely to admit they don’t have all the answers

Researchers have spotted an apparent downside of smarter chatbots. Although AI models predictably become more accurate as they advance, they’re also more likely to (wrongly) answer questions beyond their capabilities rather than saying, “I don’t know.” And the humans prompting them are more likely to take their confident hallucinations at face value, creating a trickle-down effect of confident misinformation.

“They are answering almost everything these days,” José Hernández-Orallo, professor at the Universitat Politecnica de Valencia, Spain, told Nature. “And that means more correct, but also more incorrect.” Hernández-Orallo, the project lead, worked on the study with his colleagues at the Valencian Research Institute for Artificial Intelligence in Spain.

The team studied three LLM families, including OpenAI’s GPT series, Meta’s LLaMA and the open-source BLOOM. They tested early versions of each model and moved to larger, more advanced ones — but not today’s most advanced. For example, the team began with OpenAI’s relatively primitive GPT-3 ada model and tested iterations leading up to GPT-4, which arrived in March 2023. The four-month-old GPT-4o wasn’t included in the study, nor was the newer o1-preview. I’d be curious if the trend still holds with the latest models.

The researchers tested each model on thousands of questions about “arithmetic, anagrams, geography and science.” They also quizzed the AI models on their ability to transform information, such as alphabetizing a list. The team ranked their prompts by perceived difficulty.

The data showed that the proportion of wrong answers the chatbots gave (instead of avoiding questions altogether) rose as the models grew. So, the AI is a bit like a professor who, as he masters more subjects, increasingly believes he has the golden answers on all of them.
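To make that metric concrete, here is a hypothetical Python sketch (not the study's actual code) of how responses might be split into correct, incorrect and avoided, and how the share of attempted answers that are wrong could be computed:

    # Hypothetical sketch: the labels and numbers below are made up.
    from collections import Counter

    def answer_breakdown(labels: list[str]) -> dict[str, float]:
        """Each label is 'correct', 'incorrect' or 'avoided'."""
        counts = Counter(labels)
        total = len(labels)
        attempted = counts["correct"] + counts["incorrect"]
        return {
            "avoidance_rate": counts["avoided"] / total,
            "overall_error_rate": counts["incorrect"] / total,
            "error_when_attempting": counts["incorrect"] / attempted if attempted else 0.0,
        }

    # An older model avoids more; a newer one answers (and errs) more.
    old_model = ["correct"] * 55 + ["incorrect"] * 10 + ["avoided"] * 35
    new_model = ["correct"] * 75 + ["incorrect"] * 23 + ["avoided"] * 2
    print(answer_breakdown(old_model))
    print(answer_breakdown(new_model))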

Further complicating things is the humans prompting the chatbots and reading their answers. The researchers tasked volunteers with rating the accuracy of the AI bots’ answers and found that the volunteers “incorrectly classified inaccurate answers as being accurate surprisingly often.” The share of wrong answers the volunteers misjudged as right typically fell between 10 and 40 percent.

“Humans are not able to supervise these models,” concluded Hernández-Orallo.

The research team recommends that AI developers boost performance on easy questions and program the chatbots to refuse to answer complex ones. “We need humans to understand: ‘I can use it in this area, and I shouldn’t use it in that area,’” Hernández-Orallo told Nature.

It’s a well-intended suggestion that could make sense in an ideal world. But fat chance AI companies will oblige. Chatbots that more often say “I don’t know” would likely be perceived as less advanced or valuable, leading to less use — and less money for the companies making and selling them. So, instead, we get fine-print warnings that “ChatGPT can make mistakes” and “Gemini may display inaccurate info.”

That leaves it up to us to avoid believing and spreading hallucinated misinformation that could hurt ourselves or others. So fact-check your damn chatbot’s answers, for crying out loud.

You can read the team’s full study in Nature.

CTO Mira Murati is the latest leader to leave OpenAI

Mira Murati has departed OpenAI, where she had been the chief technology officer since 2018. In a note shared with the company and then posted publicly on X, Murati said that she is exiting "because I want to create the time and space to do my own exploration."

Murati gained additional visibility as a face for the AI company when she briefly assumed CEO duties in November 2023 after the board of directors fired Sam Altman. Altman returned to the helm and Murati resumed work as CTO. However, her departure follows two other notable exits. Last month, president and co-founder Greg Brockman and co-founder John Schulman both announced that they would be stepping away from OpenAI. Brockman is taking a sabbatical and Schulman is moving to rival AI firm Anthropic.

Here is the full text of Murati's statement:

Hi all,

I have something to share with you. After much reflection, I have made the difficult decision to leave OpenAI.

My six-and-a-half years with the OpenAI team have been an extraordinary privilege. While I’ll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years.

There’s never an ideal time to step away from a place one cherishes, yet this moment feels right. Our recent releases of speech-to-speech and OpenAI o1 mark the beginning of a new era in interaction and intelligence - achievements made possible by your ingenuity and craftsmanship. We didn’t merely build smarter models, we fundamentally changed how AI systems learn and reason through complex problems.

We brought safety research from the theoretical realm into practical applications, creating models that are more robust, aligned, and steerable than ever before. Our work has made cutting-edge AI research intuitive and accessible, developing technology that adapts and evolves based on everyone’s input. This success is a testament to our outstanding teamwork, and it is because of your brilliance, your dedication, and your commitment that OpenAI stands at the pinnacle of AI innovation.

I’m stepping away because I want to create the time and space to do my own exploration. For now, my primary focus is doing everything in my power to ensure a smooth transition, maintaining the momentum we’ve built.

I will forever be grateful for the opportunity to build and work alongside this remarkable team. Together, we’ve pushed the boundaries of scientific understanding in our quest to improve human well-being. While I may no longer be in the trenches with you, I will still be rooting for you all.

With deep gratitude for the friendships forged, the triumphs achieved, and most importantly, the challenges overcome together.

Mira

In a post on X, Altman revealed that the company's Chief Research Officer, Bob McGrew, and VP of Research, Barret Zoph, are also leaving the company. He said they made the decisions "independently of each other and amicably," but that it made sense to "do this all at once" for a smooth handover. OpenAI's leadership will change as a result: Mark Chen, the Head of Frontiers Research, has been named Research SVP; Research Scientist Josh Achiam has been named Head of Mission Alignment; and Matt Knight, the Head of Security, is now the Chief Information Security Officer.

Update, September 26, 2024, 7:03AM ET: This post has been updated to include information about the other staffers leaving OpenAI.

The Polaris Dawn crew is back on Earth after a historic mission

The Polaris Dawn crew safely returned to Earth early Sunday morning, bringing the historic privately funded mission to a close. The Dragon capsule carrying the mission’s four astronauts — Jared Isaacman, Scott “Kidd” Poteet, Sarah Gillis and Anna Menon — splashed down in the Gulf of Mexico around 3:30AM ET.

On Thursday, Isaacman and Gillis completed the first commercial spacewalk, each taking a turn to exit the craft and perform a series of spacesuit mobility tests. And with this mission, Gillis and Menon have now traveled farther from Earth than any women before them. Polaris Dawn reached a peak altitude of about 870 miles, which is also the farthest any humans have ventured since the Apollo program.

The crew also performed a number of science experiments, and was able to complete a 40-minute video call to Earth and send files in a major test for Starlink’s space communications capabilities. That included a video recorded during the mission of Gillis, an engineer and violinist, playing the violin in space. “A new era of commercial spaceflight dawns, with much more to come,” Polaris posted on X Sunday morning.

What to read this weekend: Cosmic horror sci-fi, and the quest to understand how life began

New releases in fiction, nonfiction and comics that caught our attention.

Anyone who lives with a difficult-to-diagnose chronic illness and has endured the demoralizing process of trying to get proper treatment can tell you it is, at times, a living nightmare. Advocating for yourself, fighting to be taken seriously; it’s something I’ve dealt with most of my life as a person with autoimmune diseases. So when I read the description of Hildur Knútsdóttir’s psychological horror novel, The Night Guest, it resonated with me immediately:

Iðunn is in yet another doctor's office. She knows her constant fatigue is a sign that something's not right, but practitioners dismiss her symptoms and blood tests haven't revealed any cause. When she talks to friends and family about it, the refrain is the same ― have you tried eating better? exercising more? establishing a nighttime routine? She tries to follow their advice, buying everything from vitamins to sleeping pills to a step-counting watch. Nothing helps. Until one night Iðunn falls asleep with the watch on, and wakes up to find she’s walked over 40,000 steps in the night . . . What is happening when she’s asleep?

The Night Guest is a short, compelling read that puts an unsettling spin on an issue that a lot of people — especially women — can relate to. I pretty much inhaled it.

The origin of life and the question of whether it exists elsewhere is a topic I find endlessly interesting (as evidenced by how regularly books about it land among these recommendations). In their new book Is Earth Exceptional? The Quest for Cosmic Life, astrophysicist Mario Livio and Nobel Prize-winning biologist Jack Szostak examine what we know about the things that make life possible — the building blocks of life — and explore how they could have emerged on Earth and, hypothetically, elsewhere. At the heart of the mystery is the still-unanswered question of whether life came to be as the result of a freak accident.

As the authors write in their introduction, “Even with the enormous scientific progress we have witnessed in the past few decades, we still don’t know whether life is an extremely rare chemical accident, in which case we may be alone in our galaxy, or a chemical inevitability, which would potentially make us part of a huge galactic ensemble.”

In the 2034 imagined by Into the Unbeing, Earth is well past the tipping point of climate change. The planet has been devastated by natural disasters, and species have died off en masse. Looking for anything that can help improve the world’s situation, a team of climate scientists with the Scientific Institute for Nascent Ecology and Worlds (SINEW) ventures out to explore what appears to be an entirely new environment that has popped up out of nowhere near their camp in the Australian outback. But they’re not prepared for what they find.

Into the Unbeing is a gripping new science-fiction series that weaves in cosmic horror. The first issue came out at the beginning of the summer, and Part One just wrapped up this week with issue number four. If you were into Scavengers Reign or The Southern Reach Trilogy, you’ll probably enjoy Into the Unbeing. The art alone will suck you right in.

NASA confirms it’s developing the Moon’s new time zone

NASA confirmed on Friday that it’s developing a new time standard for the Moon. The White House published a policy memo in April directing NASA to create the new standard by 2026. Over five months later (government time, y’all), the space agency says it will work with “U.S. government stakeholders, partners, and international standards organizations” to establish Coordinated Lunar Time (LTC).

To understand why the Moon needs its own time standard, look no further than Einstein. His theories of relativity say that time changes relative to speed and gravity; because the Moon’s gravity is weaker, time moves slightly faster there. So, an Earth clock on the Moon would gain about 56 microseconds a day — enough to throw off calculations and potentially endanger future missions that require precision.

“For something traveling at the speed of light, 56 microseconds is enough time to travel the distance of approximately 168 football fields,” said Cheryl Gramling, NASA timing and standards leader, in a press release. “If someone is orbiting the Moon, an observer on Earth who isn’t compensating for the effects of relativity over a day would think that the orbiting astronaut is approximately 168 football fields away from where the astronaut really is.”
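Her numbers hold up to a quick back-of-the-envelope check, assuming (as NASA's comparison seems to) a round 100 meters per "football field" — an American field is closer to 91.4 meters:

    # Sanity-check the quote above.
    SPEED_OF_LIGHT_M_S = 299_792_458  # meters per second
    DRIFT_PER_DAY_S = 56e-6           # ~56 microseconds of daily clock drift

    distance_m = SPEED_OF_LIGHT_M_S * DRIFT_PER_DAY_S
    print(f"Light covers ~{distance_m:,.0f} meters in 56 microseconds")  # ~16,788
    print(f"That's ~{distance_m / 100:.0f} football fields")             # ~168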

April’s White House memo directed NASA to work with the Departments of Commerce, Defense, State and Transportation to plot the course for LTC’s introduction by the end of 2026. Global stakeholders, particularly Artemis Accords signatories, will play a role. Established in 2020, the accords comprise a growing group of 43 countries committed to norms expected to be honored in space. Notably, China and Russia have refused to join.

NASA’s Space Communication and Navigation (SCaN) program will lead the initiative. One of LTC’s goals is to be scalable to other celestial bodies in the future, including Mars. The time standard will be determined by a weighted average of atomic clocks on the Moon, although their locations are still up for debate. Such a weighted average is similar to how scientists calculate Earth’s Coordinated Universal Time (UTC).
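As a minimal illustration of that idea (a sketch only; the actual LTC algorithm has not been settled), a weighted clock ensemble looks something like this in Python:

    # Weighted clock ensemble, the basic idea behind UTC. The offsets and
    # weights below are hypothetical; real time scales weight each clock
    # by its measured stability.
    def ensemble_time(offsets_s: list[float], weights: list[float]) -> float:
        """Weighted average of per-clock time offsets, in seconds."""
        return sum(o * w for o, w in zip(offsets_s, weights)) / sum(weights)

    # Three hypothetical lunar clocks, offsets in seconds
    print(ensemble_time([1.2e-9, -0.8e-9, 0.3e-9], [0.5, 0.3, 0.2]))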

NASA plans to send crewed missions back to the Moon through its Artemis program. Artemis 2, scheduled for September 2025, plans to send four people on a pass around the Moon. A year later, Artemis 3 aims to land astronauts near the Moon’s South Pole.

OpenAI’s new o1 model is slower, on purpose

OpenAI has unveiled its latest artificial intelligence model called o1, which, the company claims, can perform complex reasoning tasks more effectively than its predecessors. The release comes as OpenAI faces increasing competition in the race to develop more sophisticated AI systems. 

O1 was trained to "spend more time thinking through problems before they respond, much like a person would," OpenAI said on its website. "Through training, [the models] learn to refine their thinking process, try different strategies, and recognize their mistakes." OpenAI envisions the new model being used in fields from healthcare research (annotating cell sequencing data) to physics (generating mathematical formulas) to software development.

Current AI systems are essentially fancier versions of autocomplete, generating responses through statistics instead of actually "thinking" through a question, which means that they are less "intelligent" than they appear to be. When Engadget tried to get ChatGPT and other AI chatbots to solve the New York Times Spelling Bee, for instance, they fumbled and produced nonsensical results.

With o1, the company claims that it is "resetting the counter back to 1" with a new kind of AI model designed to actually engage in complex problem-solving and logical thinking. In a blog post detailing the new model, OpenAI said that it performs similarly to PhD students on challenging benchmark tasks in physics, chemistry and biology, and excels in math and coding. For example, OpenAI's current flagship model, GPT-4o, correctly solved only 13 percent of problems on a qualifying exam for the International Mathematics Olympiad, while o1 solved 83 percent.

The new model, however, doesn't include capabilities like web browsing or the ability to upload files and images. And, according to The Verge, it's significantly slower at processing prompts than GPT-4o. Despite taking longer to consider its outputs, o1 hasn't solved the problem of "hallucinations" — a term for AI models making up information. "We can't say we solved hallucinations," the company's chief research officer Bob McGrew told The Verge.

O1 is still at a nascent stage. OpenAI calls it a "preview" and is making it available only to paying ChatGPT customers starting today, with restrictions on how many questions they can ask it per week. OpenAI is also launching o1-mini, a slimmed-down version that the company says is particularly effective for coding.
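For those with API access, querying the preview model works like any other chat completion call in OpenAI's official Python SDK. A minimal sketch follows; note that, at launch, o1 models reportedly lacked support for features like system prompts and streaming, so check the current documentation:

    # Minimal sketch using OpenAI's Python SDK. Assumes an OPENAI_API_KEY
    # environment variable and API access to the o1-preview model.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="o1-preview",
        messages=[{"role": "user", "content": "How many primes are less than 100?"}],
    )
    print(response.choices[0].message.content)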
