Meta is teaming up with the Center for Open Science (COS) to start a pilot program that studies “topics related to well-being.” The program will apparently dive into our social media data, but on a voluntary basis, as COS says it will use a “privacy-preserving” dataset provided by Meta for the pilot. The organization says the study should help people understand “how different factors may or may not impact well-being and inform productive conversations about how to help people thrive.”
The specifics of the program remain opaque, but COS says it’ll use “new types of research processes” like pre-registration and early peer review. That last one is important: proposed research questions are sent out for peer review before anything is issued to participants, which should help stave off bias and ensure the questions are actually useful. The organization also says that all results will be published, “not just those that confirm one’s hypothesis or support a prevailing theory.” As for the actual study, Meta told Engadget that it hasn't started yet.
As for a totally non-scientific study on the effects of social media, using it for even ten minutes transforms any dopamine in my brain into the swamps of sadness from The Neverending Story. The same may be true for you. It’s no secret that social media is basically a factory for mental unease, and that’s particularly true for kids and teens.
So, why announce this partnership today of all days? It could be a coincidence, but the timing sure is funny. Meta is set to testify this week in front of the US Senate Judiciary Committee about its failures to protect kids online, along with other social media bigwigs like TikTok, Snap and X. It is worth noting, however, that Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew are willing participants in this testimony. Snap CEO Evan Spiegel, Discord CEO Jason Citron and X CEO Linda Yaccarino had to be formally subpoenaed.
However, Meta has a particularly bad track record when it comes to this stuff. After all, the company’s being sued by 41 states for allegedly harming the mental health of its youngest users. The suit claims Meta knew its “addictive” features were bad for kids and intentionally misled the public about the safety of its platforms.
Unsealed documents from the suit claim that Meta actually “coveted and pursued” children under 13 and lied about how it handled underage accounts once discovered, often failing to disable these accounts while continuing to harvest data. This would be a brazen violation of the Children’s Online Privacy Protection Act of 1998.
Another lawsuit alleges that Facebook and Instagram's algorithms facilitated child sexual harassment, with the complaint stating that Meta's own internal documents said over 100,000 kids were harassed daily. Facebook's "People You May Know" algorithm was singled out as a primary conduit to connect children to predators. The complaint alleges that Meta did nothing to stop this issue when approached by concerned employees. For Meta's part, it maintains the timing is a coincidence and that the announcement was timed for when the partnership went into effect.
With all of this in mind, it doesn’t really take a study to recognize that the “well-being” of users isn’t exactly the most important thing on the minds of social media CEOs. Still, if the program helps these companies move in the right direction, that’s certainly cool. COS says the study will take two years and that it’s still in the early planning stages. We’ll know more in the coming months. In the meantime, you can watch CEO Zuckerberg and all the rest testify before Congress on Wednesday at 10 AM ET.
Update, January 29 2024, 7:40PM ET: This story has been updated to include timing information from Meta and clarification as to when a study would start.
This article originally appeared on Engadget at https://www.engadget.com/meta-will-offer-some-of-its-data-to-third-party-researchers-through-center-for-open-science-partnership-181418016.html?src=rss
Japan's lunar lander has regained power a full nine days after it landed on the moon's surface nearly upside down and was subsequently switched off, JAXA (Japan Aerospace Exploration Agency) announced. A change in the sun's position allowed the solar panels to receive light and charge the probe's battery, allowing JAXA to re-establish communication.
Things were looking dire shortly after SLIM (the Smart Lander for Investigating Moon) touched down. The agency immediately noticed a problem with power generation, but was able to deploy a pair of probes onto the moon's surface. The Lunar Excursion Vehicle 2 (LEV-2) snapped an incredible photo of SLIM, showing it to be upside down with its panels pointing away from the sun. The cause was found to be a malfunction of the main engine.
Communication with SLIM was successfully established last night, and operations resumed! Science observations were immediately started with the MBC, and we obtained first light for the 10-band observation. This figure shows the “toy poodle” observed in the multi-band observation. pic.twitter.com/WYD4NlYDaG
JAXA thought there was a chance the probe could recover once the sun's rays pointed more toward the solar panels, and that's exactly what transpired. Shortly after power was regained, it snapped another picture of a previously imaged rock formation called "toy poodle" using a multi-band spectral camera. The team is also targeting several other rocks with canine-themed names, including "St. Bernard," "Bulldog" and "Shibainu."
The upside-down landing may have seemed like an unrecoverable fault, but it looks like the mission can now proceed more or less as planned. While the baseball-sized LEV-2 explores the surface (relaying data via the LEV-1 probe, which also has two cameras), SLIM will grab whatever science it can.
In any case, the mission was already deemed a success, as the primary goal was a precision landing. It did just that, hitting a spot just 55 meters (180 feet) from its target. It's not known how much longer SLIM can function, as it was never designed to survive a lunar night, and the next one begins on Thursday.
This article originally appeared on Engadget at https://www.engadget.com/japans-slim-lunar-probe-returns-to-life-more-than-a-week-after-landing-upside-down-124507467.html?src=rss
If there’s one thing we can all agree upon, it’s that the 21st century’s captains of industry are trying to shoehorn AI into every corner of our world. But for all of the ways in which AI will be shoved into our faces without proving very successful, it might have at least one genuinely useful purpose: dramatically speeding up the often decades-long process of designing, finding and testing new drugs.
Risk mitigation isn’t a sexy notion, but it’s worth understanding how common it is for a new drug project to fail. To set the scene, consider that each drug project takes between three and five years just to form a hypothesis strong enough to start tests in a laboratory. A 2022 study from Professor Duxin Sun found that 90 percent of clinical drug development fails, with each project costing more than $2 billion. And that figure doesn’t even include compounds found to be unworkable at the preclinical stage. Put simply, with roughly nine failures for every success at more than $2 billion apiece, every successful drug has to prop up at least $18 billion in waste generated by its unsuccessful siblings, which all but guarantees that less lucrative cures for rarer conditions aren’t given as much focus as they may need.
Dr. Nicola Richmond is VP of AI at Benevolent, a biotech company using AI in its drug discovery process. She explained that the classical system tasks researchers with finding, for example, a misbehaving protein – the cause of a disease – and then finding a molecule that could make it behave. Once they've found one, they need to get that molecule into a form a patient can take, and then test whether it’s both safe and effective. The journey to clinical trials on a living human patient takes years, and it’s often only then that researchers find out what worked in theory doesn't work in practice.
The current process takes “more than a decade and multiple billions of dollars of research investment for every drug approved,” said Dr. Chris Gibson, co-founder of Recursion, another company in the AI drug discovery space. He says AI’s great skill may be dodging the misses, helping researchers avoid spending too long running down blind alleys. A software platform that can churn through hundreds of options at a time can, in Gibson’s words, “fail faster and earlier so you can move on to other targets.”
CellProfiler / Carpenter-Singh laboratory at the Broad Institute
Dr. Anne E. Carpenter is the founder of the Carpenter-Singh laboratory at the Broad Institute of MIT and Harvard. She has spent more than a decade developing techniques in Cell Painting, a way to highlight elements in cells with dyes to make them readable by a computer. She is also the co-developer of CellProfiler, a platform enabling researchers to use AI to scrub through vast troves of images of those dyed cells. Combined, this work makes it easy for a machine to see how cells change when affected by the presence of disease or a treatment. And by looking at every part of the cell holistically – a discipline known as “omics” – there are greater opportunities for making the sort of connections that AI systems excel at.
Using pictures as a way of identifying potential cures seems a little left-field, since how things look doesn’t always represent how things actually are, right? But Carpenter said humans have always made subconscious assumptions about medical status from sight alone. She explained that most people can surmise someone has a chromosomal condition just by looking at their face, and professional clinicians can identify a number of disorders by sight alone purely as a consequence of their experience. She added that if you took a picture of everyone’s face in a given population, a computer would be able to identify patterns and sort the faces based on common features.
This logic applies to pictures of cells, where it’s possible for a digital pathologist to compare images from healthy and diseased samples. If a human can do it, then it should be faster and easier to employ a computer to spot those differences at scale, so long as it’s accurate. “You allow this data to self-assemble into groups and now [you’re] starting to see patterns,” she explained. “When we treat [cells] with 100,000 different compounds, one by one, we can say ‘here’s two chemicals that look really similar to each other.’” And that similarity isn’t just coincidence; it seems to be indicative of how the compounds behave.
In one example, Carpenter explained that two different compounds could produce similar effects in a cell, and by extension could be used to treat the same condition. If so, then it may be that one of the two – which may not have been intended for this purpose – has fewer harmful side effects. Then there’s the potential benefit of being able to identify something that we didn’t know was affected by disease. “It allows us to say, ‘hey, there’s this cluster of six genes, five of which are really well known to be part of this pathway, but the sixth one, we didn’t know what it did, but now we have a strong clue it’s involved in the same biological process.’” “Maybe those other five genes, for whatever reason, aren’t great direct targets themselves, maybe the chemicals don’t bind,” she said, “but the sixth one [could be] really great for that.”
FatCamera via Getty Images
In this context, the startups using AI in their drug discovery processes are hoping that they can find the diamonds hiding in plain sight. Dr. Richmond said that Benevolent’s approach is for the team to pick a disease of interest and then formulate a biological question around it. So, at the start of one project, the team might wonder if there are ways to treat ALS by enhancing, or fixing, the way a cell’s own housekeeping system works. (To be clear, this is a purely hypothetical example supplied by Dr. Richmond.)
That question is then run through Benevolent’s AI models, which pull together data from a wide variety of sources. They then produce a ranked list of potential answers, which can include novel compounds or existing drugs that could be adapted to suit. The data then goes to a researcher, who can examine what weight, if any, to give to its findings. Dr. Richmond added that the model has to provide evidence from existing literature or sources to support its findings, even if its picks are out of left field. And at all times, a human has the final say on which of its results should be pursued and how vigorously.
It’s a similar situation at Recursion, with Dr. Gibson claiming that its model is now capable of predicting “how any drug will interact with any disease without having to physically test it.” The model has now generated around three trillion predictions connecting potential problems to their potential solutions, based on the data it has already absorbed and simulated. Gibson said that the process at the company now resembles a web search: Researchers sit down at a terminal, “type in a gene associated with breast cancer and [the system] populates all the other genes and compounds that [it believes are] related.”
“What gets exciting,” said Dr. Gibson, “is when [we] see a gene nobody has ever heard of in the list, which feels like novel biology because the world has no idea it exists.” Once a target has been identified and the findings checked by a human, the data will be passed to Recursion’s in-house scientific laboratory. Here, researchers will run initial experiments to see if what was found in the simulation can be replicated in the real world. Dr. Gibson said that Recursion’s wet lab, which uses large-scale automation, is capable of running more than two million experiments in a working week.
“About six weeks later, with very little human intervention, we’ll get the results,” said Dr. Gibson, and if those are promising, that’s when the team will “really start investing.” Until that point, the validation work has cost the company “very little money and time to get.” The promise is that, rather than a three-year preclinical phase, the whole process can be crunched down to a few database searches, some oversight and a few weeks of ex vivo testing to confirm whether the system’s hunches are worth making a real effort to interrogate. Dr. Gibson said the company believes it has taken a “year’s worth of animal model work and [compressed] it, in many cases, to two months.”
Of course, there is not yet a concrete success story, no wonder cure that any company in this space can point to as validation of the approach. But Recursion can cite one real-world example of how close its platform came to matching the success of a critical study. In April 2020, Recursion ran the COVID-19 sequence through its system to look at potential treatments, examining both FDA-approved drugs and candidates in late-stage clinical trials. The system produced a list of nine potential candidates that would need further analysis, eight of which would later be proved correct. It also said that hydroxychloroquine and ivermectin, both much-ballyhooed in the earliest days of the pandemic, would flop.
And there are AI-informed drugs undergoing real-world clinical trials right now. Recursion points to five projects currently finishing phase one (tests in healthy volunteers) or entering phase two (trials in people with the rare diseases in question) clinical testing. Benevolent has started a phase one trial of BEN-8744, a treatment for ulcerative colitis that may help with other inflammatory bowel disorders. BEN-8744 targets an inhibitor that has no prior associations in the existing research, which, if the trial succeeds, will add weight to the idea that AI can spot connections humans have missed. Of course, we can’t draw any conclusions until at least early next year, when the results of those initial tests are due to be released.
Yuichiro Chino via Getty Images
There are plenty of unanswered questions, including how much we should rely upon AI as the sole arbiter of the drug discovery pipeline. There are also questions around the quality of the training data and the biases in the wider sources more generally. Dr. Richmond highlighted the issues around biases in genetic data sources both in terms of the homogeneity of cell cultures and how those tests are carried out. Similarly, Dr. Carpenter said the results of her most recent project, the publicly available JUMP-Cell Painting project, were based on cells from a single participant. “We picked it with good reason, but it’s still one human and one cell type from that one human.” In an ideal world, she’d have a far broader range of participants and cell types, but the issues right now center on funding and time, or more appropriately, their absence.
But, for now, all we can do is await the results of these early trials and hope that they bear fruit. Like every other potential application of AI, its value will rest largely on its ability to improve the quality of the work – or, more likely, the bottom line for the business in question. If AI can make the savings attractive enough, however, then maybe those diseases that aren’t likely to recoup the investment demanded under the current system may stand a chance. It could all collapse in a puff of hype, or it may offer real hope to families struggling for help while dealing with a rare disorder.
This article originally appeared on Engadget at https://www.engadget.com/ai-is-coming-for-big-pharma-150045224.html?src=rss
After three years of service, NASA's Ingenuity Helicopter has flown on Mars for the last time. Earlier this month, during its 72nd flight, Ingenuity stopped communicating with the Perseverance rover. Although NASA later reestablished contact with the helicopter, it emerged that at least one of Ingenuity's carbon fiber rotor blades was damaged during a landing on January 18th. The helicopter is upright and is still in contact with ground controllers, but it's no longer capable of flight.
Ingenuity far outlasted its original planned lifespan. NASA designed the helicopter to carry out up to five test flights over 30 days. But it stayed in service for over three years. Ingenuity flew over 14 times farther than originally anticipated and it had a total flight time of over two hours.
“The historic journey of Ingenuity, the first aircraft on another planet, has come to end,” NASA Administrator Bill Nelson said in a statement. “That remarkable helicopter flew higher and farther than we ever imagined and helped NASA do what we do best — make the impossible, possible. Through missions like Ingenuity, NASA is paving the way for future flight in our solar system and smarter, safer human exploration to Mars and beyond.”
After Ingenuity's initial five flights, NASA decided to keep the helicopter running as an operations demonstration, scouting ahead for Perseverance.
On January 18, the Ingenuity team planned a short vertical flight so they could pinpoint the helicopter's location after it had to make an emergency landing on its previous jaunt. The chopper reached a height of 40 feet and hovered for 4.5 seconds before descending at a rate of 3.3 feet per second. However, it lost contact with Perseverance when it was about three feet above the surface.
It's not clear how the rotor blade sustained damage, though NASA is looking into whether it struck the surface. Perseverance is too far away to take a look at Ingenuity itself, but the chopper's own camera captured an image of a rotor blade's shadow that revealed the damage.
NASA/JPL-Caltech
The hardy helicopter endured tough terrain, a dead sensor, dust storms (after which it was able to clean itself) and a winter on Mars. The Ingenuity team will wind down the helicopter's operations after carrying out final tests and downloading the last data and imagery from its memory. After making history as the first aircraft from Earth to conduct a powered, controlled flight on another planet, all Ingenuity can do now is rest easy on the surface of Mars.
This article originally appeared on Engadget at https://www.engadget.com/nasas-ingenuity-helicopter-has-flown-on-mars-for-the-final-time-204004656.html?src=rss
Shortly after Japan’s space agency became the fifth country to land a spacecraft on the surface of the moon, its scientists discovered that the Smart Lander for Investigating Moon (SLIM) unfortunately touched down upside down. The Japan Aerospace Exploration Agency (JAXA) said that SLIM landed on the lunar surface on January 20, but it knew it might have bigger problems due to an issue with power generation. Just hours after touching down, JAXA expected the power to run out, and it ultimately did.
SLIM touched down about 55 meters east of its original target landing site, JAXA said. The agency did receive all of the technical data related to the spacecraft’s navigation prior to landing and its ultimately coming to rest on the lunar surface. JAXA captured photos of SLIM from the Lunar Excursion Vehicle 2 (LEV-2), its fully autonomous robot currently exploring the moon.
The Lunar Excursion Vehicle 2 (LEV-2 / SORA-Q) has successfully taken an image of the #SLIM spacecraft on the Moon. LEV-2 is the world’s first robot to conduct fully autonomous exploration on the lunar surface. https://t.co/NOboD0ZJIr pic.twitter.com/mfuuceu2WA
— JAXA Institute of Space and Astronautical Science (@ISAS_JAXA_EN) January 25, 2024
The cause of the main engine malfunction is under investigation by the space agency. There is a slim chance of recovery: the solar cells that power the spacecraft are facing west, meaning SLIM could revive if enough sunlight reaches the cells as time passes. The SLIM JAXA team took to X earlier this week to write, “We are preparing for recovery.” The agency said it will “take the necessary preparations to gather more technical and scientific data from the spacecraft.”
This article originally appeared on Engadget at https://www.engadget.com/japans-slim-lunar-spacecraft-landed-upside-down-on-the-moon-202819728.html?src=rss
Researchers at MIT have developed a rapid 3D-printing technique that uses liquid metal to allow for extremely fast prints. The process can manufacture large aluminum components in minutes, whereas many pre-existing techniques would take hours to finish the same build. The technology has already been used to create table legs, chair frames and related furniture parts.
It’s called liquid metal printing (LMP) and involves directing molten aluminum along a predefined path into a bed of tiny glass beads, where the metal quickly hardens into a 3D structure. Researchers say the new process is at least ten times faster than comparable metal manufacturing techniques.
However, there is one major caveat. This process sacrifices resolution for speed and scale. This is why the researchers have used it to create low-resolution items like chair legs and not, say, intricate parts with complex geometries. MIT researchers say this trade-off still makes the technology useful for creating “components of larger structures” that don’t require extremely fine details. This includes furniture parts, as mentioned above, but also components for construction and industrial design.
Despite the resolution downgrade, parts made using LMP are still durable and can withstand post-print machining, like drilling and boring. The folks behind this technology say the builds are much more durable than those built with wire arc additive manufacturing, which is a pre-existing metal printing method. This is because LMP keeps the material molten throughout the entire process, lessening the chances of cracking and warping.
The researchers recommend combining LMP with other techniques for jobs that require both speed and a high resolution. “Most of our built world — the things around us like tables, chairs, and buildings — doesn’t need extremely high resolution,” said Skylar Tibbits, a senior author of a paper that introduced the project.
It’s also worth noting that this printing method doesn’t require aluminum. It can work with other metals. The researchers chose aluminum due to its popularity in construction and the fact that it’s easily recycled.
The folks behind this tech hope to keep iterating on the concept to improve heating consistency, to prevent sticking, and allow for greater control over the molten metal. The team’s been having issues with larger nozzle diameters leading to irregular prints, which is something that needs to be worked out. Tibbits said the method could eventually become a “game-changer in metal manufacturing.”
Despite slightly falling out of favor in the commercial space, 3D printing has grown by leaps and bounds in recent years. Researchers have developed a tiny 3D printer that can be inserted into the body to repair and clean damaged tissue. Scientists also recently printed a working piece of the human heart.
This article originally appeared on Engadget at https://www.engadget.com/mit-researchers-have-developed-a-rapid-3d-printing-technique-that-uses-liquid-metal-194113455.html?src=rss
After a short period of worrying silence, NASA said on Saturday night that it was able to regain contact with the Ingenuity helicopter. The autonomous aircraft unexpectedly ceased communications with the Perseverance rover, which relays all transmissions between Ingenuity and Earth, on Thursday during its 72nd flight on Mars. It had already been acting up prior to this, having cut its previous flight short for an unknown reason, and NASA intended to do a systems check during the latest ascent.
Good news today: We've reestablished contact with the #MarsHelicopter after instructing @NASAPersevere to perform long-duration listening sessions for Ingenuity’s signal.
The team is reviewing the new data to better understand the unexpected comms dropout during Flight 72. https://t.co/KvCVwhZ5Rk
The space agency said in an update posted on X that it’s now reviewing the data from Ingenuity to understand what happened. Perseverance picked up its signal after the team instructed it to perform “long-duration listening sessions.” Ingenuity has experienced blackouts before, as recently as last year, and was able to return to flight. But it’s too early to say if that will be the case this time. The mini helicopter is already running well past the original timeline of its mission.
This article originally appeared on Engadget at https://www.engadget.com/nasa-says-its-reestablished-contact-with-the-ingenuity-mars-helicopter-165728606.html?src=rss
NASA is trying to figure out how to reach its Ingenuity Mars helicopter after losing contact with the craft earlier this week. During its 72nd flight — a “quick pop-up” to an altitude of about 40 feet — NASA says Ingenuity stopped communicating with the Perseverance rover before it was meant to. It went quiet on Thursday, and as of Friday afternoon, NASA still hadn’t heard from it.
Perseverance serves as the go-between for all communications to and from the helicopter; Ingenuity sends information to Perseverance, which then passes it on to Earth, and vice versa. According to NASA, the small helicopter completed the ascent as planned, but ceased communications while on its way back down. “The Ingenuity team is analyzing available data and considering next steps to reestablish communications with the helicopter,” NASA said in a status update on Friday. Ingenuity had previously ended a flight earlier than it was supposed to, and Thursday’s jaunt was meant to “check out the helicopter’s systems.”
Ingenuity has been on the red planet since 2021, when it arrived with the Perseverance rover. And it’s far exceeded its mission goals. NASA originally hoped the experimental helicopter would be able to complete a handful of flights. It went on to fly more than 20 times within its first year in operation. The space agency officially extended its mission in 2022, and it’s since executed dozens more successful flights. Ingenuity is the first aircraft to take flight from the surface of Mars.
This article originally appeared on Engadget at https://www.engadget.com/nasas-ingenuity-helicopter-has-gone-silent-on-mars-195746735.html?src=rss
Who doesn’t love showing off their collection of cool rocks? NASA was finally able to get into the asteroid Bennu sample container last week after struggling with it for a couple of months, and now, it’s sharing a look at what’s inside. The space agency published a high-resolution image of the newly opened Touch-and-Go Sample Acquisition Mechanism (TAGSAM) on Friday, revealing all the dust and rocks OSIRIS-REx scraped off the asteroid’s surface.
The image is massive, so you can zoom in to see even the finer details of the sample. Check out the full-sized version on NASA’s website. There’s an abundance of material for scientists to work with, and as OSIRIS-REx team member Lindsay Keller said back in September, they plan to make the most of microanalytical techniques to “really tear it apart, almost down to the atomic scale.” Asteroid Bennu, estimated to be about 4.5 billion years old, may hold clues into the formation of our solar system and how the building blocks of life first came to Earth.
Scientists have already discovered signs of carbon and water in the excess material they found on the outside of the TAGSAM. While they’d hoped to get at least 2.1 ounces (60 grams) of regolith from the asteroid, OSIRIS-REx was able to grab much more. The team obtained 2.48 ounces (70.3 grams) just from the “bonus” material accumulated on the sample hardware. NASA plans to spend the next two years analyzing portions of the sample, but the majority of it will be preserved for future studies and to be shared with other scientists.
This article originally appeared on Engadget at https://www.engadget.com/take-a-look-at-the-full-asteroid-bennu-sample-in-all-its-glory-161309568.html?src=rss
Japan has become the fifth country to successfully land on the moon after confirming today that its SLIM lander survived its descent to the surface — but its mission is likely to be short-lived. JAXA, the Japanese space agency, says the spacecraft is having problems with its solar cell and is unable to generate electricity. In its current state, the battery may only have enough juice to keep it running for a few more hours.
Based on how the other instruments are functioning, JAXA said in a press conference this afternoon that it’s evident SLIM did make a soft landing. The spacecraft has been able to communicate with Earth and receive commands, but is operating on a low battery. It’s unclear what exactly the issue with the solar cell is beyond the fact that it’s not functioning.
There’s a chance that the panels are just not facing the right direction to be receiving sunlight right now, which would mean it could start charging when the sun changes position. But, JAXA says it needs more time to understand what has happened. LEV-1 and LEV-2, two small rovers that accompanied SLIM to the moon, were able to successfully separate from the lander as planned before it touched down, and so far appear to be in working condition.
JAXA says it’s now focusing on maximizing the operational time it has left with SLIM to get as much data as possible from the landing. SLIM — the Smart Lander for Investigating Moon — has also been called the “Moon Sniper” due to its precision landing technology, which is supposed to put it within 100 meters of its target, the Shioli crater. The agency is planning to hold another press conference next week to share more updates.
Though its time may be running out, SLIM’s landing was still a major feat. Only four other countries have successfully landed on the moon: the US, China, India and Russia. The latest American attempt, the privately led Peregrine Mission One, ended in failure after the spacecraft began leaking propellant shortly after its January 8 launch. It managed to hang on for several more days and even reached lunar distance, but had no chance of a soft landing. Astrobotic, the company behind the lander, confirmed last night that Peregrine made a controlled reentry, burning up in Earth’s atmosphere over the South Pacific.
This article originally appeared on Engadget at https://www.engadget.com/japans-slim-lunar-lander-made-it-to-the-moon-but-itll-likely-die-within-hours-195431502.html?src=rss