Aspiring Starlink competitor Logos Space Services has secured FCC clearance to launch more than 4,000 broadband satellites into low Earth orbit by 2035, as reported by Space News. Under FCC regulations, the company must deploy half of the approved satellites within the next seven years.
The company is headed by its founder, Milo Medin, a former project manager at NASA as well as a former vice president of wireless services at Google. The company has been raising money since it opened its doors in 2023 and reportedly hopes to deploy its first satellite by 2027. Logos’ planned low Earth orbit constellation would beam high-speed broadband internet to customers worldwide, including government and enterprise users, much like Starlink.
While the satellite broadband market is growing, Starlink remains by far the biggest player. The European Space Agency estimates there are just over 14,000 functioning satellites currently in orbit, roughly 9,600 of which belong to the Starlink constellation. The SpaceX subsidiary recently asked the FCC for clearance to launch a million satellites, though in reality the FCC will likely approve a figure closer to the 7,500 it granted on the last go-around. The ESA says it expects 100,000 satellites to be in orbit by 2030.
This article originally appeared on Engadget at https://www.engadget.com/science/space/a-potential-starlink-competitor-just-got-fcc-clearance-to-launch-4000-satellites-143905076.html?src=rss
NASA started making the final preparations for the Artemis 2 mission in early January, with the hopes of opening its launch window as soon as February 6. After issues showed up during the mission’s wet dress rehearsal in the early hours of February 3, however, the agency had to push back its earliest launch opportunity to March.
“With more than three years between SLS launches, we fully anticipated encountering challenges. That is precisely why we conduct a wet dress rehearsal. These tests are designed to surface issues before flight and set up launch day with the highest probability of success,” NASA administrator Jared Isaacman said on X.
During a wet dress rehearsal, the spacecraft to be used for a mission is loaded with propellants to simulate the actual preparations and countdown to liftoff. NASA explained that Artemis 2’s Space Launch System, which was already on the launch pad, suffered from a liquid hydrogen leak that its engineers spent hours troubleshooting. They were ultimately able to fill all the rocket’s tanks and started the countdown to launch. But with approximately five minutes left in the countdown, the ground launch sequencer automatically stopped due to a spike in the spacecraft’s liquid hydrogen leak rate.
The agency admits that it has other issues to fix, based on what happened during the rehearsal. It has to make sure that cold weather doesn't affect the mission's equipment during the actual launch the way it did in testing. The Orion crew module's hatch pressurization process took longer than expected, and that must not happen on launch day. NASA also has to troubleshoot the audio communication channels for its ground teams, which dropped several times during the rehearsal. Artemis' ground crew will review data from the wet dress rehearsal and address those problems. NASA then has to conduct another test to confirm that they were resolved before announcing the mission's launch window.
NASA completed a wet dress rehearsal for the Artemis II mission in the early morning hours on Feb. 3. To allow teams to review data and conduct a second wet dress rehearsal, NASA will now target March as the earliest possible launch opportunity for the Artemis II mission.… pic.twitter.com/jSnCUPLQb6
This article originally appeared on Engadget at https://www.engadget.com/science/space/nasa-moves-artemis-2-launch-to-march-after-hydrogen-leak-during-testing-140000351.html?src=rss
Elon Musk’s SpaceX has acquired Musk’s xAI, the companies announced. The merger will “form the most ambitious, vertically-integrated innovation engine on (and off) Earth, with AI, rockets, space-based internet, direct-to-mobile device communications and the world’s foremost real-time information and free speech platform,” Musk wrote in an update.
The AI company, currently best known for its CSAM-generating chatbot, might seem like a strange fit for a rocket company. But SpaceX is key to Musk’s latest scheme to build AI data centers in space. In his update, Musk wrote that “global electricity demand for AI simply cannot be met with terrestrial solutions” and that moving the resource-intensive operations to space is “the only logical solution.” SpaceX just days ago filed an application with the FCC to create an “orbital data center” by launching a million new satellites.
Musk also claimed that, eventually, space-based data centers will enable other advancements in space travel. “The capabilities we unlock by making space-based data centers a reality will fund and enable self-growing bases on the Moon, an entire civilization on Mars and ultimately expansion to the Universe.” Notably, it’s not the first time Musk has made lofty claims about Mars. He predicted in 2017 that SpaceX would send crewed missions to Mars by 2024.
This also isn’t the first time Musk has acquired one of his own companies. He merged xAI and X last year, which means SpaceX now owns the social network Musk bought in 2022. And he recently announced that Tesla was investing $2 billion into xAI. SpaceX is planning to go public later this year in an initial public offering (IPO) that could value the company at more than $1 trillion, according to Bloomberg, which notes that SpaceX has also “discussed a possible merger with Tesla.”
This article originally appeared on Engadget at https://www.engadget.com/ai/elon-musks-spacex-has-acquired-his-ai-company-xai-221617040.html?src=rss
Blue Origin plans to focus on developing its human lunar capabilities, so it won’t be sending tourists to space for at least the next two years. That means we won’t be seeing any New Shepard launches for quite some time. Blue Origin is one of the companies NASA chose to develop human landing systems for its Artemis program, along with SpaceX. Specifically, it will work on landers for the Artemis III and Artemis V missions.
The company was originally contracted to build the human landing system that would transfer astronauts from NASA’s Gateway station to the moon’s South Pole region for the Artemis V mission. But last year, NASA asked Blue Origin to design an alternative lander for Artemis III after SpaceX experienced delays due to Starship’s failed tests. Artemis III is expected to be the first crewed moon landing mission of the program, and the Trump administration wants it to happen before the end of the president’s term.
New Shepard takes tourists to suborbital space, where they experience a few minutes of weightlessness before the spacecraft makes its way back to Earth. Jeff Bezos was one of the passengers on New Shepard’s first tourist flight back in 2021. Since then, it has flown and landed 37 more times and carried 98 passengers to the Kármán line, including Katy Perry and William Shatner.
This article originally appeared on Engadget at https://www.engadget.com/science/space/blue-origin-is-pausing-its-space-tourist-flights-to-work-on-lunar-landers-for-nasa-143000058.html?src=rss
Since 2021, NASA's Perseverance rover has achieved a number of historic milestones, including sending back the first audio recordings from Mars. Now, nearly five years after landing on the Red Planet, it just achieved another feat. This past December, Perseverance successfully completed a route through a section of the Jezero crater plotted by Anthropic's Claude chatbot, marking the first time NASA has used a large language model to pilot the car-sized robot.
Between December 8 and 10, Perseverance drove approximately 400 meters (about 437 yards) through a field of rocks on the Martian surface mapped out by Claude. As you might imagine, using an AI model to plot a course for Perseverance wasn't as simple as inputting a single prompt.
As NASA explains, routing Perseverance is no easy task, even for a human. "Every rover drive needs to be carefully planned, lest the machine slide, tip, spin its wheels, or get beached," NASA said. "So ever since the rover landed, its human operators have painstakingly laid out waypoints — they call it a 'breadcrumb trail' — for it to follow, using a combination of images taken from space and the rover’s onboard cameras."
To get Claude to complete the task, NASA had to first provide Claude Code, Anthropic's programming agent, with the "years" of contextual data from the rover before the model could begin writing a route for Perseverance. Claude then went about the mapping process methodically, stringing together waypoints from ten-meter segments it would later critique and iterate on.
This being NASA we're talking about, engineers from the agency's Jet Propulsion Laboratory (JPL) made sure to double-check the model's work before sending it to Perseverance. The JPL team ran Claude's waypoints through a simulation they use every day to confirm the accuracy of commands sent to the rover. In the end, NASA says it only had to make "minor changes" to Claude's route, with one tweak stemming from ground-level images the team had access to that Claude hadn't seen during its planning process.
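The workflow NASA describes — propose waypoints in ten-meter segments, critique and revise them, then validate the full route in simulation before it ever reaches the rover — can be sketched roughly as follows. Everything here (the hazard map, the segment stepper, the simulator) is a hypothetical stand-in for illustration, not JPL's actual tooling:

```python
# Sketch of a plan-critique-validate loop. The hazard model and
# simulator are toy stand-ins, not real JPL code.

SEGMENT_M = 10  # plan in ten-meter segments, as described in the article

def hazardous(p):
    # Stand-in hazard map: treat one known rock field as off-limits.
    return 40 <= p[0] <= 60 and p[1] < 5

def propose_segment(start, goal):
    # Propose a step toward the goal; critique it and sidestep if hazardous.
    dx = min(SEGMENT_M, goal[0] - start[0])
    candidate = (start[0] + dx, start[1])
    if hazardous(candidate):
        candidate = (candidate[0], candidate[1] + SEGMENT_M)  # revised waypoint
    return candidate

def plan_route(start, goal):
    # String waypoints together until the goal's easting is reached.
    route = [start]
    while route[-1][0] < goal[0]:
        route.append(propose_segment(route[-1], goal))
    return route

def simulate(route):
    # Stand-in for the validation sim: reject any hazardous waypoint.
    return all(not hazardous(p) for p in route)

route = plan_route((0, 0), (400, 0))  # ~400 meters, as in the actual drive
assert simulate(route)
```

The key design point mirrors the article: the planner's output is never trusted directly; it passes through an independent validation step, and a human (or here, an assertion) signs off before anything is sent to hardware.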
"The engineers estimate that using Claude in this way will cut the route-planning time in half, and make the journeys more consistent," NASA said. "Less time spent doing tedious manual planning — and less time spent training — allows the rover’s operators to fit in even more drives, collect even more scientific data, and do even more analysis. It means, in short, that we’ll learn much more about Mars."
While the productivity gains offered by AI are often overstated, in the case of NASA, any tool that could allow its scientists to be more efficient is sure to be welcome. Over the summer, the agency lost about 4,000 employees – accounting for about 20 percent of its workforce – due to Trump administration cuts. Going into 2026, the president had proposed gutting the agency's science budget by nearly half before Congress ultimately rejected that plan in early January. Still, even with its funding preserved just below 2025 levels, the agency has a tough road ahead. It's being asked to return to the Moon with less than half the workforce it had during the height of the Apollo program.
For Anthropic, meanwhile, this is a major feat. You may recall last spring Claude couldn't even beat Pokémon Red. In less than a year, the company's models have gone from struggling to navigate a simple 8-bit Game Boy game to successfully plotting a course for a rover on a distant planet. NASA is excited about the possibility of future collaborations, saying "autonomous AI systems could help probes explore ever more distant parts of the solar system."
This article originally appeared on Engadget at https://www.engadget.com/ai/nasa-used-claude-to-plot-a-route-for-its-perseverance-rover-on-mars-203150701.html?src=rss
The Sundance documentary Ghost in the Machine boldly declares that the pursuit of artificial intelligence, and Silicon Valley itself, is rooted in eugenics.
Director Valerie Veatch makes the case that the rise of techno-fascism from the likes of Elon Musk and Peter Thiel is a feature, not a bug. That may sound hyperbolic, but Ghost in the Machine, which is built around interviews with philosophers, AI researchers, historians and computer scientists, leaves little room for doubt.
But even I was surprised to learn that we can trace the impact of eugenics in tech all the way back to Karl Pearson, the mathematician who pioneered the field of statistics, and who also spent his life trying to quantify the differences between races. (Guess who he believed was superior.) His legacy was continued by William Shockley, a co-creator of the transistor, an avowed white supremacist who spent his later years espousing (now debunked) theories around IQ and racial differences.
An early robot toy.
Valerie Veatch for "Ghost in the Machine"
As a Stanford engineering professor, Shockley fostered a culture of prioritizing white men over women and minorities, which ultimately shaped the way Silicon Valley looks today. His line of thinking could have had an influence on John McCarthy, the Stanford researcher who coined the term “artificial intelligence” in 1955.
Through its many interviews, which include the likes of AI researcher Dr. Emily Bender, historian Becca Lewis and media theorist Douglass Rushkoff, Ghost in the Machine paints the rise of AI as a fascistic project that aims to demean humans and establish the techno-elite as our de facto rulers. Given how much our lives are already dominated by gadgets and social networks from companies that have pioneered addictive engagement over user safety, it's easy to imagine history repeating itself with AI.
Ghost in the Machine doesn't leave any room for considering AI's potential benefits, which could lead proponents of the technology to dismiss it as a hit job. But we're currently at the apex of the AI hype cycle, after Big Tech has invested hundreds of billions of dollars in this technology and spent years shoving it down our throats without proving why it's actually useful to most people. AI should be able to withstand a bit of criticism.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/sundance-doc-ghost-in-the-machine-draws-a-damning-line-between-ai-and-eugenics-180613367.html?src=rss
Researchers using the James Webb Space Telescope have found a galaxy that is offering new data about the early stages of the universe's existence. The latest discovery shared by astronomers is about a bright galaxy dubbed MoM-z14. According to the team, this galaxy existed 280 million years after the Big Bang.
That sounds like a long time, but in the context of the universe's estimated 13.8 billion years of existence, it makes MoM-z14 one of the closest examples astronomers have found to the Big Bang itself. As a result, MoM-z14 can offer some insights, and some surprises, about what the early stages of the universe entailed.
"With Webb, we are able to see farther than humans ever have before, and it looks nothing like what we predicted, which is both challenging and exciting," lead author Rohan Naidu of Massachusetts Institute of Technology said. The findings about this galaxy were published in the Open Journal of Astrophysics.
The scientists were able to date MoM-z14 with Webb's Near-Infrared Spectrograph instrument, analyzing how light from the galaxy shifted to longer wavelengths as it traveled to reach the telescope. One of the initial questions sparked by this bright galaxy centers on the presence of nitrogen: some early galaxies, including MoM-z14, have revealed higher nitrogen concentrations than scientists had thought possible. Another topic of interest is reionization, the process by which stars produced enough light and energy to burn through the dense hydrogen fog that filled the early universe.
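The dating method here boils down to a simple ratio: the farther light has traveled, the more the expansion of space stretches its wavelength, and 1 + z = λ_observed / λ_rest. A minimal sketch, using the standard rest-frame Lyman-alpha line and a z ≈ 14 value consistent with the galaxy's "z14" designation (the specific observed wavelength below is illustrative, not a figure from the paper):

```python
# Redshift from observed vs. rest-frame wavelength: 1 + z = lambda_obs / lambda_rest.
# A sketch of the measurement principle, not the team's actual pipeline.

LYMAN_ALPHA_NM = 121.567  # rest-frame Lyman-alpha emission line

def redshift(observed_nm, rest_nm=LYMAN_ALPHA_NM):
    return observed_nm / rest_nm - 1

# At z ~ 14, Lyman-alpha is stretched to roughly 1820 nm, deep in the
# near-infrared -- which is why Webb's NIRSpec is the right instrument.
z = redshift(1823.5)  # illustrative observed wavelength in nm
print(round(z, 1))  # prints 14.0
```

Mapping a redshift to "280 million years after the Big Bang" then requires a cosmological model, which astronomers handle with standard tools rather than this back-of-the-envelope ratio.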
“It’s an incredibly exciting time, with Webb revealing the early Universe like never before and showing us how much there still is to discover,” said Pennsylvania State University graduate student and team member Yijia Li.
This article originally appeared on Engadget at https://www.engadget.com/science/space/astronomers-share-new-insights-about-the-early-universe-via-the-webb-space-telescope-213311848.html?src=rss
The romance of stargazing usually hits the reality of traditional astrophotography pretty hard. Heavy mounts, polar alignment, cables running from telescope to laptop, and software that can turn a clear night into troubleshooting instead of wonder. Phone cameras and basic star apps have helped, but they still leave a gap between pointing at the sky and capturing something worth keeping.
Seestar S30 Pro is a smart telescope that weighs about 1.65kg and folds a telescope, auto-focuser, camera, Alt-Az mount, filter wheel, and controller into one compact body. You power it on, tap your phone to connect, and the system is ready. The idea is to replace a trunk full of gear with a palm-sized observatory that lives in a backpack but still feels like a serious instrument.
Setting it up under a clear sky, you open the app and choose from more than 80,000 deep-sky objects and 600,000 stars. With one-tap GOTO, the S30 Pro slews to the target, locks on, and starts tracking. You pick a mode (Stargazing, Milky Way, Solar System, or Scenery) and let the dual-lens 4K system with its IMX585 telephoto and IMX586 wide-angle sensors stack and refine frames.
Milky Way and star trails modes let you watch the sky drift across the frame or trace arcs over time-lapse. Built-in 8K stitching automatically mosaics horizontal or vertical panoramas, expanding the field far beyond a single frame. A freeze-the-ground feature separates foreground from sky, keeping the landscape sharp while the Milky Way stays crisp and trail-free, all from a single tap that would normally require separate exposures.
The 4-element apochromatic lens with extra-low dispersion glass minimizes chromatic aberration, so stars stay round and blacks stay neutral. A built-in triple filter system (dark field, UV/IR cut, and a light pollution filter tuned for OIII and Hα) adapts to different conditions. Active anti-dew control gently warms the optics to prevent condensation, so long sessions do not end with a fogged-over lens and hours of wasted time.
Plan mode lets you schedule a target and let Seestar handle locating, tracking, capturing, and processing while you sleep, so you wake up to finished data instead of staying up all night monitoring exposures. For longer exposures, an equatorial mode with additional accessories counters Earth’s rotation. During the day, the same device becomes an ultra-telephoto camera for birds, distant landscapes, or rocket launches, with smart tracking to keep subjects centered.
The S30 Pro sits inside a broader ecosystem, from the telescope network that lets you control Seestar units across regions to the AI assistant that responds to voice commands, and ASCOM Alpaca support that opens it up to software like NINA for advanced workflows. It is framed less as a gadget and more as a platform for exploring the sky, making the first steps effortless while leaving room to grow.
Here's a use of AI that appears to do more good than harm. A pair of astronomers at the European Space Agency (ESA) developed a neural network that searches through space images for anomalies. The results were far beyond what human experts could have done. In two and a half days, it sifted through nearly 100 million image cutouts, discovering 1,400 anomalous objects.
The creators of the AI model, David O'Ryan and Pablo Gómez, call it AnomalyMatch. The pair trained it on (and applied it to) the Hubble Legacy Archive, which houses tens of thousands of datasets from Hubble's 35-year history. "While trained scientists excel at spotting cosmic anomalies, there's simply too much Hubble data for experts to sort through at the necessary level of fine detail by hand," the ESA wrote in its press release.
After less than three days of scanning, AnomalyMatch returned a list of likely anomalies. It still requires human eyes at the end: Gómez and O'Ryan reviewed the candidates to confirm which were truly abnormal. Among the 1,400 anomalous objects the pair confirmed, more than 800 were previously undocumented.
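The triage pattern described here — score every cutout automatically, then surface only the top-ranked candidates for human review — can be sketched like this. The scoring function below is a deliberately crude placeholder; the real AnomalyMatch model is a trained classifier, not a brightness threshold:

```python
# Score-then-review triage, as in the AnomalyMatch workflow.
# anomaly_score is a toy stand-in for a trained model.

def anomaly_score(cutout):
    # Placeholder heuristic: flag cutouts whose brightness deviates
    # strongly from an archive-wide baseline.
    baseline = 0.5
    return abs(cutout["brightness"] - baseline)

def triage(cutouts, top_k):
    # Rank the whole archive and keep only top_k for human review.
    ranked = sorted(cutouts, key=anomaly_score, reverse=True)
    return ranked[:top_k]

# Mostly ordinary cutouts, plus a handful of unusually bright ones.
archive = [{"id": i, "brightness": 0.5} for i in range(95)]
archive += [{"id": 1000 + i, "brightness": 0.95} for i in range(5)]

candidates = triage(archive, top_k=5)
assert all(c["id"] >= 1000 for c in candidates)
```

The payoff is the same as in the article: the machine compresses ~100 million cutouts into a shortlist small enough for two humans to vet by eye.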
Most of the results showed galaxies merging or interacting, which can lead to odd shapes or long tails of stars and gas. Others were gravitational lenses. (That's where the gravity of a foreground galaxy bends spacetime so that the light from a background galaxy is warped into a circle or arc.) Other discoveries included planet-forming disks viewed edge-on, galaxies with huge clumps of stars and jellyfish galaxies. Adding a bit of mystery, there were even "several dozen objects that defied classification altogether."
"This is a fantastic use of AI to maximize the scientific output of the Hubble archive," Gómez is quoted as saying in the ESA's announcement. "Finding so many anomalous objects in Hubble data, where you might expect many to have already been found, is a great result. It also shows how useful this tool will be for other large datasets."
This article originally appeared on Engadget at https://www.engadget.com/ai/astronomers-discover-over-800-cosmic-anomalies-using-a-new-ai-tool-205135155.html?src=rss
Elon Musk is reportedly looking to finally take SpaceX public after years of resistance, according to sources who spoke to The Wall Street Journal. The company has long said it wouldn't pursue an IPO until it had established a presence on Mars. That isn't happening anytime soon.
So why now? Company insiders have suggested it's because Musk wants to build AI data centers in space. Google recently announced it was looking into putting a data center in space, with test launches scheduled for 2027. Musk reportedly wants to beat his rival to the punch, but SpaceX would need the billions of dollars in capital that an IPO would deliver. Putting a giant center in space isn't cheap.
Our TPUs are headed to space!
Inspired by our history of moonshots, from quantum computing to autonomous driving, Project Suncatcher is exploring how we could one day build scalable ML compute systems in space, harnessing more of the sun’s power (which emits more power than 100… pic.twitter.com/aQhukBAMDp
Sources say that Musk wants to complete the IPO by July. SpaceX is reportedly expected to select banks to lead the stock offering in the near future.
This is also being seen as an attempt to boost xAI, which trails behind rivals like OpenAI and Google in the AI race. If SpaceX were to be successful in putting data centers in space, it's likely that xAI would get a sweetheart deal given that Musk runs both companies. Then they could pass money to one another in perpetuity, which seems to be the AI way.
Other companies have also begun considering jettisoning data centers into the great beyond. Blue Origin CEO Jeff Bezos recently suggested that shifting data centers to orbit makes sense. OpenAI CEO Sam Altman has been looking into partnering or purchasing a rocket maker called Stoke Space for a similar reason.
Of course, putting data centers in space is an extraordinary undertaking. There are serious issues that must be overcome, from latency to heat dissipation and radiation. Components must be launched and the structure must be built in space. WSJ reports that SpaceX made a breakthrough of some sort last year, but the company hasn't announced specifics.
If we need giant data centers to generate Garfield memes or whatever, I'd rather have them in space. Microsoft's latest AI data center in Wisconsin takes up 325 acres. Meta recently announced a data center that would be nearly the size of Manhattan. These structures hoover up energy and water, strain local resources, create pollution and offer just a few long-term local jobs.
This article originally appeared on Engadget at https://www.engadget.com/science/space/elon-musk-is-reportedly-trying-to-take-spacex-public-170337053.html?src=rss