Boeing eats another $125 million loss over Starliner woes

Boeing has taken another $125 million in losses as a result of its Starliner spacecraft's delayed return from the ISS. As SpaceNews reports, the company disclosed the losses in a filing with the US Securities and Exchange Commission, along with more details about its second-quarter earnings. Boeing had already posted $288 million in losses "primarily as a result of delaying" the Crew Flight Test mission in 2023. 

The first crewed Starliner flight took off in June with NASA astronauts Butch Wilmore and Sunita Williams on board. Boeing's spacecraft was only supposed to stay docked to the ISS for eight days before ferrying the astronauts back to Earth, but issues with its hardware prevented the mission from sticking to its original timeline. 

The company had to determine what caused some of the Starliner's maneuvering thrusters to degrade as the spacecraft approached the ISS. In addition, the helium leak that had already delayed the spacecraft's launch several times seemed to have worsened. Since June, the company has been putting the spacecraft through a series of tests. Just a few days ago, on July 27, it completed a hot fire test of the Starliner's reaction control system jets and verified that the vehicle's helium leak rates remained within acceptable margins. The tests were conducted with Williams and Wilmore on board, because the tests are part of the preparations for the spacecraft's flight back home. 

NASA said the test results are still being reviewed, but once Boeing and the agency are confident the Starliner is ready, they will set a date for the astronauts' return flight. 

This article originally appeared on Engadget at https://www.engadget.com/boeing-eats-another-125-million-loss-over-starliner-woes-130027376.html?src=rss

OpenAI vows to provide the US government early access to its next AI model

OpenAI will give the US AI Safety Institute early access to its next model as part of its safety efforts, Sam Altman revealed in a tweet. Apparently, the company has been working with the consortium "to push forward the science of AI evaluations." The National Institute of Standards and Technology (NIST) formally established the Artificial Intelligence Safety Institute earlier this year, though Vice President Kamala Harris announced it back in 2023 at the UK AI Safety Summit. Based on NIST's description, the consortium is meant "to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world."

The company, along with DeepMind, similarly pledged to share AI models with the UK government last year. As TechCrunch notes, there have been growing concerns that OpenAI is making safety less of a priority as it seeks to develop more powerful AI models. There was speculation that the board kicked Sam Altman out of the company over safety and security concerns (he was very quickly reinstated), though the company told staff members in an internal memo at the time that it was because of "a breakdown in communication."

In May this year, OpenAI admitted that it had disbanded the Superalignment team it created to ensure that humanity remains safe as the company advances its work on generative artificial intelligence. Before that, OpenAI co-founder and chief scientist Ilya Sutskever, one of the team's leaders, left the company. Jan Leike, the team's other leader, quit as well, saying in a series of tweets that he had been disagreeing with OpenAI's leadership about the company's core priorities for quite some time and that "safety culture and processes have taken a backseat to shiny products." OpenAI created a new safety group at the end of May, but it's led by board members who include Altman, prompting concerns about self-policing. 

This article originally appeared on Engadget at https://www.engadget.com/openai-vows-to-provide-the-us-government-early-access-to-its-next-ai-model-110017697.html?src=rss

Google makes it easier to remove explicit deepfakes from its search results

Google has rolled out updates for Search intended to make explicit deepfakes as hard to find as possible. As part of its ongoing fight against realistic-looking manipulated images, the company is making it easier for people to get non-consensual fake imagery that features them removed from Search. 

It has long been possible for users to request the removal of those kinds of images under Google's policies. Now, whenever it grants someone's removal request, Google will also filter all explicit results on similar searches about them. The company's systems will scan for any duplicates of the offending image and remove them as well. This update could help alleviate victims' fears that the same image will pop up again on other websites. 
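
Google hasn't detailed how its duplicate scanning works, but a common technique for catching near-duplicate images is perceptual hashing, which assigns visually similar images nearly identical fingerprints even after resizing or re-encoding. Here's a minimal sketch in Python using the open-source imagehash library (the file names are hypothetical, and this is an illustration of the general technique, not Google's system):

    from PIL import Image
    import imagehash

    # Fingerprint the removed image and a candidate found elsewhere on the web.
    removed = imagehash.phash(Image.open("removed.jpg"))
    candidate = imagehash.phash(Image.open("candidate.jpg"))

    # Subtracting two hashes yields the Hamming distance between them;
    # a small distance means the images are visually near-identical.
    if removed - candidate <= 8:
        print("likely a duplicate of the removed image")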

In addition, Google has updated its ranking systems so that when a user specifically searches for explicit deepfakes of a named person, the results will surface "high-quality, non-explicit content" instead. If there are news articles about that person, for instance, the results will feature those. Based on Google's announcement, it also plans to educate users looking for deepfakes by showing them results that discuss the technology's impact on society. 

Google doesn't want its bid to banish deepfakes to wipe out results for legitimate content, like an actor's nude scene, though, and it admits it still has a lot of work to do in separating real explicit images from fake ones. While that remains a work in progress, one solution it has implemented is to demote sites in Search that have received a high volume of removals for manipulated images. That's "a pretty strong signal that it's not a high-quality site," Google explains, adding that the approach has worked well for other types of harmful content in the past.

This article originally appeared on Engadget at https://www.engadget.com/google-makes-it-easier-to-remove-explicit-deepfakes-from-its-search-results-130058499.html?src=rss

Meta explains why its AI claimed Trump’s assassination attempt didn’t happen

Meta has explained why its AI chatbot declined to respond to questions about the assassination attempt on Trump and then, in some cases, denied that the event took place. The company said it programmed Meta AI not to answer questions about an event right after it happens, because there's typically "an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain." As for why Meta AI eventually started asserting that the attempt didn't happen "in a small number of cases," the company attributed it to hallucinations. 

An AI "hallucinates" when it generates false or misleading responses to questions that require factual answers, a failure caused by factors like inaccurate training data and models struggling to reconcile multiple sources of information. Meta says it has updated its AI's responses and admits it should have done so sooner. It's still working to address the hallucination issue, though, so its chatbot could still be telling people that there was no attempt on the former president's life. 

Meta also explained why its social media platforms had been incorrectly applying a fact-check label to the photo of Trump with his fist in the air, taken right after the assassination attempt. A doctored version of that image made it look like his Secret Service agents were smiling, and the company applied a fact-check label to it. Because the original and doctored photos were almost identical, Meta's systems applied the label to the real image as well. The company has since corrected the mistake. 

Trump's supporters have cried foul over Meta AI's responses, accusing the company of suppressing the story. Google had to issue a response of its own after Elon Musk claimed that the company's search engine imposed a "search ban" on the former president. Musk shared an image showing Google's autocomplete suggesting "president donald duck" when someone types in "president donald." Google explained that the behavior was due to a bug affecting its autocomplete feature and said users can search for whatever they want at any time. 

This article originally appeared on Engadget at https://www.engadget.com/meta-explains-why-its-ai-claimed-trumps-assassination-attempt-didnt-happen-120002196.html?src=rss

NASA will shut down NASA TV on cable to focus on NASA+

NASA TV is shutting down in August. The space agency is saying goodbye to its cable channel, which is available on Dish, DirecTV and similar services, as well as through local television providers. Going forward, it will put all its focus on NASA+, its on-demand streaming service, which will now serve as the home for all its documentaries and live event coverage.

NASA+ has apparently gained four times the viewership of the agency's traditional cable channel since launching in November last year. "In a universe where the way we consume information is rapidly changing, NASA+ is helping us inspire and connect with our current generation of explorers: the Artemis Generation," said Marc Etkind of NASA's Office of Communications.

The agency's streaming service is completely free and has no ads. Viewers on mobile devices can access it via the official NASA app for iOS and Android, while those who want to watch on a bigger screen can get the agency's app for Roku, Apple TV or Fire TV. To watch NASA's coverage and shows on a computer, users can visit the official NASA+ website in their browser. 

In addition to announcing the cable channel's closure, NASA has also revealed its upcoming lineup of new shows, episodes and live event coverage. One upcoming documentary, Planetary Defenders, tackles humanity's efforts at asteroid detection and planetary defense, while Our Alien Earth follows NASA scientists doing fieldwork in some of the most extreme environments on Earth to aid the search for extraterrestrial life in the universe.

This article originally appeared on Engadget at https://www.engadget.com/nasa-will-shut-down-nasa-tv-on-cable-to-focus-on-nasa-120015334.html?src=rss

Websites accuse AI startup Anthropic of bypassing their anti-scraping rules and protocol

Freelancer has accused Anthropic, the AI startup behind the Claude large language models, of ignoring its "do not crawl" robots.txt protocol to scrape its website's data. Meanwhile, iFixit CEO Kyle Wiens said Anthropic ignored the website's policy prohibiting the use of its content for AI model training. Matt Barrie, the chief executive of Freelancer, told The Information that Anthropic's ClaudeBot is "the most aggressive scraper by far." His website allegedly got 3.5 million visits from the company's crawler within a span of four hours, which is "probably about five times the volume of the number two" AI crawler. Similarly, Wiens posted on X/Twitter that Anthropic's bot hit iFixit's servers a million times in 24 hours. "You're not only taking our content without paying, you're tying up our devops resources," he wrote. 

Back in June, Wired accused another AI company, Perplexity, of crawling its website despite the presence of the Robots Exclusion Protocol, or robots.txt. A robots.txt file contains instructions that tell web crawlers which pages they can and can't access. Compliance is voluntary, and historically it has mostly been bad bots that ignore the file. After Wired's piece came out, TollBit, a startup that connects AI firms with content publishers, reported that Perplexity isn't the only crawler bypassing robots.txt signals. While TollBit didn't name names, Business Insider said it learned that OpenAI and Anthropic were ignoring the protocol as well. 
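
As an illustration, a minimal robots.txt file looks something like this (the paths and bot name here are hypothetical). Each block names a crawler by its user agent and lists the paths it is asked to avoid:

    # Rules for every crawler: stay out of /private/, everything else is fine.
    User-agent: *
    Disallow: /private/

    # Rules for one specific crawler, which is barred from the whole site.
    User-agent: ExampleBot
    Disallow: /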

Barrie said Freelancer tried to refuse the bot's access requests at first but ultimately had to block Anthropic's crawler entirely. "This is egregious scraping [which] makes the site slower for everyone operating on it and ultimately affects our revenue," he added. As for iFixit, Wiens said the website has alarms set for high traffic, and his people were woken up at 3AM by Anthropic's activities. The company's crawler stopped scraping iFixit after the site added a line to its robots.txt file that disallows Anthropic's bot in particular, as sketched below. 
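
Since Anthropic's crawler identifies itself as ClaudeBot, the kind of rule Wiens describes would look something like this (a sketch of the approach, not iFixit's actual file):

    # Ask Anthropic's crawler, specifically, to stay off the entire site.
    User-agent: ClaudeBot
    Disallow: /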

The AI startup told The Information that it respects robots.txt and that its crawler "respected that signal when iFixit implemented it." It also said that it aims "for minimal disruption by being thoughtful about how quickly [it crawls] the same domains," which is why it's now investigating the case. 

AI firms use crawlers to collect content from websites to train their generative AI technologies. They've become the target of multiple lawsuits as a result, with publishers accusing them of copyright infringement. To head off more lawsuits, companies like OpenAI have been striking deals with publishers and websites. OpenAI's content partners so far include News Corp, Vox Media, the Financial Times and Reddit. iFixit's Wiens seems open to signing a deal for the repair website's articles as well, telling Anthropic in a tweet that he's willing to have a conversation about licensing content for commercial use.

This article originally appeared on Engadget at https://www.engadget.com/websites-accuse-ai-startup-anthropic-of-bypassing-their-anti-scraping-rules-and-protocol-133022756.html?src=rss

Amazon drops the first teaser for its upcoming Yakuza adaptation

Amazon has released the first teaser video for Like A Dragon: Yakuza, its live-action adaptation of SEGA's Yakuza games, at San Diego Comic-Con. There's a lot of focus on the inking of Kazuma Kiryu's iconic dragon tattoo, but you'll also get glimpses of Kamurocho's nightlife, various characters from the series and the underground fight club that shows up as a mini-game across the franchise. In the last few seconds of the video, a shirtless Kiryu heads toward a circle of cheering spectators betting on his match. 

When the company announced the show in June, it described the adaptation as a "crime-suspense-action series" that "follows the life, childhood friends, and repercussions of the decisions of Kazuma Kiryu, a fearsome and peerless Yakuza warrior with a strong sense of justice, duty, and humanity." Seeing as the show is set between 1995 and 2005, it will most likely be based on the first Yakuza game, with glimpses of the years that followed the events of Yakuza 0.

The first three episodes of Like A Dragon: Yakuza will arrive on Prime Video on October 24, with the next three coming on October 31. Ryoma Takeuchi (Kamen Rider Drive, Roppongi Class) stars as Kiryu. And as the teaser reveals, his best friend Nishiki, who plays a pivotal role in the story, will be portrayed by Kento Kaku (Netflix's House of Ninjas).

This article originally appeared on Engadget at https://www.engadget.com/amazon-drops-the-first-teaser-for-its-upcoming-yakuza-adaptation-110442602.html?src=rss

NASA’s Perseverance rover found a rock on Mars that could indicate ancient life

NASA's Perseverance rover has been collecting samples from Mars since 2021, but one of its most recently collected rocks could help it achieve its goal of finding evidence of ancient life on the planet. Nicknamed Cheyava Falls after the tallest waterfall in the Grand Canyon, the 3.2-by-2-foot rock contains "chemical signatures and structures" that could have been formed by microbial life billions of years ago. 

Perseverance collected the rock on July 21 from what was once a Martian river valley carved by flowing water. The sample, pictured in close-up below, exhibits large white calcium sulfate veins running along its length, which indicate that water did run through the rock at one point. 

More importantly, it contains millimeter-size marks resembling "leopard spots" across its central reddish band. On Earth, similar spots can form on sedimentary rocks when chemical reactions turn hematite, one of the minerals responsible for Mars' reddish color, white. Those reactions can release iron and phosphate, which could have served as an energy source for microbes. 

The rover's Planetary Instrument for X-ray Lithochemistry (PIXL) tool already determined that the black rings around the spots contain iron and phosphate. However, that doesn't automatically mean that the rock truly did serve as a host for ancient microbes. 

[Image: A close-up of the reddish rock. Credit: NASA/JPL-Caltech/MSSS]

The spots could also have been formed by non-biological processes, and that's something scientists will have to figure out. "We cannot say right now that we have discovered life on Mars," said Katie Stack Morgan, the mission's deputy project scientist. "But what we are saying is that we have a potential biosignature, which is a set of features that could have a biological origin but do need further study and more data." 

NASA still has to bring the samples Perseverance has collected, including Cheyava Falls, back to Earth. As The New York Times notes, the Mars Sample Return mission is years behind schedule and wouldn't get rocks back from the red planet until 2040, instead of the early 2030s as originally planned. NASA recently asked aerospace companies for alternative solutions to return the samples sooner and will finance studies that are due later this year. Scientists will also have to conduct extensive testing to rule out contamination, non-biological processes and other possible explanations for how the leopard spots formed before they can declare them evidence of ancient Martian life. 

This article originally appeared on Engadget at https://www.engadget.com/nasas-perseverance-rover-found-a-rock-on-mars-that-could-indicate-ancient-life-150006064.html?src=rss