Nintendo profits fall 55 percent as people save their cash for the Switch 2

People are so excited for the next-gen Switch that they're likely holding off on buying Nintendo's current consoles and games. At least, that's what the company's latest earnings report seems to indicate. For the quarter ending June 30, Nintendo posted a net profit of 80.9 billion Japanese yen, which is higher than its forecast but more than 50 percent lower than its net profit for the same period last fiscal year. The company also said it sold only 2.1 million Switch consoles during the quarter, a 46.3 percent decline in unit sales year-on-year. Its games didn't sell well either: Nintendo's software sales of 30.64 million units were 41.3 percent lower than last fiscal year's.

In its report, Nintendo admits that the low sales figures for games were caused by the lack of big releases like the previous year's The Legend of Zelda: Tears of the Kingdom. The Super Mario Bros. Movie also helped "energize" its business back then. But since hardware sales for this quarter are similar to the previous one's, Nintendo considers its Switch sales to be stable.

Nintendo is expected to launch its "Switch 2" console soon. The device was expected to come out sometime this year, but according to reports published in recent months, it will be released in early 2025 instead. Very little is known about the upcoming console, but rumors say it will have backward compatibility, as well as 4K capabilities.

This article originally appeared on Engadget at https://www.engadget.com/nintendo-profits-fall-55-percent-as-people-save-their-cash-for-the-switch-2-140019403.html?src=rss

Intel makes good on CPU instability issues by extending warranties by two years

Intel is extending the warranties for its troubled 13th and 14th-gen Core processors by two years, the company has announced in a community post. It says it will share more details in the coming days, but for now, customers waiting for their computers to conk out can at least know that they may not have to spend money to replace their processors. Intel revealed in July that, after extensive analysis, it found that elevated operating voltage was causing these processor models' instability issues for many users.

A microcode algorithm has apparently been sending incorrect voltage requests to the processors, causing users' computers to crash. The company is working on a patch that it plans to release in mid-August, but for some people, it may come too late: As Tom's Hardware notes, the patch will not fix processors that are already crashing. An indie gaming studio called Alderon Games reported that, based on its personnel's observations, the processors' failure rate is 100 percent: even CPUs that currently work well deteriorate and eventually fail. That's why an extended warranty is very much welcome, especially since some models come with only a one-year warranty.

"Intel is committed to making sure all customers who have or are currently experiencing instability symptoms on their 13th and/or 14th Gen desktop processors are supported in the exchange process," the company wrote in its announcement. It also admitted that "this has been a challenging issue to unravel and definitively root cause." For now, Intel advises those who purchased systems from computer manufacturers to reach out to the brand's support team. Meanwhile, people who purchased boxed CPUs for their PCs can contact Intel's customer support.

This article originally appeared on Engadget at https://www.engadget.com/intel-makes-good-on-cpu-instability-issues-by-extending-warranties-by-two-years-130010567.html?src=rss

Court blocks the FCC’s efforts to restore net neutrality… again

The Federal Communications Commission voted to restore net neutrality protections back in April, but the process hasn't been as smooth as its proponents would like. According to Reuters and Fast Company, the Sixth Circuit US Court of Appeals has temporarily blocked the rules from taking effect because the broadband providers' legal case challenging their reinstatement will likely succeed. A group of cable, telecom and mobile internet providers sued the FCC shortly after its three Democratic commissioners voted to restore net neutrality protections.

Under net neutrality rules, broadband services are classified as essential communications resources. That gives the FCC the power to regulate broadband internet and to prohibit providers from offering paid prioritization, which some ISPs have used to charge bandwidth-heavy companies like Netflix additional fees. The rules also prevent ISPs from blocking or slowing down traffic to specific websites.

Net neutrality's opponents have long argued that the rules put off investors. The group of providers that filed this recent case against the FCC said the rules' reinstatement would force them to "forego valuable new services, incur prohibitive compliance costs and pay more to obtain capital." In its decision, the court wrote that the "commission has failed to satisfy the high bar for imposing such regulations" and that "net neutrality is likely a major question requiring clear congressional authorization."

The commission originally approved net neutrality rules back in 2015, though they had been in the works for years before that. Under the Trump administration, however, the FCC voted to roll back the rules and reclassify broadband internet services under Title I, which gives the agency less oversight over the industry. The rules were supposed to take effect on July 22 after the FCC voted to reinstate them, but a court blocked them until August 5. Now, net neutrality's proponents will have to wait even longer: The appeals court has scheduled oral arguments on the issue for late October or early November, shortly before or during the 2024 US presidential election.

This article originally appeared on Engadget at https://www.engadget.com/court-blocks-the-fccs-efforts-to-restore-net-neutrality-again-123029311.html?src=rss

Boeing eats another $125 million loss over Starliner woes

Boeing has revealed that it has taken another $125 million in losses as a result of its Starliner spacecraft's delayed return from the ISS. As SpaceNews reports, the company disclosed the losses in a filing with the US Securities and Exchange Commission, along with more details about its earnings for the second quarter of the year. Boeing had already posted $288 million in losses "primarily as a result of delaying" the Crew Flight Test mission in 2023.

The first crewed Starliner flight took off in June with NASA astronauts Butch Wilmore and Sunita Williams on board. Boeing's spacecraft was only supposed to stay docked to the ISS for eight days before ferrying the astronauts back to Earth, but issues with its hardware prevented the mission from sticking to its original timeline. 

The company had to determine what caused the Starliner's maneuvering thrusters to degrade as the spacecraft approached the ISS. The helium leak that delayed the spacecraft's launch several times also seemed to have worsened. Since June, the company has been putting the spacecraft through a series of tests. Just a few days ago, on July 27, it completed a hot fire test of the Starliner's reaction control system jets and confirmed that the vehicle's helium leak rates remain within acceptable margins. The tests were conducted with Williams and Wilmore on board, because they're part of the preparations for the spacecraft's flight back home.

NASA said the tests' results are still being reviewed, but once Boeing and the agency are confident that the Starliner is ready, they will set a date for the spacecraft and the astronauts' return flight.

This article originally appeared on Engadget at https://www.engadget.com/boeing-eats-another-125-million-loss-over-starliner-woes-130027376.html?src=rss

OpenAI vows to provide the US government early access to its next AI model

OpenAI will give the US AI Safety Institute early access to its next model as part of its safety efforts, Sam Altman has revealed in a tweet. The company has apparently been working with the consortium "to push forward the science of AI evaluations." The National Institute of Standards and Technology (NIST) formally established the Artificial Intelligence Safety Institute earlier this year, though Vice President Kamala Harris announced it back in 2023 at the UK AI Safety Summit. According to NIST's description, the consortium is meant "to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world."

The company, along with DeepMind, similarly pledged to share AI models with the UK government last year. As TechCrunch notes, there have been growing concerns that OpenAI is making safety less of a priority as it seeks to develop more powerful AI models. There was speculation that the board decided to kick Sam Altman out of the company (he was very quickly reinstated) due to safety and security concerns, but the company told staff members in an internal memo at the time that it was because of "a breakdown in communication."

In May this year, OpenAI admitted that it had disbanded the Superalignment team it created to ensure that humanity remains safe as the company advances its work on generative artificial intelligence. Before that, OpenAI co-founder and chief scientist Ilya Sutskever, one of the team's leaders, left the company. Jan Leike, the team's other leader, quit as well, saying in a series of tweets that he had disagreed with OpenAI's leadership about the company's core priorities for quite some time and that "safety culture and processes have taken a backseat to shiny products." OpenAI created a new safety group at the end of May, but it's led by board members who include Altman, prompting concerns about self-policing.

This article originally appeared on Engadget at https://www.engadget.com/openai-vows-to-provide-the-us-government-early-access-to-its-next-ai-model-110017697.html?src=rss

Google makes it easier to remove explicit deepfakes from its search results

Google has rolled out updates for Search with the intention of making explicit deepfakes as hard to find as possible. As part of its long-standing and ongoing fight against realistic-looking manipulated images, the company is making it easier for people to get non-consensual fake imagery that features them removed from Search. 

Users have long been able to request the removal of those kinds of images under Google's policies. Now, whenever it grants someone's removal request, Google will also filter out explicit results in similar searches about them. The company's systems will also scan for any duplicates of the offending image and remove them. This update could help alleviate victims' fears that the same image will pop up again on other websites.

In addition, Google has updated its ranking systems so that if a user specifically searches for explicit deepfakes with a person's name, the results will surface "high-quality, non-explicit content" instead. If there are news articles about that person, for instance, then the results will feature those. Based on Google's announcement, it seems it also has plans to school the user looking for deepfakes by showing them results that discuss their impact on society. 

Google doesn't want to wipe out results for legitimate content, like an actor's nude scene, in its bid to banish deepfakes from its results page, though. It admits it still has a lot of work to do when it comes to separating legitimate explicit images from fake ones. While that's still a work in progress, one solution it has implemented is demoting sites that have received a high volume of removals for manipulated images in Search. That's "a pretty strong signal that it's not a high-quality site," Google explains, adding that the approach has worked well for other types of harmful content in the past.

This article originally appeared on Engadget at https://www.engadget.com/google-makes-it-easier-to-remove-explicit-deepfakes-from-its-search-results-130058499.html?src=rss

Meta explains why its AI claimed Trump’s assassination attempt didn’t happen

Meta has explained why its AI chatbot didn't want to respond to inquiries about the assassination attempt on Trump and then, in some cases, denied that the event took place. The company said it programmed Meta AI to not answer questions about an event right after it happens, because there's typically "an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain." As for why Meta AI eventually started asserting that the attempt didn't happen "in a small number of cases," it was apparently due to hallucinations. 

An AI "hallucinates" when it generates false or misleading responses to questions that require factual replies, due to factors like inaccurate training data and models struggling to reconcile multiple sources of information. Meta says it has updated its AI's responses and admits that it should have done so sooner. It's still working to address the hallucination issue, though, so its chatbot could still tell people that there was no attempt on the former president's life.

In addition, Meta explained why its social media platforms had been incorrectly applying a fact-check label to the photo of Trump with his fist in the air, taken right after the assassination attempt. A doctored version of that image made it look like his Secret Service agents were smiling, and the company applied a fact-check label to it. Because the original and doctored photos were almost identical, Meta's systems applied the label to the real image as well. The company has since corrected the mistake.

Trump's supporters have been crying foul over Meta AI's actions and have been accusing the company of suppressing the story. Google had to issue a response of its own after Elon Musk claimed that the company's search engine imposed a "search ban" on the former president. Musk shared an image that showed Google's autocomplete suggesting "president donald duck" when someone types in "president donald." Google explained that it was due to a bug affecting its autocomplete feature and said that users can search for whatever they want anytime. 

This article originally appeared on Engadget at https://www.engadget.com/meta-explains-why-its-ai-claimed-trumps-assassination-attempt-didnt-happen-120002196.html?src=rss

NASA will shut down NASA TV on cable to focus on NASA+

NASA TV is shutting down in August. The space agency is saying goodbye to its cable channel, which is available on Dish, DirecTV and similar services, as well as on local television providers. Going forward, it will put all its focus on NASA+, its on-demand streaming service that will serve as home to all its documentaries and live event coverage.

NASA+ has apparently drawn four times the viewership of the agency's traditional cable channel since it launched in November last year. "In a universe where the way we consume information is rapidly changing, NASA+ is helping us inspire and connect with our current generation of explorers: the Artemis Generation," said Marc Etkind from NASA's Office of Communications.

The agency's streaming service is completely free and doesn't have ads. Viewers can access it via the official NASA app for iOS and Android when they're on mobile devices, but they can also get the agency's app for Roku, Apple TV or Fire TV if they want to watch on a bigger screen. To watch NASA's coverage and shows on a computer, users can visit the official NASA+ website on their browsers. 

In addition to announcing its cable channel's closure, NASA has also revealed its upcoming lineup of new shows, episodes and live event coverage. One upcoming documentary, Planetary Defenders, tackles humanity's efforts at asteroid detection and planetary defense, while Our Alien Earth follows NASA scientists doing field work in the most extreme environments around the world to aid the search for extraterrestrial life.

This article originally appeared on Engadget at https://www.engadget.com/nasa-will-shut-down-nasa-tv-on-cable-to-focus-on-nasa-120015334.html?src=rss

Websites accuse AI startup Anthropic of bypassing their anti-scraping rules and protocol

Freelancer has accused Anthropic, the AI startup behind the Claude large language models, of ignoring its "do not crawl" robots.txt protocol to scrape its website's data. Meanwhile, iFixit CEO Kyle Wiens said Anthropic has ignored the website's policy prohibiting the use of its content for AI model training. Matt Barrie, the chief executive of Freelancer, told The Information that Anthropic's ClaudeBot is "the most aggressive scraper by far." His website allegedly got 3.5 million visits from the company's crawler within a span of four hours, which is "probably about five times the volume of the number two" AI crawler. Similarly, Wiens posted on X/Twitter that Anthropic's bot hit iFixit's servers a million times in 24 hours. "You're not only taking our content without paying, you're tying up our devops resources," he wrote.

Back in June, Wired accused another AI company, Perplexity, of crawling its website despite the presence of the Robots Exclusion Protocol, or robots.txt. A robots.txt file typically contains instructions telling web crawlers which pages they can and can't access. Compliance is voluntary, though, and bad bots tend to simply ignore it. After Wired's piece came out, TollBit, a startup that connects AI firms with content publishers, reported that Perplexity isn't the only one bypassing robots.txt signals. While TollBit didn't name names, Business Insider said it learned that OpenAI and Anthropic were ignoring the protocol as well.

Barrie said Freelancer tried refusing the bot's access requests at first, but it ultimately had to block Anthropic's crawler entirely. "This is egregious scraping [which] makes the site slower for everyone operating on it and ultimately affects our revenue," he added. As for iFixit, Wiens said the website has alarms set for high traffic, and his people were woken up at 3AM by Anthropic's activities. The company's crawler stopped scraping iFixit after the site added a line to its robots.txt file that specifically disallows Anthropic's bot.
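To illustrate how a per-bot disallow rule like iFixit's works, here is a minimal sketch using Python's standard urllib.robotparser. The domain, paths and rules below are hypothetical examples, not iFixit's actual robots.txt:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that singles out one crawler by name
# while leaving the site open to everyone else.
robots_txt = """\
User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler checks can_fetch() before requesting a page.
print(rp.can_fetch("ClaudeBot", "https://example.com/guide/repair"))   # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/guide/repair"))  # True
```

As the articles note, nothing in the protocol enforces this: the file is a request, and honoring it is entirely up to the crawler's operator.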

The AI startup told The Information that it respects robots.txt and that its crawler "respected that signal when iFixit implemented it." It also said that it aims "for minimal disruption by being thoughtful about how quickly [it crawls] the same domains," which is why it's now investigating the case. 

AI firms use crawlers to collect content from websites that they can use to train their generative AI technologies. They've been the target of multiple lawsuits as a result, with publishers accusing them of copyright infringement. To prevent more lawsuits from being filed, companies like OpenAI have been striking deals with publishers and websites. OpenAI's content partners, so far, include News Corp, Vox Media, the Financial Times and Reddit. iFixit's Wiens seems open to the idea of signing a deal for the how-to-repair website's articles as well, telling Anthropic in a tweet that he's willing to have a conversation about licensing content for commercial use.

This article originally appeared on Engadget at https://www.engadget.com/websites-accuse-ai-startup-anthropic-of-bypassing-their-anti-scraping-rules-and-protocol-133022756.html?src=rss
