Nintendo is suing the US government over Trump’s tariffs

Nintendo of America is suing the US government, including the Department of the Treasury, the Department of Homeland Security and US Customs and Border Protection, over its tariff policy, Aftermath reports. The video game giant already raised prices on the Nintendo Switch in August 2025 in response to “market conditions,” but has so far left the price of its newer Switch 2 console unchanged.

Nintendo’s lawsuit, filed in the US Court of International Trade, cites a Supreme Court ruling from February that upheld lower courts’ opinions that the Trump administration’s global tariffs were illegal. Nintendo’s lawyers claim that the video game company has been “substantially harmed by the unlawful execution and imposition” of “unauthorized Executive Orders,” and by the fees Nintendo has already paid to import products into the country. In response, the company is seeking a “prompt refund, with interest” of the tariffs it has paid.

“We can confirm we filed a request,” Nintendo of America said in a statement. “We have nothing else to share on this topic.”

While taxes and other trade policies are supposed to be set by Congress, President Donald Trump implemented a collection of global tariffs over the course of his first year in office using executive orders and the International Emergency Economic Powers Act (IEEPA), a law that gives the president expanded control over trade during a global emergency. The Trump administration has positioned tariffs as a way to punish enemies and bargain with trade partners, but many companies have passed the increased price of importing goods onto customers.

In upholding opinions from the US District Court for the District of Columbia and the US Court of International Trade, the Supreme Court removed the Trump administration’s ability to collect tariffs using IEEPA, but didn’t clarify how the tariffs the government had illegally collected should be returned to companies. Like Nintendo, other companies have decided filing a lawsuit is the best way to get refunded.

The Guardian reports that US Customs and Border Protection is already preparing a system to process refunds for affected companies, but that might not mark the end of Trump’s tariff regime. In a press conference held after the Supreme Court released its decision, the president announced plans to introduce tariffs using other, more constrained methods. Tariffs aren’t the only obstacle Nintendo faces, either. The company could also be forced to raise the price of its consoles in response to the current RAM shortage.

This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/nintendo-is-suing-the-us-government-over-trumps-tariffs-191849003.html?src=rss

UK government delays AI copyright rules amid artist outcry

The UK government is working on a controversial data bill that would allow AI companies like Google and OpenAI to train their models on copyrighted materials without consent. However, following a two-month consultation, it looks like passage of the law will be delayed. "Copyright is going to be kicked down the road," a person with knowledge of the matter told The Financial Times.

Responses by stakeholders during the consultation period weren't favorable to any of the government's proposed ideas for use of copyrighted materials, the FT's sources said. There's no expectation now that an AI bill will be part of the King's Speech set for May this year. 

As a result, ministers have decided to go back to the drawing board and spend more time exploring other options. The House of Lords Communications and Digital Committee called on the government to develop a licensing-first regime "underpinned by robust transparency that safeguards creators' livelihoods while supporting sustainable AI growth."

The UK government's preferred position on the bill (also backed by tech giants like Google) has been that copyright holders must formally opt out if they don't want their materials used to train AI models. However, publishers, filmmakers, musicians and others have said that this would be impractical and an existential threat to the UK's creative industries.

The House of Lords took the side of artists and introduced an amendment that would require tech companies to disclose which copyright-protected works were used to train AI models. That addition, however, was blocked by the UK's House of Commons in May last year.

The UK's majority Labour government — already under fire for its handling of the economy — has taken hits from publishers, musicians, authors and other creative groups over the proposed law. Elton John called the government "absolute losers" while Paul McCartney said that AI has its uses but "it shouldn't rip creative people off." McCartney and other artists were part of a "silent album" meant to show the impact of IP theft by AI.

Baroness Beeban Kidron from the House of Lords has also ripped the government over the AI bill. "Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it," she said last year. "It's astonishing that a Labour government would abandon the labor force of an entire section."

This article originally appeared on Engadget at https://www.engadget.com/ai/uk-government-delays-ai-copyright-rules-amid-artist-outcry-113937154.html?src=rss

Anthropic says it will challenge Defense Department’s supply chain risk designation in court

In a new blog post, Anthropic CEO Dario Amodei confirmed that the company received a letter from the Defense Department officially labeling it a supply chain risk. He said he doesn’t “believe this action is legally sound,” and that his company sees “no choice” but to challenge it in court. Hours before Amodei published the post, the Pentagon announced that it notified the company that its “products are deemed a supply chain risk, effective immediately.”

If you’ll recall, the Defense Department (called the Department of War under the current administration) threatened to give the company the designation typically reserved for firms from adversaries like China if it didn’t agree to remove its safeguards over mass surveillance and autonomous weapons. President Trump then ordered federal agencies to stop using Anthropic’s tech.

Amodei explained that the designation has a narrow scope because it only exists to protect the government. That is why the general public, and even Defense Department contractors, can still use Anthropic’s Claude chatbot and its AI technologies. Microsoft told CNBC that it will continue using Claude after its lawyers concluded that it can keep working with Anthropic on non-defense-related projects.

The CEO also said that his company has had “productive conversations” with the department over the past few days. He said that they were looking at ways to serve the Pentagon that adhere to its two conditions, namely that its technology not be used for mass surveillance or for the development of fully autonomous weapons, and at ways to “ensure a smooth transition if that is not possible.” That confirms reports that Anthropic is back in talks with the agency in an effort to reach a new deal. In addition, he apologized for a leaked internal memo, wherein he reportedly said that OpenAI’s messaging about its own deal with the department is “just straight up lies.”

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-says-it-will-challenge-defense-departments-supply-chain-risk-designation-in-court-054459618.html?src=rss

COPPA 2.0 passes the Senate again, unanimously this time

Today the US Senate unanimously passed proposed legislation known as COPPA 2.0. This measure, fully named the Children and Teens’ Online Privacy Protection Act, aims to create new protections for younger users online, such as blocking platforms from collecting their personal data without consent. 

COPPA 2.0 is a modernized take on the Children’s Online Privacy Protection Act of 1998, attempting to address recent changes in common online activities, like targeted advertising, that could prove harmful to minors. Lawmakers have made several attempts to get this bipartisan bill through. While it has made varying amounts of headway in the Senate, none of the COPPA 2.0 bills to date have gotten past the House of Representatives. Industry groups such as NetChoice have previously opposed COPPA 2.0 and other measures around minors' online activity, such as KOSA, the Kids Online Safety Act. NetChoice members include Google, YouTube, Meta, Reddit, Discord, TikTok and X. Google, however, has since changed its stance and now supports COPPA 2.0.

"This bill expands the current law protecting our kids online to ensure companies cannot collect personal information from anyone under the age of 17," Senate Democratic Leader Chuck Schumer (D-NY) said in a statement about the latest result. "This is a big step forward for protecting our kids. We hope the House can join us. They haven’t thus far."

However, there has been a bigger push both domestically and internationally toward restrictions on when and how younger people engage online. Several states — Utah, California and Washington to name a few — have enacted laws requiring some level of age verification, either to access mature content online or to use social media apps at all. Many of these efforts have raised concerns about privacy regarding where and how people's personal information is stored and protected. COPPA 2.0 might wind up benefiting from the privacy debates, since it emphasizes giving teens and parents ways to protect themselves from having their data used against them, rather than asking adults to give up data in order to use the internet as usual.

Update, March 6 2026, 11:38AM ET: Article updated with additional context on Google.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/coppa-20-passes-the-senate-again-unanimously-this-time-215044656.html?src=rss

Anthropic is reportedly back in talks with the Defense Department

Anthropic is reportedly trying to reach a new deal with the US Defense Department, which could prevent the government from labeling it a supply chain risk. According to the Financial Times and Bloomberg, Anthropic CEO Dario Amodei has resumed talks with the agency over the use of its AI models. In particular, the publications say that Amodei is having discussions with Emil Michael, the Under Secretary of Defense for Research and Engineering.

The two of them were trying to work out the contract over the use of Anthropic’s models before negotiations broke down and the government soured on the company. The Times reports that they couldn’t agree on language that the AI company wanted to see to ensure that its technology would not be used for mass surveillance.

In a memo sent to Anthropic staff, Amodei reportedly said that the department offered to accept the company’s terms if it deleted a specific phrase about “analysis of bulk acquired data.” He continued that it “was the single line in the contract that exactly matched” the scenario it was “most worried about.” Anthropic, which first signed a $200 million deal with the department in 2025, refused to comply with the Pentagon’s demands. The agency then threatened to cancel its existing contract and to label it a “supply chain risk,” a designation typically reserved for Chinese companies. President Trump ordered government agencies to stop using Anthropic’s technology afterward. However, there’s a “six-month phase-out period” that reportedly allowed the government to use Anthropic’s AI tools to stage an air attack on Iran.

Amodei also said in the memo that the messaging OpenAI has been trying to convey is “just straight up lies,” the Times reports. He hinted, as well, that one of the reasons his company is now on the outs with the government is because he hasn’t “given dictator-style praise to Trump” like OpenAI’s Sam Altman has.

If you’ll recall, OpenAI announced that it reached an agreement shortly after it came out that Anthropic was having issues with the agency. Its CEO, Sam Altman, said on X that he told the government Anthropic shouldn’t be designated as a supply chain risk. He said during an AMA on the social media website that he didn’t know the details of Anthropic’s contract, but if it had been the same as the one OpenAI had signed, he thought Anthropic should have agreed to it. Anthropic’s Claude chatbot rose to the top of Apple’s Top Free Apps leaderboard after OpenAI announced its Defense Department contract, beating out ChatGPT.

Altman later posted on X that OpenAI will amend its deal with language that explicitly prohibits the use of its AI system for mass surveillance against Americans. When it comes to the military’s use of its technology, though, CNBC says that Altman told staffers that the company doesn’t “get to make operational decisions.” In an all-hands meeting, Altman reportedly said: “So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that.”

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-is-reportedly-back-in-talks-with-the-defense-department-125045017.html?src=rss

Bill Gates-backed TerraPower begins nuclear reactor construction

The Nuclear Regulatory Commission has granted approval to TerraPower to begin construction of a reactor in Wyoming. The project is the first new US commercial nuclear reactor in about a decade, according to The New York Times. TerraPower was founded by Bill Gates, and it took years for the business to receive regulatory approval for this construction effort.

TerraPower is part of a push to create more efficient and less expensive nuclear facilities as an alternative power source, particularly as AI companies and data center construction place more demands on the US' current infrastructure. TerraPower's planned reactor uses technology the company has dubbed Natrium. Using this liquid sodium approach rather than a traditional light-water reactor is part of how the company aims to reduce costs and time frames.

Advocates see nuclear reactors as a way to generate power without the climate impact of coal or gas plants. Critics point to the safety risks as a severe downside to this approach, while others question whether the creation and disposal of nuclear waste counter the environmental gains. The Gates-backed operation still isn't coming cheap. The proposed facility could cost at least $4 billion and still faces logistical challenges before coming online as planned in 2031.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/bill-gates-backed-terrapower-begins-nuclear-reactor-construction-221132639.html?src=rss

OpenAI will amend Defense Department deal to prevent mass surveillance in the US

OpenAI’s Sam Altman said the company will amend its deal with the Defense Department (or the Department of War) to explicitly prohibit the use of its AI system for mass surveillance against Americans. Altman published on X an internal memo previously sent to employees, telling them that the company will tweak the agreement to add language making that point especially clear. Specifically, it says:

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.

For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Altman also claimed in the memo that the agency affirmed that its services will not be used by US intelligence agencies, including the NSA, without a modification to their contract. He added that if he received what he believed was an unconstitutional order, he would rather go to jail than follow it.

In addition, the OpenAI CEO admitted in the memo that the company shouldn’t have rushed to get the deal out on Friday, February 27, since the issues were “super complex and demand clear communication.” Altman explained that the company was “trying to de-escalate things and avoid a much worse outcome” but it “looked opportunistic” in the end. If you’ll recall, OpenAI announced the partnership shortly after President Trump ordered all US government agencies to stop using Claude and any other Anthropic services. Notably, Anthropic started working with the US government in 2024.

The Defense Department and Secretary Pete Hegseth had been pressuring Anthropic to remove its AI’s guardrails so that it could be used for all “lawful” purposes. Those include mass surveillance and the development of fully autonomous weapons. Anthropic refused to bow down to Hegseth’s demands and in a statement said that “no amount of intimidation or punishment” will change its “position on mass domestic surveillance or fully autonomous weapons.” Trump issued the order as a result. The Defense Department had also taken the first steps to designate Anthropic as a “supply chain risk,” a label typically reserved for Chinese companies believed to be working with their country’s government.

Altman said that in his conversations with US officials, he reiterated that Anthropic shouldn’t be designated as a supply chain risk and that he hoped the Defense Department would offer it the same deal OpenAI agreed to. In an AMA session on X over the weekend, Altman clarified that he didn’t know the details of Anthropic’s agreement and how it differed from the one OpenAI signed. But if it had been the same, he thought Anthropic should have agreed to it.

After news broke of OpenAI’s deal, Anthropic’s Claude climbed its way to the number one spot of the App Store's Top Free Apps leaderboard, beating out both ChatGPT and Google Gemini. Anthropic, capitalizing on Claude’s sudden popularity, launched a memory import tool to make switching to its chatbot from another company’s easier. Meanwhile, uninstalls of ChatGPT jumped by 295 percent day-over-day, according to Sensor Tower.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-amend-defense-department-deal-to-prevent-mass-surveillance-in-the-us-050637400.html?src=rss

The US reportedly used Anthropic’s AI for its attack on Iran, just after banning it

In a lengthy post on Truth Social on February 27, President Trump ordered all federal agencies to "immediately cease all use of Anthropic's technology" following strong disagreements between the Department of Defense and the AI company. A few hours later, the US conducted a major air attack on Iran with the help of Anthropic's AI tools, according to a report from The Wall Street Journal.

The president noted in his post that there would be a "six-month phase-out period for agencies like the Department of War who are using Anthropic’s products," so federal agencies are still expected to eventually move away from using Claude or other Anthropic tech. It's also not the first time that the US used Anthropic's AI for a major military operation, as the WSJ previously reported that Claude was used in the capture of the now-removed Venezuelan president Nicolás Maduro.

Moving forward, the Department of Defense may begin transitioning towards other AI options, especially after reaching deals with both xAI and OpenAI to use their models within the federal agency's network. However, the WSJ reported that it would take months to replace Anthropic's Claude with other AI models.

This article originally appeared on Engadget at https://www.engadget.com/ai/the-us-reportedly-used-anthropics-ai-for-its-attack-on-iran-just-after-banning-it-172908929.html?src=rss

OpenAI strikes a deal with the Defense Department to deploy its AI models

OpenAI has reached an agreement with the Defense Department to deploy its models in the agency’s network, company chief Sam Altman has revealed on X. In his post, he said two of OpenAI’s most important safety principles are “prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.” Altman claimed the company put those principles in its agreement with the agency, which he called by the government’s preferred name of Department of War (DoW), and that it had agreed to honor them.

The agency closed the deal with OpenAI shortly after President Donald Trump ordered all government agencies to stop using Claude and any other Anthropic services. If you’ll recall, US Defense Secretary Pete Hegseth previously threatened to label Anthropic a “supply chain risk” if it continued to refuse to remove the guardrails on its AI, which prevent the technology from being used for mass surveillance against Americans and in fully autonomous weapons.

It’s unclear why the government agreed to team up with OpenAI if its models also have the same guardrails, but Altman said it’s asking the government to offer the same terms to all the AI companies it works with. Jeremy Lewin, the Senior Official Under Secretary for Foreign Assistance, Humanitarian Affairs, and Religious Freedom, said on X that DoW “references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms” in its contracts. Both OpenAI and xAI, which had also previously signed a deal to deploy Grok in the DoW’s classified systems, agreed to those terms. He said it was the same “compromise that Anthropic was offered, and rejected.”

Anthropic, which started working with the US government in 2024, refused to bow down to Hegseth. In its latest statement, published just hours before Altman announced OpenAI’s agreement, it repeated its stance. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic wrote. “We will challenge any supply chain risk designation in court.”

Altman added in his post on X that OpenAI will build technical safeguards to ensure the company’s models behave as they should, claiming that’s also what the DoW wanted. It’s sending engineers to work with the agency to “ensure [its models’] safety,” and it will only deploy on cloud networks. As The New York Times notes, OpenAI is not yet on Amazon cloud, which the government uses. But that could change soon, as the company has also just announced a partnership with Amazon to run its models on Amazon Web Services (AWS) for enterprise customers.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-strikes-a-deal-with-the-defense-department-to-deploy-its-ai-models-054441785.html?src=rss

Trump orders federal agencies to drop Anthropic services amid Pentagon feud

President Donald Trump has ordered all US government agencies to stop using Claude and other Anthropic services, escalating an already volatile feud between the Department of Defense and the company over AI safeguards. Taking to Truth Social on Friday afternoon, the president said there would be a six-month phase-out period for federal agencies, including the Defense Department, to migrate off of Anthropic's products.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” the president wrote. “Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”  

Before today, US Defense Secretary Pete Hegseth had threatened to label Anthropic a “supply chain risk” if it did not agree to withdraw safeguards that insist Claude not be used for mass surveillance against Americans or in fully autonomous weapons. In a post on X published after President Trump’s statement, Hegseth said he was “directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

Anthropic did not immediately respond to Engadget's request for comment. Earlier in the day, a spokesperson for the company said the contract Anthropic received after CEO Dario Amodei outlined the company's position made “virtually no progress” on preventing the outlined misuses.

"New language framed as a compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months," the spokesperson said. "We remain ready to continue talks and committed to operational continuity for the Department and America's warfighters." 

Advocacy groups like the Center for Democracy and Technology (CDT) quickly came out against the president’s threats. “This action sets a dangerous precedent. It chills private companies’ ability to engage frankly with the government about appropriate uses of their technology, which is especially important in national security settings that so often have reduced public visibility,” said CDT President and CEO Alexandra Givens, in a statement shared with Engadget. “These threats undermine the integrity of the innovation ecosystem, distort market incentives and normalize an expansive view of executive power that should worry Americans all across the political spectrum.”

For now, it appears the AI industry is united behind Anthropic. On Friday, hundreds of Google and OpenAI employees signed an open letter urging their companies to stand in "solidarity" with the lab. According to an internal memo seen by Axios, OpenAI CEO Sam Altman said the ChatGPT maker would draw the same red line as Anthropic.  

In a blog post published late on Friday, Anthropic vowed to “challenge any supply chain risk designation in court,” and assured its customers that only work related to the Defense Department would be affected. The company's full statement is available here; an excerpt is below:

Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.

We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.

No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.

Update, February 27, 9PM ET: This story was updated twice after publish. First at 6PM ET to include a link to and quotes from Hegseth about the designation of Anthropic as a supply chain risk. Later, a quote from Anthropic was added, along with a link to the company’s blog post on the subject.

This article originally appeared on Engadget at https://www.engadget.com/ai/trump-orders-federal-agencies-to-drop-anthropic-services-amid-pentagon-feud-222029306.html?src=rss