Anthropic refuses to bow to Pentagon despite Hegseth’s threats

Despite an ultimatum from Defense Secretary Pete Hegseth, Anthropic said that it can't "in good conscience" comply with a Pentagon edict to remove guardrails on its AI, CEO Dario Amodei wrote in a blog post. The Department of Defense had threatened to cancel a $200 million contract and label Anthropic a "supply chain risk" if it didn't agree to remove safeguards over mass surveillance and autonomous weapons.

"Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place," Amodei said. "We remain ready to continue our work to support the national security of the United States."

In response, US Under Secretary of Defense Emil Michael accused Amodei in a post on X of wanting "nothing more than to try to personally control the US military" and being "OK putting our nation's safety at risk."

The standoff began when the Pentagon demanded that Anthropic make its Claude AI product available for "all lawful purposes" — including mass surveillance and the development of fully autonomous weapons that can kill without human supervision. Anthropic refused to offer its tech for those purposes, even with a "safety stack" built into the model.

Yesterday, Axios reported that Hegseth gave Anthropic a deadline of 5:01 PM on Friday to agree to the Pentagon's terms. At the same time, the DoD requested an assessment of its reliance on Claude, an initial step toward potentially labeling Anthropic as a "supply chain risk" — a designation usually reserved for firms from adversaries like China and "never before applied to an American company," Anthropic wrote. 

Amodei declined to change his stance and stated that if the Pentagon chose to offboard Anthropic, "we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations or other critical missions." Grok is one of the other providers the DoD is reportedly considering, along with Google's Gemini and OpenAI. 

It may not be that simple for the military to disentangle itself from Claude, however. Up until now, Anthropic's model has been the only one allowed for the military's most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid in which the US military exfiltrated the country's president, Nicolás Maduro, and his wife.

AI companies have been widely criticized for potential harm to users, but mass surveillance and weapons development would clearly take that to a new level. Anthropic's potential reply to the Pentagon was seen as a test of its claim to be the most safety-forward AI company, particularly after dropping its flagship safety pledge a few days ago. Now that Amodei has responded, the focus will shift to the Pentagon to see if it follows through on its threats, which could seriously harm Anthropic. 

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-refuses-to-bow-to-pentagon-despite-hegseths-threats-085553126.html?src=rss

Canadian government demands safety changes from OpenAI

Canadian officials summoned leaders from OpenAI to Ottawa this week to address safety concerns about ChatGPT. The crux of the government concerns was that OpenAI did not notify authorities when it banned the account of a user who allegedly committed a mass shooting in British Columbia earlier this month. 

"The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser said of the company and its AI chatbot. It's unclear what those government-led changes or rules might be. There have been two previous, unsuccessful attempts to pass an online harms act in Canada.

A recent report by The Wall Street Journal claimed that in 2025, some OpenAI employees flagged the account of the alleged shooter, Jesse Van Rootselaar, as containing potential warnings of committing real-world violence and called for leadership to notify law enforcement. Although Van Rootselaar's account was banned for policy violations, a company rep said that the account activity did not meet OpenAI's criteria for engaging the local police. 

“Those reports were deeply disturbing, reports saying that OpenAI did not contact law enforcement in a timely manner," said Canadian Artificial Intelligence Minister Evan Solomon ahead of the discussion with company leaders. "We will have a sit-down meeting to have an explanation of their safety protocols and when they escalate and their thresholds of escalation to police, so we have a better understanding of what’s happening and what they do."

OpenAI has been implicated in multiple wrongful death suits. A December 2025 lawsuit accused the company's ChatGPT of encouraging a man's "paranoid beliefs" before he killed his mother and himself. It is also at the center of one of several wrongful death lawsuits against the makers of AI chatbots for allegedly helping teenagers plan and commit suicide.

This article originally appeared on Engadget at https://www.engadget.com/ai/canadian-government-demands-safety-changes-from-openai-204924604.html?src=rss

Anthropic weakens its safety pledge in the wake of the Pentagon’s pressure campaign

Two stories about the Claude maker Anthropic broke on Tuesday that, when combined, arguably paint a chilling picture. First, US Defense Secretary Pete Hegseth is reportedly pressuring Anthropic to drop its AI safeguards and give the military unrestrained access to its Claude AI chatbot. The company then chose the same day that the Hegseth news broke to drop its centerpiece safety pledge.

On Tuesday, Anthropic said it was modifying its Responsible Scaling Policy (RSP) to lower safety guardrails. Up until now, the company's core pledge has been to stop training new AI models unless specific safety guidelines can be guaranteed in advance. This policy, which set hard tripwires to halt development, was a big part of Anthropic's pitch to businesses and consumers.

“Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not,” Anthropic wrote. Now, its updated policy approaches safety relatively, rather than with strict red lines.

Anthropic's quotes in an interview with Time sound reasonable enough in a vacuum. "We felt that it wouldn't actually help anyone for us to stop training AI models," Jared Kaplan, Anthropic's chief science officer, told Time. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead."

Anthropic CEO Dario Amodei (Photo by David Dee Delgado/Getty Images for The New York Times)

But you could also read those quotes as the latest example of a hot startup’s ethics becoming grayer as its valuation rises. (Remember Google’s old “Don’t be evil” mantra that it later removed from its code of conduct?) The latest versions of Claude have drawn widespread praise, especially in coding. In February, Anthropic raised $30 billion in new investments. It now has a valuation of $380 billion. (Speaking of the competition Kaplan referred to, rival OpenAI is currently valued at over $850 billion.)

In place of its previous tripwires, Anthropic will implement new "Risk Reports" and "Frontier Safety Roadmaps" — disclosure documents designed to give the public transparency where there were once hard lines in the sand.

Anthropic says the change was motivated by a "collective action problem" stemming from the competitive AI landscape and the US's anti-regulatory approach. "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe," the new RSP reads. "The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit."

Defense Secretary Pete Hegseth (Photo by AAron Ontiveroz/The Denver Post)

Neither Anthropic's announcement nor the Time exclusive mentions the elephant in the room: the Pentagon's pressure campaign. On Tuesday, Axios reported that Hegseth told Anthropic CEO Dario Amodei that the company has until Friday to give the military unfettered access to its AI model or face penalties. The company has reportedly offered to adapt its usage policies for the Pentagon. However, it wouldn't allow its model to be used for the mass surveillance of Americans or weapons that fire without human involvement.

If Anthropic doesn't relent, experts say its best bet would be legal action. But will the Pentagon's proposed penalties be enough to scare a profit-driven startup into compliance? Hegseth's threats reportedly include invoking the Defense Production Act, which gives the president authority to direct private companies to prioritize certain contracts in the name of national defense. The military could also sever its contract with Anthropic and designate it as a supply chain risk. That would force other companies working with the Pentagon to certify that Claude isn't included in their workflows.

Claude is the only AI model currently used for the military's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now,” a defense official told Axios. “The problem for these guys is they are that good." Claude was reportedly used in the Maduro raid in Venezuela, a topic Amodei is said to have raised with its partner Palantir.

Time's story about the new RSP included reactions from a nonprofit director focused on AI risks. Chris Painter, director of METR, described the changes as both understandable and perhaps an ill omen. "I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps," he said. However, he also raised concerns that the more flexible RSP could lead to a "frog-boiling" effect. In other words, when safety becomes a gray area, a seemingly never-ending series of rationalizations could take the company down the very dark path it once condemned.

Painter said the new RSP shows that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities. This is more evidence that society is not prepared for the potential catastrophic risks posed by AI."

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-weakens-its-safety-pledge-in-the-wake-of-the-pentagons-pressure-campaign-183436413.html?src=rss

The Pentagon has reportedly given Anthropic until Friday to let it use Claude as it sees fit

Defense Secretary Pete Hegseth will reportedly give Anthropic until Friday to drop certain guardrails for military use, as reported by Axios. The outlet also reported that CEO Dario Amodei met with Hegseth yesterday as the Pentagon ratcheted up pressure on the AI company to give in to its demands.

Anthropic has reportedly been given an ultimatum: either yield to the government's demands to remove limits for certain military applications, or potentially be forced to tailor its AI model to the government's needs under the Defense Production Act.

Anthropic, for its part, has said that while it was willing to adopt certain policies for the Pentagon, it would not allow its model to be used for mass surveillance of Americans or for the development of autonomous weapons.

Claude is currently the only AI model employed in some of the government's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a defense official told Axios.

The Pentagon is reportedly ramping up conversations with OpenAI and Google about using their models for classified work. ChatGPT and Gemini are already approved for unclassified government use. Elon Musk's xAI also recently signed with the DoD to use Grok in classified systems.

This article originally appeared on Engadget at https://www.engadget.com/ai/the-pentagon-has-reportedly-given-anthropic-until-friday-to-let-it-use-claude-as-it-sees-fit-203549467.html?src=rss

Netflix, Disney+ and other major streaming services face stricter UK oversight

Netflix, Disney+, Amazon's Prime Video and other major video-on-demand (VOD) streaming services are set to face stricter regulation in the UK. Platforms with a monthly average of more than 500,000 UK viewers will be deemed "Tier 1" services that are subject to similar oversight as broadcasters like the BBC and ITV under the eye of media watchdog Ofcom.

Streaming services run by public broadcasters like ITVX and Channel 4 will have to abide by the new rules as well. BBC services such as iPlayer are exempt for now as they’re regulated under the Broadcasting Code, which broadcasters have to adhere to. That said, the UK government plans to update the BBC Framework Agreement so that iPlayer is regulated in the same way as Netflix et al. 

The government said the new rules will reflect changes in how people are watching TV. It claimed that 85 percent of people use an on-demand service every month while 67 percent watch live TV. It added that two-thirds of UK households subscribe to at least one of Netflix, Prime Video and Disney.

According to Variety, the rules will not apply to video-sharing platforms such as YouTube, since those are regulated under the Online Safety Act. However, individual channels on such platforms could be subject to the VOD standards code. 

Tier 1 platforms will have to adhere to regulations regarding accuracy and impartiality, while ensuring they shield audiences from “harmful or offensive" material. Ofcom will be able to accept viewer complaints over apparent breaches of such rules and carry out investigations. The watchdog will then be able to take action if it determines that there's been a breach of the VOD standards code. That includes fines of up to £250,000 ($337,000) or five percent of "qualifying revenue" per breach.

A public consultation will help shape the VOD standards code. The public and streaming services will have the chance to weigh in on what the rules should be. The standards code will then come into force a year after Ofcom publishes it. The government says "more than 20" platforms will be subject to the code as things stand.

Separately, a VOD accessibility code will be established to bring streaming services further into line with broadcasters. Tier 1 streaming platforms will have to ensure that at least 80 percent of their total catalogues are subtitled, 10 percent have audio descriptions and five percent are signed. They'll have four years to meet the requirements of the accessibility code. 

"With UK audiences increasingly favoring on-demand platforms over live TV, we want to ensure that no one is left behind, and that everyone can enjoy the huge range of content available on video-on-demand services," Media Minister Ian Murray said in a statement. "Implementing a new Ofcom-regulated accessibility code for our largest video-on-demand services will give people with disabilities impacting their sight or hearing peace of mind that they’ll be able to stream all their favorite films and TV shows long into the future."

The UK government is implementing these rules for streaming services under the Media Act 2024. Currently, platforms including Prime Video, Disney+, Paramount+, Discovery+, Hayu and ITVX are subject to statutory rules that Ofcom enforces. However, the watchdog has no oversight of Netflix as things stand. That platform's European base is in the Netherlands. As such, the Dutch media regulator oversees Netflix instead.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/netflix-disney-and-other-major-streaming-services-face-stricter-uk-oversight-160121268.html?src=rss

Telegram founder Pavel Durov is reportedly under criminal investigation in Russia

Pavel Durov, the founder of Telegram, is reportedly under criminal investigation by Russian authorities for "abetting terrorist activities." According to the Financial Times, state-run publications are accusing Durov of enabling attacks on Russia and Telegram of becoming an intelligence tool for Ukraine and the West. Telegram was one of the apps that Russia blocked in the country just a few days ago, along with WhatsApp, in what seemed to be an effort to push local users toward the unencrypted state-owned app, Max.

When Telegram was banned, pro-Russian voices criticized the country's decision, because it was apparently harming frontline operations. Russia's own soldiers use the app to communicate and coordinate their moves. Authorities near the Ukrainian border, for instance, send out warnings of incoming drone and missile attacks through the messaging app. Even Vladimir Putin's spokesperson uses Telegram to speak to the media.

Now, the Times says Russia is accusing Telegram of being the main instrument for “NATO countries’ secret services and the Kyiv regime.” Rossiiskaya Gazeta, a Russian state-run publication, added that Telegram was “intercepting location data, selling secret information and intimidating soldiers and their families.” Digital platforms like Telegram, the publication said, are “becoming strategic weapons.” Rossiiskaya Gazeta said its information came from Russia’s Federal Security Service, the country’s primary domestic security agency.

Durov has yet to issue a statement, but after Russia blocked access to Telegram, he said the country was "restricting access" to the application to "force its citizens onto a state-controlled app built for surveillance and political censorship." The Telegram founder was born in Russia and co-founded the country's largest social network, VK. He left the country after the Kremlin pressured him to sell his stake in the social network.

This article originally appeared on Engadget at https://www.engadget.com/apps/telegram-founder-pavel-durov-is-reportedly-under-criminal-investigation-in-russia-121000511.html?src=rss

The US military will reportedly use Elon Musk’s Grok AI in its classified systems

The US Department of Defense has reportedly reached a deal to use Elon Musk's Grok in its classified systems, according to Axios. That follows news that the Pentagon is currently in a dispute with another AI company, Anthropic, over limits on its technology for things like mass surveillance.

Last year, the White House ordered Grok, along with ChatGPT, Gemini and Anthropic's Claude, to be approved for government use. Up until now, though, only Anthropic's model has been allowed for the military's most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid in which the US military exfiltrated the country's president, Nicolás Maduro, and his wife. 

However, the Pentagon demanded that Anthropic make Claude available for "all lawful purposes" including mass surveillance and the development of fully autonomous weapons. Anthropic reportedly refused to offer its tech for those things, even with a "safety stack" built into that model. 

xAI, by contrast, agreed to a standard that would allow the DoD to employ its AI for any purpose it deems "lawful." However, the xAI model is not considered by officials to be as cutting-edge or reliable as Anthropic's Claude, and they admit that replacing Claude with Grok would be a challenge. The Pentagon is reportedly also negotiating deals with OpenAI and Google, whose models it considers to be on par with Anthropic's.

xAI had announced a version of Grok for US government agencies in July 2025. Shortly before that, though, the chatbot started spouting fascist propaganda and antisemitic rhetoric while dubbing itself "MechaHitler." All of that followed a public spat between Musk and Trump over the president's spending bill, after which GSA approval of Grok seemed to stall. Earlier this week, Anthropic accused three Chinese AI labs of abusing Claude's AI with "distillation attacks" to improve their own models. 

This article originally appeared on Engadget at https://www.engadget.com/ai/the-us-military-will-reportedly-use-elon-musks-grok-ai-in-its-classified-systems-110049021.html?src=rss

Colorado is working on a bill that would make it illegal to 3D print firearms and gun parts

A collective of Colorado lawmakers wants to put an end to "ghost guns" and their rising popularity. Earlier this week, the state's House Judiciary Committee voted 7-4 to advance the bill, HB26-1144, to the full House of Representatives. The proposed law would "prohibit the use of a three-dimensional printer, or similar technology, to make a firearm or a firearm component."

Ghost guns, typically made with 3D printers or similar machines, lack serial numbers, making them virtually impossible to trace and allowing users to skirt the federal requirements for purchasing a firearm. While the bill targets using a 3D printer to make guns, large-capacity magazines and other related components, it also bans possessing and distributing the instructions to manufacture guns in this way. However, federally licensed firearms manufacturers would be exempt from these rules.

"These ghost guns are increasingly found at crime scenes, making it harder for law enforcement to track down a suspect because the gun isn’t traceable," the bill's sponsor, Lindsay Gilchrist, said in a press release.

Prior to this proposal, Colorado passed a law in 2023 that banned owning ghost guns or making frames for them. While SB23-279 laid the groundwork, HB26-1144 can be seen as the next step since it's much more encompassing by targeting ghost guns even before they're made. According to the bill, first-time violations will be treated as a misdemeanor, while repeat offenses will be upgraded to a felony charge. Looking ahead, HB26-1144 still has to secure a vote from both the Colorado Senate and House of Representatives before being delivered to the governor to be signed into law.

This article originally appeared on Engadget at https://www.engadget.com/science/colorado-is-working-on-a-bill-that-would-make-it-illegal-to-3d-print-firearms-and-gun-parts-211508169.html?src=rss

The US will send Tech Corps members to foreign countries in its latest push for AI dominance

The Peace Corps, the government agency that sends volunteers to serve in foreign countries, has launched its latest initiative, Tech Corps. The program will recruit STEM graduates and people with professional experience in the artificial intelligence sector and send them to participating host countries.

According to the press release, volunteers will be placed in Peace Corps countries that are part of the American AI Exports Program, which was created last year by an executive order from President Trump as a way to bolster the US' grip on the AI market abroad. Tech Corps members will be tasked with using AI to resolve issues related to agriculture, education, health and economic development. The program will offer its members 12- to 27-month in-person assignments or virtual placements; overseas placements will include housing, healthcare, a living stipend and a volunteer service award.

Richard E. Swarttz, the acting director of the Peace Corps, said in a press release that Tech Corps volunteers will be "building technical capacity, supporting AI adoption across critical use cases and addressing barriers to last-mile AI implementation." While the Tech Corps program is framed as benefiting host countries, it would also help to secure the US' position in the rapidly expanding global AI market, which includes growing competition from China.

This article originally appeared on Engadget at https://www.engadget.com/ai/the-us-will-send-tech-corps-members-to-foreign-countries-in-its-latest-push-for-ai-dominance-191916940.html?src=rss

US website ‘freedom.gov’ will allow Europeans to view hate speech and other blocked content

The US State Department is building a web portal where Europeans and anyone else can see online content banned by their governments, according to Reuters. It was supposed to launch at the Munich Security Conference last month, but some State Department officials reportedly voiced concerns about the project. The portal will be hosted at freedom.gov, which currently just shows the image above. "Freedom is Coming," the homepage reads. "Information is power. Reclaim your human right to free expression. Get Ready."

Reuters says officials discussed making a virtual private network function available on the portal that would make visitors' traffic appear to come from the US, so they could see anything unavailable to them. While it's a State Department project, The Guardian has traced the domain to the Cybersecurity and Infrastructure Security Agency (CISA), a component of the US Department of Homeland Security. Homeland Security also administers Immigration and Customs Enforcement (ICE).

The project could drive the wedge between the US and its European allies even deeper. European authorities don't usually order broad censorship that prevents their citizens from accessing large parts of the internet. Typically, they only order the blocking of hate speech, terrorist propaganda, disinformation and anything illegal under the EU's Digital Services Act or the UK's Online Safety Act.

“If the Trump administration is alleging that they’re gonna be bypassing content bans, what they’re gonna be helping users access in Europe is essentially hate speech, pornography, and child sexual abuse material,” Nina Jankowicz, who served as the executive director of Homeland Security’s Disinformation Governance Board, told The Guardian. The board was very short-lived and was disbanded a few months after it was formed, following complaints by Republican lawmakers that it would impinge on people’s rights to free speech.

When asked about the project, the State Department said it didn't have a program specifically meant to circumvent censorship in Europe. But a spokesperson said: "Digital freedom is a priority for the State Department, however, and that includes the proliferation of privacy and censorship-circumvention technologies like VPNs."

This article originally appeared on Engadget at https://www.engadget.com/big-tech/us-website-freedomgov-will-allow-europeans-to-view-hate-speech-and-other-blocked-content-130000014.html?src=rss