In a lengthy post on Truth Social on February 27, President Trump ordered all federal agencies to "immediately cease all use of Anthropic's technology" following strong disagreements between the Department of Defense and the AI company. A few hours later, the US conducted a major air attack on Iran with the help of Anthropic's AI tools, according to a report from The Wall Street Journal.
The president noted in his post that there would be a "six-month phase-out period for agencies like the Department of War who are using Anthropic’s products," so federal agencies are still expected to eventually move away from Claude and other Anthropic tech. It's also not the first time the US has used Anthropic's AI for a major military operation: the WSJ previously reported that Claude was used in the capture of the now-removed Venezuelan president Nicolás Maduro.
Moving forward, the Department of Defense may begin transitioning to other AI options, especially now that it has reached deals with both xAI and OpenAI to use their models within the agency's network. However, the WSJ reported that it would take months to replace Anthropic's Claude with other AI models.
This article originally appeared on Engadget at https://www.engadget.com/ai/the-us-reportedly-used-anthropics-ai-for-its-attack-on-iran-just-after-banning-it-172908929.html?src=rss
OpenAI has reached an agreement with the Defense Department to deploy its models in the agency’s network, company chief Sam Altman has revealed on X. In his post, he said two of OpenAI’s most important safety principles are “prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.” Altman claimed the company put those principles in its agreement with the agency, which he called by the government’s preferred name of Department of War (DoW), and that it had agreed to honor them.
The agency closed the deal with OpenAI shortly after President Donald Trump ordered all government agencies to stop using Claude and any other Anthropic services. If you’ll recall, US Defense Secretary Pete Hegseth previously threatened to label Anthropic a “supply chain risk” if it continued refusing to remove the guardrails on its AI, which prevent the technology from being used for mass surveillance against Americans and in fully autonomous weapons.
It’s unclear why the government agreed to team up with OpenAI if its models have the same guardrails, but Altman said the company is asking the government to offer the same terms to all the AI companies it works with. Jeremy Lewin, the Senior Official Under Secretary for Foreign Assistance, Humanitarian Affairs, and Religious Freedom, said on X that the DoW “references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms” in its contracts. Both OpenAI and xAI, which had also previously signed a deal to deploy Grok in the DoW’s classified systems, agreed to those terms. He said it was the same “compromise that Anthropic was offered, and rejected.”
Anthropic, which started working with the US government in 2024, refused to bow down to Hegseth. In its latest statement, published just hours before Altman announced OpenAI’s agreement, it repeated its stance. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic wrote. “We will challenge any supply chain risk designation in court.”
Altman added in his post on X that OpenAI will build technical safeguards to ensure the company’s models behave as they should, claiming that’s also what the DoW wanted. It’s sending engineers to work with the agency to “ensure [its models’] safety,” and it will only deploy on cloud networks. As The New York Times notes, OpenAI is not yet on Amazon's cloud, which the government uses. But that could change soon, as the company has also just announced a partnership with Amazon to run its models on Amazon Web Services (AWS) for enterprise customers.
“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman wrote. “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
This article originally appeared on Engadget at https://www.engadget.com/ai/openai-strikes-a-deal-with-the-defense-department-to-deploy-its-ai-models-054441785.html?src=rss
President Donald Trump has ordered all US government agencies to stop using Claude and other Anthropic services, escalating an already volatile feud between the Department of Defense and the company over AI safeguards. Taking to Truth Social on Friday afternoon, the president said there would be a six-month phase-out period for federal agencies, including the Defense Department, to migrate off of Anthropic's products.
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” the president wrote. “Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”
Before today, US Defense Secretary Pete Hegseth had threatened to label Anthropic a “supply chain risk” if it did not agree to withdraw safeguards that insist Claude not be used for mass surveillance against Americans or in fully autonomous weapons. In a post on X published after President Trump’s statement, Hegseth said he was “directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Anthropic did not immediately respond to Engadget's request for comment. Earlier in the day, a spokesperson for the company said the contract Anthropic received after CEO Dario Amodei outlined the company's position made “virtually no progress” on preventing the outlined misuses.
"New language framed as a compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months," the spokesperson said. "We remain ready to continue talks and committed to operational continuity for the Department and America's warfighters."
Advocacy groups like the Center for Democracy and Technology (CDT) quickly came out against the president’s threats. “This action sets a dangerous precedent. It chills private companies’ ability to engage frankly with the government about appropriate uses of their technology, which is especially important in national security settings that so often have reduced public visibility,” said CDT President and CEO Alexandra Givens, in a statement shared with Engadget. “These threats undermine the integrity of the innovation ecosystem, distort market incentives and normalize an expansive view of executive power that should worry Americans all across the political spectrum.”
For now, it appears the AI industry is united behind Anthropic. On Friday, hundreds of Google and OpenAI employees signed an open letter urging their companies to stand in "solidarity" with the lab. According to an internal memo seen by Axios, OpenAI CEO Sam Altman said the ChatGPT maker would draw the same red line as Anthropic.
In a blog post published late on Friday, Anthropic vowed to “challenge any supply chain risk designation in court,” and assured its customers that only work related to the Defense Department would be affected. The company's full statement is available here; an excerpt is below:
Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.
We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.
No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.
Update, February 27, 9PM ET: This story was updated twice after publication. First, at 6PM ET, to include a link to and quotes from Hegseth about the designation of Anthropic as a supply chain risk. Later, a quote from Anthropic was added, along with a link to the company’s blog post on the subject.
This article originally appeared on Engadget at https://www.engadget.com/ai/trump-orders-federal-agencies-to-drop-anthropic-services-amid-pentagon-feud-222029306.html?src=rss
Despite an ultimatum from Defense Secretary Pete Hegseth, Anthropic said that it can't "in good conscience" comply with a Pentagon edict to remove guardrails on its AI, CEO Dario Amodei wrote in a blog post. The Department of Defense had threatened to cancel a $200 million contract and label Anthropic a "supply chain risk" if it didn't agree to remove safeguards over mass surveillance and autonomous weapons.
"Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place," Amodei said. "We remain ready to continue our work to support the national security of the United States."
In response, US Under Secretary of Defense Emil Michael accused Amodei in a post on X of wanting "nothing more than to try to personally control the US military and is OK putting our nation's safety at risk."
The standoff began when the Pentagon demanded that Anthropic make its Claude AI product available for "all lawful purposes" — including mass surveillance and the development of fully autonomous weapons that can kill without human supervision. Anthropic refused to offer its tech for those purposes, even with a "safety stack" built into the model.
Yesterday, Axios reported that Hegseth gave Anthropic a deadline of 5:01 PM on Friday to agree to the Pentagon's terms. At the same time, the DoD requested an assessment of its reliance on Claude, an initial step toward potentially labeling Anthropic as a "supply chain risk" — a designation usually reserved for firms from adversaries like China and "never before applied to an American company," Anthropic wrote.
Amodei declined to change his stance and stated that if the Pentagon chose to offboard Anthropic, "we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations or other critical missions." Grok is one of the other providers the DoD is reportedly considering, along with Google's Gemini and OpenAI.
It may not be that simple for the military to disentangle itself from Claude, however. Up until now, Anthropic's model has been the only one allowed for the military's most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid in which the US military exfiltrated the country's president, Nicolás Maduro, and his wife.
AI companies have been widely criticized for potential harm to users, but mass surveillance and weapons development would clearly take that to a new level. Anthropic's pending reply to the Pentagon was seen as a test of its claim to be the most safety-forward AI company, particularly after it dropped its flagship safety pledge a few days ago. Now that Amodei has responded, the focus shifts to the Pentagon to see if it follows through on its threats, which could seriously harm Anthropic.
This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-refuses-to-bow-to-pentagon-despite-hegseths-threats-085553126.html?src=rss
Canadian officials summoned leaders from OpenAI to Ottawa this week to address safety concerns about ChatGPT. The crux of the government concerns was that OpenAI did not notify authorities when it banned the account of a user who allegedly committed a mass shooting in British Columbia earlier this month.
"The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they're not forthcoming very quickly, the government is going to be making changes," Justice Minister Sean Fraser said of the company and its AI chatbot. It's unclear what those government-led changes or rules might be. There have been two previous, unsuccessful attempts to pass an online harms act in Canada.
A recent report by The Wall Street Journal claimed that in 2025, some OpenAI employees flagged the account of the alleged shooter, Jesse Van Rootselaar, as containing potential warning signs of real-world violence and called for leadership to notify law enforcement. Although Van Rootselaar's account was banned for policy violations, a company rep said the account activity did not meet OpenAI's criteria for contacting local police.
“Those reports were deeply disturbing, reports saying that OpenAI did not contact law enforcement in a timely manner," said Canadian Artificial Intelligence Minister Evan Solomon ahead of the discussion with company leaders. "We will have a sit-down meeting to have an explanation of their safety protocols and when they escalate and their thresholds of escalation to police, so we have a better understanding of what’s happening and what they do."
OpenAI has been implicated in multiple wrongful death suits. In a December 2025 lawsuit, the company's ChatGPT was accused of encouraging a man's "paranoid beliefs" before he killed his mother and himself. It is also at the center of one of several wrongful death lawsuits against the makers of AI chatbots for helping teenagers plan and die by suicide.
This article originally appeared on Engadget at https://www.engadget.com/ai/canadian-government-demands-safety-changes-from-openai-204924604.html?src=rss
Two stories about the Claude maker Anthropic broke on Tuesday that, when combined, arguably paint a chilling picture. First, US Defense Secretary Pete Hegseth is reportedly pressuring Anthropic to drop its AI safeguards and give the military unrestrained access to its Claude AI chatbot. The company then chose the same day the Hegseth news broke to walk back its centerpiece safety pledge.
On Tuesday, Anthropic said it was modifying its Responsible Scaling Policy (RSP) to lower safety guardrails. Up until now, the company's core pledge has been to stop training new AI models unless specific safety guidelines can be guaranteed in advance. This policy, which set hard tripwires to halt development, was a big part of Anthropic's pitch to businesses and consumers.
“Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not,” Anthropic wrote. Now, its updated policy approaches safety relatively, rather than with strict red lines.
Anthropic's quotes in an interview with Time sound reasonable enough in a vacuum. "We felt that it wouldn't actually help anyone for us to stop training AI models," Jared Kaplan, Anthropic's chief science officer, told Time. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead."
Anthropic CEO Dario Amodei (Photo by David Dee Delgado/Getty Images for The New York Times)
But you could also read those quotes as the latest example of a hot startup’s ethics becoming grayer as its valuation rises. (Remember Google’s old “Don’t be evil” mantra that it later removed from its code of conduct?) The latest versions of Claude have drawn widespread praise, especially in coding. In February, Anthropic raised $30 billion in new investments. It now has a valuation of $380 billion. (Speaking of the competition Kaplan referred to, rival OpenAI is currently valued at over $850 billion.)
In place of its previous tripwires, Anthropic will implement new "Risk Reports" and "Frontier Safety Roadmaps." These disclosure mechanisms are designed to provide transparency to the public in place of those hard lines in the sand.
Anthropic says the change was motivated by a "collective action problem" stemming from the competitive AI landscape and the US's anti-regulatory approach. "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe," the new RSP reads. "The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit."
Defense Secretary Pete Hegseth (Photo by AAron Ontiveroz/The Denver Post via Getty Images)
Neither Anthropic's announcement nor the Time exclusive mentions the elephant in the room: the Pentagon's pressure campaign. On Tuesday, Axios reported that Hegseth told Anthropic CEO Dario Amodei that the company has until Friday to give the military unfettered access to its AI model or face penalties. The company has reportedly offered to adapt its usage policies for the Pentagon. However, it won't allow its model to be used for the mass surveillance of Americans or in weapons that fire without human involvement.
If Anthropic doesn't relent, experts say its best bet would be legal action. But will the Pentagon's proposed penalties be enough to scare a profit-driven startup into compliance? Hegseth's threats reportedly include invoking the Defense Production Act, which gives the president authority to direct private companies to prioritize certain contracts in the name of national defense. The military could also sever its contract with Anthropic and designate it as a supply chain risk. That would force other companies working with the Pentagon to certify that Claude isn't included in their workflows.
Claude is the only AI model currently used for the military's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now,” a defense official told Axios. “The problem for these guys is they are that good." Claude was reportedly used in the Maduro raid in Venezuela, a topic Amodei is said to have raised with its partner Palantir.
Time's story about the new RSP included reactions from a nonprofit director focused on AI risks. Chris Painter, director of METR, described the changes as both understandable and perhaps an ill omen. "I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps," he said. However, he also raised concerns that the more flexible RSP could lead to a "frog-boiling" effect. In other words, when safety becomes a gray area, a seemingly never-ending series of rationalizations could take the company down the very dark path it once condemned.
Painter said the new RSP shows that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities. This is more evidence that society is not prepared for the potential catastrophic risks posed by AI."
This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-weakens-its-safety-pledge-in-the-wake-of-the-pentagons-pressure-campaign-183436413.html?src=rss
Defense Secretary Pete Hegseth has reportedly given Anthropic until Friday to drop certain guardrails for military use, according to Axios. The outlet also reported that CEO Dario Amodei met with Hegseth yesterday as the Pentagon ratcheted up pressure on the AI company to give in to its demands.
The makers of Claude have reportedly been offered an ultimatum: Either yield to the government's demands to remove limits for certain military applications, or potentially be forced to tailor its AI model to the government's needs under the Defense Production Act.
Anthropic, for its part, has said that while it was willing to adopt certain policies for the Pentagon, it would not allow its model to be used for mass surveillance of Americans or for the development of autonomous weapons.
Claude is currently the only AI model employed in some of the government's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a defense official told Axios.
The Pentagon is reportedly ramping up conversations with OpenAI and Google about using their models for classified work. ChatGPT and Gemini are already approved for unclassified government use. Elon Musk's xAI also recently signed with the DoD to use Grok in classified systems.
This article originally appeared on Engadget at https://www.engadget.com/ai/the-pentagon-has-reportedly-given-anthropic-until-friday-to-let-it-use-claude-as-it-sees-fit-203549467.html?src=rss
Netflix, Disney+, Amazon's Prime Video and other major video-on-demand (VOD) streaming services are set to face stricter regulation in the UK. Platforms with a monthly average of more than 500,000 UK viewers will be deemed "Tier 1" services that are subject to similar oversight as broadcasters like the BBC and ITV under the eye of media watchdog Ofcom.
Streaming services run by public broadcasters like ITVX and Channel 4 will have to abide by the new rules as well. BBC services such as iPlayer are exempt for now as they’re regulated under the Broadcasting Code, which broadcasters have to adhere to. That said, the UK government plans to update the BBC Framework Agreement so that iPlayer is regulated in the same way as Netflix et al.
The government said the new rules will reflect changes in how people are watching TV. It claimed that 85 percent of people use an on-demand service every month while 67 percent watch live TV. It added that two-thirds of UK households subscribe to at least one of Netflix, Prime Video and Disney+.
According to Variety, the rules will not apply to video-sharing platforms such as YouTube, since those are regulated under the Online Safety Act. However, individual channels on such platforms could be subject to the VOD standards code.
Tier 1 platforms will have to adhere to regulations regarding accuracy and impartiality, while ensuring they shield audiences from “harmful or offensive" material. Ofcom will be able to accept viewer complaints over apparent breaches of such rules and carry out investigations. The watchdog will then be able to take action if it determines that there's been a breach of the VOD standards code. That includes fines of up to £250,000 ($337,000) or five percent of "qualifying revenue" per breach.
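For a concrete sense of the penalty math, here is a minimal sketch in Python. The function name is hypothetical, and the assumption that the applicable ceiling is the greater of the flat £250,000 and the five percent revenue figure is our reading of comparable Ofcom regimes, not language quoted from the Media Act.

```python
def max_fine_per_breach(qualifying_revenue_gbp: float) -> float:
    """Illustrative cap on an Ofcom fine per breach of the VOD standards
    code: up to £250,000 or five percent of "qualifying revenue."

    Assumption (not confirmed by the source): the cap is whichever
    figure is greater, mirroring comparable Ofcom penalty regimes.
    """
    FLAT_CAP_GBP = 250_000
    REVENUE_CAP_RATE = 0.05
    return max(FLAT_CAP_GBP, REVENUE_CAP_RATE * qualifying_revenue_gbp)

# A service with £100M in qualifying revenue faces a £5M cap per breach,
# while a £1M service is capped at the flat £250,000.
print(max_fine_per_breach(100_000_000))  # 5000000.0
print(max_fine_per_breach(1_000_000))    # 250000.0
```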
A public consultation will help shape the VOD standards code. The public and streaming services will have the chance to weigh in on what the rules should be. The standards code will then come into force a year after Ofcom publishes it. The government says "more than 20" platforms will be subject to the code as things stand.
Separately, a VOD accessibility code will be established to bring streaming services further into line with broadcasters. Tier 1 streaming platforms will have to ensure that at least 80 percent of their total catalogues are subtitled, 10 percent have audio descriptions and five percent are signed. They'll have four years to meet the requirements of the accessibility code.
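To illustrate how a platform might audit its catalogue against those thresholds, here is a hedged Python sketch; the data shape, function name and example figures are hypothetical, not drawn from the accessibility code itself.

```python
# Accessibility thresholds described above: at least 80 percent of a
# Tier 1 catalogue subtitled, 10 percent audio-described, 5 percent signed.
ACCESSIBILITY_THRESHOLDS = {
    "subtitled": 0.80,
    "audio_described": 0.10,
    "signed": 0.05,
}

def meets_accessibility_code(catalogue_size: int, counts: dict) -> dict:
    """Return, per requirement, whether the share of titles meeting it
    clears the threshold. The input format is illustrative only."""
    return {
        name: counts.get(name, 0) / catalogue_size >= share
        for name, share in ACCESSIBILITY_THRESHOLDS.items()
    }

# Example: a 10,000-title catalogue with 8,200 subtitled, 950 audio-described
# and 480 signed titles passes on subtitles but falls short on the other two.
print(meets_accessibility_code(
    10_000, {"subtitled": 8_200, "audio_described": 950, "signed": 480}
))  # {'subtitled': True, 'audio_described': False, 'signed': False}
```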
"With UK audiences increasingly favoring on-demand platforms over live TV, we want to ensure that no one is left behind, and that everyone can enjoy the huge range of content available on video-on-demand services," Media Minister Ian Murray said in a statement. "Implementing a new Ofcom-regulated accessibility code for our largest video-on-demand services will give people with disabilities impacting their sight or hearing peace of mind that they’ll be able to stream all their favorite films and TV shows long into the future."
The UK government is implementing these rules for streaming services under the Media Act 2024. Currently, platforms including Prime Video, Disney+, Paramount+, Discovery+, Hayu and ITVX are subject to statutory rules that Ofcom enforces. However, the watchdog has no oversight of Netflix as things stand. That platform's European base is in the Netherlands. As such, the Dutch media regulator oversees Netflix instead.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/netflix-disney-and-other-major-streaming-services-face-stricter-uk-oversight-160121268.html?src=rss
Pavel Durov, the founder of Telegram, is reportedly under criminal investigation by Russian authorities for “abetting terrorist activities.” According to the Financial Times, state-run publications are accusing Durov of enabling attacks on Russia and Telegram of becoming an intelligence tool for Ukraine and the west. Telegram was one of the apps that Russia blocked in the country just a few days ago, along with WhatsApp, in what seemed to be an effort to push local users towards the unencrypted state-owned app, Max.
When Telegram was banned, pro-Russian voices criticized the country’s decision, because it was apparently harming frontline operations. Russia’s own soldiers use the app to communicate and coordinate their moves. Authorities near the Ukrainian border, for instance, send out warnings of incoming drone and missile attacks through the messaging app. Even Vladimir Putin’s spokesperson uses Telegram to speak to the media.
Now, the FT says Russia is accusing Telegram of being the main instrument for “NATO countries’ secret services and the Kyiv regime.” Rossiiskaya Gazeta, a Russian state-run publication, added that Telegram was “intercepting location data, selling secret information and intimidating soldiers and their families.” Digital platforms like Telegram, the publication said, are “becoming strategic weapons.” Rossiiskaya Gazeta said its information came from Russia’s Federal Security Service, the country’s primary domestic security agency.
Durov has yet to issue a statement, but after Russia blocked access to Telegram, he said the country was “restricting access” to the application to “force its citizens onto a state-controlled app built for surveillance and political censorship.” The Telegram founder was born in Russia and co-founded the country’s largest social network, VK. He left the country after the Kremlin pressured him to sell his stake in the social network.
This article originally appeared on Engadget at https://www.engadget.com/apps/telegram-founder-pavel-durov-is-reportedly-under-criminal-investigation-in-russia-121000511.html?src=rss
The US Department of Defense has reportedly reached a deal to use Elon Musk's Grok in its classified systems, according to Axios. That follows news that the Pentagon is currently in a dispute with another AI company, Anthropic, over limits on its technology for things like mass surveillance.
Last year, the White House ordered Grok, along with ChatGPT, Gemini and Anthropic's Claude, to be approved for government use. Up until now, though, only Anthropic's model has been allowed for the military's most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid in which the US military exfiltrated the country's president, Nicolás Maduro, and his wife.
However, the Pentagon demanded that Anthropic make Claude available for "all lawful purposes" including mass surveillance and the development of fully autonomous weapons. Anthropic reportedly refused to offer its tech for those things, even with a "safety stack" built into that model.
xAI, by contrast, agreed to a standard that would allow the DoD to employ its AI for any purpose it deems "lawful." However, officials don't consider the xAI model to be as cutting-edge or reliable as Anthropic's Claude, and they admit that replacing Claude with Grok would be a challenge. The Pentagon is reportedly also negotiating deals with OpenAI and Google, whose models it considers to be on par with Anthropic's.
xAI had announced a version of Grok for US government agencies in July 2025. Shortly before that, though, the chatbot started spouting fascist propaganda and antisemitic rhetoric while dubbing itself "MechaHitler." All of that followed a public spat between Musk and Trump over the president's spending bill, after which GSA approval of Grok seemed to stall. Earlier this week, Anthropic accused three Chinese AI labs of abusing Claude with "distillation attacks" to improve their own models.
This article originally appeared on Engadget at https://www.engadget.com/ai/the-us-military-will-reportedly-use-elon-musks-grok-ai-in-its-classified-systems-110049021.html?src=rss