Anthropic says it will challenge Defense Department’s supply chain risk designation in court

In a new blog post, Anthropic CEO Dario Amodei confirmed that his company received a letter from the Defense Department officially labeling it a supply chain risk. He said he doesn’t “believe this action is legally sound,” and that his company sees “no choice” but to challenge it in court. Hours before Amodei published the post, the Pentagon announced that it had notified the company that its “products are deemed a supply chain risk, effective immediately.”

If you’ll recall, the Defense Department (called the Department of War under the current administration) threatened to give the company a designation typically reserved for firms from adversaries like China if it didn’t agree to remove its safeguards against mass surveillance and fully autonomous weapons. President Trump then ordered federal agencies to stop using Anthropic’s tech.

Amodei explained that the designation has a narrow scope because it exists only to protect the government. That is why the general public, and even Defense Department contractors, can still use Anthropic’s Claude chatbot and its other AI technologies. Microsoft told CNBC that it will continue using Claude after its lawyers concluded that it could keep working with Anthropic on non-defense-related projects.

The CEO also said that his company had “productive conversations” with the department over the past few days. He said they were looking at ways to serve the Pentagon that adhere to its two exceptions, namely that its technology not be used for mass surveillance or for the development of fully autonomous weapons, and at ways to “ensure a smooth transition if that is not possible.” That confirms reports that Anthropic is back in talks with the agency in an effort to reach a new deal. In addition, he apologized for a leaked internal memo, in which he reportedly said that OpenAI’s messaging about its own deal with the department is “just straight up lies.”

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-says-it-will-challenge-defense-departments-supply-chain-risk-designation-in-court-054459618.html?src=rss

Anthropic is reportedly back in talks with the Defense Department

Anthropic is reportedly trying to reach a new deal with the US Defense Department, which could prevent the government from labeling it a supply chain risk. According to the Financial Times and Bloomberg, Anthropic CEO Dario Amodei has resumed talks with the agency over the use of its AI models. In particular, the publications say that Amodei is having discussions with Emil Michael, the Under Secretary of Defense for Research and Engineering.

The two of them were trying to work out a contract over the use of Anthropic’s models before negotiations broke down and the government soured on the company. The Financial Times reports that they couldn’t agree on language the AI company wanted to see to ensure that its technology would not be used for mass surveillance.

In a memo sent to Anthropic staff, Amodei reportedly said that the department offered to accept the company’s terms if it deleted a specific phrase about “analysis of bulk acquired data.” He continued that it “was the single line in the contract that exactly matched” the scenario it was “most worried about.” Anthropic, which first signed a $200 million deal with the department in 2025, refused to comply with the Pentagon’s demands. The agency then threatened to cancel its existing contract and to label it a “supply chain risk,” a designation typically reserved for Chinese companies. President Trump ordered government agencies to stop using Anthropic’s technology afterward. However, there’s a “six-month phase-out period” that reportedly allowed the government to use Anthropic’s AI tools to stage an air attack on Iran.

Amodei also said in the memo that the messaging OpenAI has been trying to convey is “just straight up lies,” the Times reports. He also hinted that one of the reasons his company is now on the outs with the government is that he hasn’t “given dictator-style praise to Trump” like OpenAI’s Sam Altman has.

If you’ll recall, OpenAI announced that it had reached an agreement with the department shortly after it came out that Anthropic was having issues with the agency. Its CEO, Sam Altman, said on X that he told the government Anthropic shouldn’t be designated a supply chain risk. During an AMA on the social media website, he said he didn’t know the details of Anthropic’s contract, but that if it had been the same as the one OpenAI signed, he thought Anthropic should have agreed to it. Anthropic’s Claude chatbot rose to the top of Apple’s Top Free Apps leaderboard after OpenAI announced its Defense Department contract, beating out ChatGPT.

Altman later posted on X that OpenAI will amend its deal with language that explicitly prohibits the use of its AI systems for mass surveillance of Americans. When it comes to the military’s use of its technology, though, CNBC says that Altman told staffers that the company doesn’t “get to make operational decisions.” In an all-hands meeting, Altman reportedly said: “So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that.”

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-is-reportedly-back-in-talks-with-the-defense-department-125045017.html?src=rss

Google reportedly muzzles Epic Games CEO Tim Sweeney until 2032

Epic Games’ courtroom battle with Google is over, but it’s reportedly going to affect how its CEO, Tim Sweeney, can speak about the tech giant for years to come. According to The Verge, the settlement terms Epic signed include a clause stating that Epic and Sweeney will have to speak positively about Google’s competitiveness and app store operations going forward. “Epic believes that the Google and Android platform, with the changes in this term sheet, are procompetitive and a model for app store / platform operations, and will make good faith efforts to advocate for the same,” the clause reportedly reads.

Further, The Verge says the settlement terms between the companies will expire five years after Google is done rolling out changes to its service fees. Since Google expects to finish implementing changes worldwide by September 30, 2027, Sweeney can’t speak negatively about the app store until after September 30, 2032.

Sweeney is one of the most vocal critics of how Apple and Google operate their app stores, a stance that has led to several lawsuits between the companies. He once called both Apple and Google “gangster-style businesses” that will “always continue” engaging in illegal practices and just pay the fines afterward. Epic Games filed a lawsuit against Google in 2020, accusing it of maintaining an illegal monopoly on app distribution and in-app billing services for Android devices. Google lost the lawsuit in 2023, then lost its appeal two years later, before the companies reached a settlement in November 2025. On March 4 this year, Google officially scrapped the 30 percent cut it takes from Play Store transactions, lowering it to 20 percent, and even to 15 percent in some cases.

In response to Google’s decision, Epic Games is bringing Fortnite back to the Play Store worldwide. “Google is opening up Android all the way with robust support for competing stores, competing payments, and a better deal for all developers. So, we've settled all of our disputes worldwide. THANKS GOOGLE!” Sweeney posted on X. Based on the clause in their settlement, future statements from the CEO about Google will need to carry a similar tone, at least for the next few years.

Update, March 5 2026, 2:13PM ET: Epic reached out to Engadget to share an important clarification: “Criticizing Google is fair game on topics not related to app store distribution/ fees,” the company wrote on X. “Epic and Google agreed to not disparage only on topics about the settlement.” We’ve updated the copy of our story to reflect the specificity of the non-disparagement agreement, and look forward to the ways in which Epic will certainly exercise its remaining capacity to be critical of Google.

This article originally appeared on Engadget at https://www.engadget.com/gaming/google-reportedly-muzzles-epic-games-ceo-tim-sweeney-until-2032-105501644.html?src=rss

TikTok won’t add end-to-end encryption to DMs

If you’ve ever wondered whether TikTok would offer a more secure messaging experience, you now have an answer. TikTok has told the BBC that it will not protect direct messages sent in the app with end-to-end encryption, because it believes the technology would make users less safe. In a briefing about security at its London office, TikTok said that implementing the technology would prevent its safety teams and law enforcement from being able to read messages if needed. The ByteDance-owned app framed it as a deliberate decision, made in an effort to keep users, especially younger ones, safe on its platform.

With end-to-end encryption, only the sender and the receiver are able to read the messages exchanged between them. The technology isn’t typically implemented in China, where ByteDance is located, though TikTok didn’t say whether its parent company influenced its decision. TikTok said messages sent through its app are still protected by standard encryption, and that only authorized employees will be able to access them if the app receives a request from authorities or user reports of harmful behavior.
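The distinction can be illustrated with a toy sketch in Python (standard library only; the prime is far too small for real security and the XOR stream is purely for demonstration, so this is not real cryptography). Each party derives the same key from the other’s public value, so a server that only relays the public values and the ciphertext never sees the plaintext:

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters (illustration only; real systems use
# vetted groups or elliptic curves with far larger key sizes)
P = 0xFFFFFFFB  # the prime 2**32 - 5
G = 5

def keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)  # g^priv mod p — safe to send over the server
    return priv, pub

def shared_key(my_priv, their_pub):
    """Both ends compute g^(ab) mod p, then hash it into a 32-byte key."""
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_cipher(key, data):
    """Toy stream cipher: XOR with a key-derived stream (self-inverse)."""
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Only alice_pub, bob_pub and the ciphertext ever cross the server
ciphertext = xor_cipher(shared_key(alice_priv, bob_pub), b"hello bob")
plaintext = xor_cipher(shared_key(bob_priv, alice_pub), ciphertext)
```

Production protocols such as Signal’s replace every piece of this sketch with hardened primitives, but the shape is the same: the server relays public keys and ciphertext, never private keys or plaintext, which is exactly what would stop TikTok’s safety teams from reading reported messages.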

You have plenty of other apps to choose from if you want end-to-end encrypted messaging. Apple’s iMessage and Google Messages use the technology, as do Facebook Messenger, WhatsApp and Signal, while Telegram offers it in its secret chats. It looks like TikTok just isn’t the place to go for secure messaging, though it’s unclear whether its US entity shares the same stance. If you’ll recall, TikTok signed a deal to spin off its US business, which is now an entity called the TikTok USDS Joint Venture. A group of non-Chinese investors, including Oracle, purchased an 80 percent stake in the app, while ByteDance retained only a 19.9 percent stake. The entity will be in charge of content moderation in the country and will retrain TikTok’s algorithm on US users’ data.

This article originally appeared on Engadget at https://www.engadget.com/apps/tiktok-wont-add-end-to-end-encryption-to-dms-123431502.html?src=rss

Charlie Brown now works for Sony

Sony Music Entertainment Japan and Sony Pictures Entertainment now officially own 80 percent of the Peanuts franchise. The companies have closed the $460 million deal, which was announced in December 2025 pending regulatory approval. Sony Music Japan has owned 39 percent of Peanuts since 2018, so the Sony subsidiaries are essentially buying an additional 41 percent of the franchise from Canadian firm WildBrain with this transaction. Now that the acquisition is done, Peanuts is officially a consolidated Sony subsidiary.

The Peanuts universe started as a comic strip by Charles M. Schulz back in 1950. Its characters, especially Charlie Brown and his pet dog Snoopy, have become household names since then. One cannot say “Good grief!” without associating it with Charlie Brown. The franchise has grown massively since Peanuts’ inception, spawning a bunch of animated series, cartoon musicals and movies, such as A Charlie Brown Christmas and Snoopy The Musical.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/charlie-brown-now-works-for-sony-125619518.html?src=rss

Meta starts testing its AI shopping assistant

Meta has started rolling out an experimental AI shopping tool to some users in the US, according to Bloomberg. At the moment, it’s reportedly only showing up on desktop browsers when select users visit Meta AI on the web. They’ll know they have access to the feature if they see the “Shopping research” button inside the query text box. The company confirmed that it was testing the feature, Bloomberg said, but it didn’t say when a wider release would happen.

When users ask for product suggestions, the chatbot will show them a carousel with product images and pricing, along with a link to the e-commerce website and information about the brand. Meta AI will also include a short explanation of why it recommended each item. If Meta AI can see a user’s information, such as their gender and location data, it can tailor responses to them. Bloomberg said the tool replied with a selection of women’s puffer jackets from shops that ship to New York, based on its tester’s profile. Users cannot check out from within the Meta AI interface, but they can click the links it provides to shop online.

During an earnings call earlier this year, Mark Zuckerberg told investors that Meta would be launching agentic shopping tools. It doesn’t come as a surprise that the company is working on them, given that rival AI companies already offer similar tools. OpenAI rolled out a dedicated shopping assistant for ChatGPT just before Black Friday last year, shortly after Google launched its own shopping tools for Gemini. Perplexity also released an AI shopping assistant around the same time.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-starts-testing-its-ai-shopping-assistant-120148124.html?src=rss

OpenAI will amend Defense Department deal to prevent mass surveillance in the US

OpenAI’s Sam Altman said the company will amend its deal with the Defense Department (or the Department of War) to explicitly prohibit the use of its AI systems for mass surveillance of Americans. Altman published on X an internal memo previously sent to employees, telling them that the company will tweak the agreement to add language that makes that point especially clear. Specifically, it says:

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.

For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Altman also claimed in the memo that the agency affirmed that OpenAI’s services will not be used by US intelligence agencies, including the NSA, without a modification to the contract. He added that if he received what he believed was an unconstitutional order, he would rather go to jail than follow it.

In addition, the OpenAI CEO admitted in the memo that the company shouldn’t have rushed to get the deal out on Friday, February 27, since the issues were “super complex and demand clear communication.” Altman explained that the company was “trying to de-escalate things and avoid a much worse outcome” but it “looked opportunistic” in the end. If you’ll recall, OpenAI announced the partnership shortly after President Trump ordered all US government agencies to stop using Claude and any other Anthropic services. To note, Anthropic started working with the US government in 2024.

The Defense Department and Secretary Pete Hegseth had been pressuring Anthropic to remove its AI’s guardrails so that the technology could be used for all “lawful” purposes, including mass surveillance and the development of fully autonomous weapons. Anthropic refused to bow to Hegseth’s demands, saying in a statement that “no amount of intimidation or punishment” will change its “position on mass domestic surveillance or fully autonomous weapons.” Trump issued his order as a result. The Defense Department had also taken the first steps toward designating Anthropic a “supply chain risk,” a label typically reserved for Chinese companies believed to be working with their country’s government.

Altman said that in his conversations with US officials, he reiterated that Anthropic shouldn’t be designated as a supply chain risk and that he hoped the Defense Department would offer it the same deal OpenAI agreed to. In an AMA session on X over the weekend, Altman clarified that he didn’t know the details of Anthropic’s agreement and how it differed from the one OpenAI signed. But if it had been the same, he thought Anthropic should have agreed to it.

After news broke of OpenAI’s deal, Anthropic’s Claude app climbed its way to the number one spot on the App Store's Top Free Apps leaderboard, beating out both ChatGPT and Google Gemini. Anthropic, capitalizing on Claude’s sudden popularity, launched a memory import tool to make switching to its chatbot from a rival’s easier. Meanwhile, uninstalls for ChatGPT jumped by 295 percent day-over-day, according to Sensor Tower.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-amend-defense-department-deal-to-prevent-mass-surveillance-in-the-us-050637400.html?src=rss

Lenovo unveils the 2026 refresh of its Yoga 9i 2-in-1 convertible laptop at MWC

Lenovo has given the Yoga 9i 2-in-1 Aura Edition a refresh for 2026 and launched the new device at this year’s Mobile World Congress. The convertible laptop comes with a new Canvas Mode, which kicks in when the bundled Yoga Pen Gen 2 case is attached to the A-cover. When you lay the device down on a flat surface with the case attached, the display sits at a slight elevation, which may make it easier to sketch or draw.

The Copilot+ laptop is powered by Intel Core Ultra Series 3 processors with integrated graphics, has up to 32GB of memory and runs Windows 11. Its 14-inch screen has a resolution of 2,880 x 1,800 pixels, has a variable refresh rate of 120 Hz and supports multi-touch. In addition to the new Canvas Mode, the device also supports Tablet, Tent, Stand and traditional Laptop Modes like its predecessors do. The Yoga 9i 2-in-1 Aura Edition Gen 11 will be available in May, with prices starting at $1,949.

Lenovo has also launched the new Yoga Pro 7a at MWC 2026. This Copilot+ laptop is powered by AMD Ryzen AI Max+ Series processors and comes with up to 128GB of RAM, so it can be used for heavy AI tasks. It has a 15.3-inch 2.5K PureSight Pro OLED display and is equipped with a big Force Pad trackpad that doubles as a drawing tablet. You can get the device starting in August this year for at least $2,099.

For a more affordable option, there’s the new IdeaPad Slim 5i Ultra laptop, which also has Copilot+ features. It’s powered by Intel Core Ultra processors and comes with either a WUXGA OLED or a WQXGA IPS LCD 14-inch display that has a VRR of 120 Hz. The device was designed for portability, with its thinnest part measuring just 11.9 mm in depth, and it weighs 2.5 lbs. It will be available starting in October for at least $799.

Another affordable option is the new Idea Tab Pro Gen 2, which is specifically targeted towards students. It’s powered by the Snapdragon 8s Gen 4 Mobile Platform and has a 13-inch 3.5K display. The Tab Pro Gen 2 is Lenovo’s first tablet to ship with its Qira AI assistant and the company’s AI tools. It will be sold with a Lenovo Tab Pen Plus included for $419 starting in July.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/lenovo-unveils-the-2026-refresh-of-its-yoga-9i-2-in-1-convertible-laptop-at-mwc-230100644.html?src=rss

OpenAI strikes a deal with the Defense Department to deploy its AI models

OpenAI has reached an agreement with the Defense Department to deploy its models in the agency’s network, company chief Sam Altman has revealed on X. In his post, he said two of OpenAI’s most important safety principles are “prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.” Altman claimed the company put those principles in its agreement with the agency, which he called by the government’s preferred name of Department of War (DoW), and that it had agreed to honor them.

The agency closed the deal with OpenAI shortly after President Donald Trump ordered all government agencies to stop using Claude and any other Anthropic services. If you’ll recall, US Defense Secretary Pete Hegseth previously threatened to label Anthropic a “supply chain risk” if it continued refusing to remove the guardrails on its AI, which prevent the technology from being used for mass surveillance of Americans and in fully autonomous weapons.

It’s unclear why the government agreed to team up with OpenAI if its models have the same guardrails, but Altman said the company is asking the government to offer the same terms to all the AI companies it works with. Jeremy Lewin, the Senior Official Under Secretary for Foreign Assistance, Humanitarian Affairs, and Religious Freedom, said on X that DoW “references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms” in its contracts. Both OpenAI and xAI, which had also previously signed a deal to deploy Grok in the DoW’s classified systems, agreed to those terms. He said it was the same “compromise that Anthropic was offered, and rejected.”

Anthropic, which started working with the US government in 2024, refused to bow down to Hegseth. In its latest statement, published just hours before Altman announced OpenAI’s agreement, it repeated its stance. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic wrote. “We will challenge any supply chain risk designation in court.”

Altman added in his post on X that OpenAI will build technical safeguards to ensure the company’s models behave as they should, claiming that’s also what the DoW wanted. It’s sending engineers to work with the agency to “ensure [its models’] safety,” and it will only deploy on cloud networks. As The New York Times notes, OpenAI is not yet on Amazon cloud, which the government uses. But that could change soon, as the company has also just announced a partnership with Amazon to run its models on Amazon Web Services (AWS) for enterprise customers.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-strikes-a-deal-with-the-defense-department-to-deploy-its-ai-models-054441785.html?src=rss

OpenAI will notify authorities of credible threats after Canada mass shooter’s second account was discovered

OpenAI has vowed to strengthen its safety protocols and to notify law enforcement of credible threats sooner in a letter addressed to Canadian authorities, according to Politico and The Washington Post. If you’ll recall, Canadian politicians summoned the company’s leaders after reports came out that it didn’t notify authorities when it banned the account owned by the Tumbler Ridge, British Columbia, mass shooting suspect back in 2025. Some of OpenAI’s leaders have already met with Canadian officials, and British Columbia Premier David Eby said Sam Altman had also agreed to meet with him.

While OpenAI has yet to announce changes to its rules, Ann O’Leary, its vice president of global policy, reportedly wrote in the letter that the company will tweak its detection systems so that they can better prevent banned users from coming back to the platform. Apparently, after OpenAI banned the shooter’s original account due to “potential warnings of committing real-world violence,” the perpetrator was able to create another account. The company only discovered the second account after the shooter’s name was released, and it has since notified authorities.

Further, OpenAI will now notify authorities if it detects “imminent and credible” threats in ChatGPT conversations, even if the user doesn’t reveal “a target, means, and timing of planned violence.” O’Leary explained that if the new rules had been in effect when the shooter’s account was banned in 2025, the company would have notified the police. OpenAI will also establish a point of contact for Canadian law enforcement so it can quickly share information with authorities when needed.

The Canadian government sees OpenAI’s decision not to report the shooter’s original account as a failure. It has threatened to regulate AI chatbots in the country if their creators cannot show that they have proper safeguards in place to protect users. It’s unclear at the moment whether OpenAI also plans to roll out the same changes in the US and elsewhere in the world.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-notify-authorities-of-credible-threats-after-canada-mass-shooters-second-account-was-discovered-112706548.html?src=rss