OpenAI will amend Defense Department deal to prevent mass surveillance in the US

OpenAI’s Sam Altman said the company will amend its deal with the Defense Department (or the Department of War) to explicitly prohibit the use of its AI system for mass surveillance of Americans. Altman published on X an internal memo he had previously sent to employees, telling them that the company will tweak the agreement to add language making that point especially clear. Specifically, it says:

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.

For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Altman also claimed in the memo that the Defense Department affirmed that OpenAI’s services will not be used by its intelligence agencies, including the NSA, without a modification to the contract. He added that if he received what he believed was an unconstitutional order, he would rather go to jail than follow it.

In addition, the OpenAI CEO admitted in the memo that the company shouldn’t have rushed to get the deal out on Friday, February 27, since the issues were “super complex and demand clear communication.” Altman explained that the company was “trying to de-escalate things and avoid a much worse outcome” but it “looked opportunistic” in the end. If you’ll recall, OpenAI announced the partnership shortly after President Trump ordered all US government agencies to stop using Claude and any other Anthropic services. Anthropic, for its part, started working with the US government in 2024.

The Defense Department and Secretary Pete Hegseth had been pressuring Anthropic to remove its AI’s guardrails so that it could be used for all “lawful” purposes. Those include mass surveillance and the development of fully autonomous weapons. Anthropic refused to bow down to Hegseth’s demands and said in a statement that “no amount of intimidation or punishment” will change its “position on mass domestic surveillance or fully autonomous weapons.” Trump issued the order as a result. The Defense Department had also taken the first steps to designate Anthropic as a “supply chain risk,” a label typically reserved for Chinese companies believed to be working with their country’s government.

Altman said that in his conversations with US officials, he reiterated that Anthropic shouldn’t be designated as a supply chain risk and that he hoped the Defense Department would offer it the same deal OpenAI agreed to. In an AMA session on X over the weekend, Altman clarified that he didn’t know the details of Anthropic’s agreement or how it differed from the one OpenAI signed. But if it was the same, he said, he thought Anthropic should have agreed to it.

After news of OpenAI’s deal broke, Anthropic’s Claude climbed to the number one spot on the App Store's Top Free Apps leaderboard, beating out both ChatGPT and Google Gemini. Anthropic, capitalizing on Claude’s sudden popularity, launched a memory import tool to make switching to its chatbot from another company’s easier. Meanwhile, uninstalls of ChatGPT jumped by 295 percent day-over-day, according to Sensor Tower.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-amend-defense-department-deal-to-prevent-mass-surveillance-in-the-us-050637400.html?src=rss

Lenovo unveils the 2026 refresh of its Yoga 9i 2-in-1 convertible laptop at MWC

Lenovo has given the Yoga 9i 2-in-1 Aura Edition a refresh for 2026 and launched the new device at this year’s Mobile World Congress. The convertible laptop gains a new Canvas Mode, which kicks in when the bundled Yoga Pen Gen 2 case is attached to the A-cover. When you lay the device down on a flat surface with the case attached, the case props up the display slightly, which may make it easier to sketch or draw.

The Copilot+ laptop is powered by Intel Core Ultra Series 3 processors with integrated graphics, has up to 32GB of memory and runs Windows 11. Its 14-inch screen has a resolution of 2,880 x 1,800 pixels, a 120Hz variable refresh rate and multi-touch support. In addition to the new Canvas Mode, the device also supports Tablet, Tent, Stand and traditional Laptop modes like its predecessors do. The Yoga 9i 2-in-1 Aura Edition Gen 11 will be available in May, with prices starting at $1,949.

Lenovo has also launched the new Yoga Pro 7a at MWC 2026. This Copilot+ laptop is powered by AMD Ryzen AI Max+ Series processors and comes with up to 128GB of RAM, so it can be used for heavy AI tasks. It has a 15.3-inch 2.5K PureSight Pro OLED display and is equipped with a big Force Pad trackpad that doubles as a drawing tablet. You can get the device starting in August this year for at least $2,099.

For a more affordable option, there’s the new IdeaPad Slim 5i Ultra laptop, which also has Copilot+ features. It’s powered by Intel Core Ultra processors and comes with either a WUXGA OLED or a WQXGA IPS LCD 14-inch display that has a VRR of 120 Hz. The device was designed for portability, with its thinnest part measuring just 11.9 mm in depth, and weighs 2.5 lbs. It will be available starting in October for at least $799.

Another affordable option is the new Idea Tab Pro Gen 2, which is specifically targeted towards students. It’s powered by the Snapdragon 8s Gen 4 Mobile Platform and has a 13-inch 3.5K display. The Tab Pro Gen 2 is Lenovo’s first tablet to ship with its Qira AI assistant and the company’s AI tools. It will be sold with a Lenovo Tab Pen Plus included for $419 starting in July.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/lenovo-unveils-the-2026-refresh-of-its-yoga-9i-2-in-1-convertible-laptop-at-mwc-230100644.html?src=rss

OpenAI strikes a deal with the Defense Department to deploy its AI models

OpenAI has reached an agreement with the Defense Department to deploy its models in the agency’s network, company chief Sam Altman has revealed on X. In his post, he said two of OpenAI’s most important safety principles are “prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.” Altman claimed the company put those principles in its agreement with the agency, which he called by the government’s preferred name of Department of War (DoW), and that it had agreed to honor them.

The agency closed the deal with OpenAI shortly after President Donald Trump ordered all government agencies to stop using Claude and any other Anthropic services. If you’ll recall, US Defense Secretary Pete Hegseth previously threatened to label Anthropic a “supply chain risk” if it continued refusing to remove the guardrails on its AI, which prevent the technology from being used for mass surveillance of Americans and in fully autonomous weapons.

It’s unclear why the government agreed to team up with OpenAI if its models have the same guardrails, but Altman said OpenAI is asking the government to offer the same terms to all the AI companies it works with. Jeremy Lewin, the Senior Official Under Secretary for Foreign Assistance, Humanitarian Affairs, and Religious Freedom, said on X that the DoW “references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms” in its contracts. Both OpenAI and xAI, which had previously signed a deal to deploy Grok in the DoW’s classified systems, agreed to those terms. He said it was the same “compromise that Anthropic was offered, and rejected.”

Anthropic, which started working with the US government in 2024, refused to bow down to Hegseth. In its latest statement, published just hours before Altman announced OpenAI’s agreement, it repeated its stance. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic wrote. “We will challenge any supply chain risk designation in court.”

Altman added in his post on X that OpenAI will build technical safeguards to ensure the company’s models behave as they should, claiming that’s also what the DoW wanted. It’s sending engineers to work with the agency to “ensure [its models’] safety,” and it will only deploy on cloud networks. As The New York Times notes, OpenAI is not yet on Amazon’s cloud, which the government uses. But that could change soon, as the company has also just announced a partnership with Amazon to run its models on Amazon Web Services (AWS) for enterprise customers.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-strikes-a-deal-with-the-defense-department-to-deploy-its-ai-models-054441785.html?src=rss

OpenAI will notify authorities of credible threats after Canada mass shooter’s second account was discovered

In a letter to Canadian authorities, OpenAI vowed to strengthen its safety protocols and to notify law enforcement of credible threats sooner, according to Politico and The Washington Post. If you’ll recall, Canadian politicians summoned the company’s leaders after reports came out that it didn’t notify authorities when it banned an account owned by the Tumbler Ridge, British Columbia mass shooting suspect back in 2025. Some of OpenAI’s leaders have already met with Canadian officials, and British Columbia Premier David Eby said Sam Altman had also agreed to meet with him.

While OpenAI has yet to announce changes to its rules, Ann O’Leary, its vice president of global policy, reportedly wrote in the letter that the company will tweak its detection systems so that they can better prevent banned users from coming back to the platform. Apparently, after OpenAI banned the shooter’s original account due to “potential warnings of committing real-world violence,” the perpetrator was able to create another account. The company only discovered the second account after the shooter’s name was released, and it has since notified authorities.

Further, OpenAI will now notify authorities if it detects “imminent and credible” threats in ChatGPT conversations, even if the user doesn’t reveal “a target, means, and timing of planned violence.” O’Leary explained that if the new rules had been in effect when the shooter’s account was banned in 2025, the company would have notified the police. OpenAI will also establish a point of contact for Canadian law enforcement so it can quickly share information with authorities when needed.

The Canadian government sees OpenAI’s decision not to report the shooter’s original account as a failure. It has threatened to regulate AI chatbots in the country if their creators can’t show that they have proper safeguards in place to protect users. It’s unclear at the moment if OpenAI also plans to roll out the same changes in the US and elsewhere in the world.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-notify-authorities-of-credible-threats-after-canada-mass-shooters-second-account-was-discovered-112706548.html?src=rss

Google’s Nano Banana 2 is a faster version of Nano Banana Pro

Google has launched its new image generation model, the Nano Banana 2, which is powered by Gemini 3.1 Flash Image. The company says the new model has the capabilities, world knowledge and reasoning of Nano Banana Pro, but it can accomplish tasks at “lightning-fast speed.” That enables rapid editing and the quick creation of various iterations using a single prompt.

Nano Banana 2 will give more people access to capabilities that were previously exclusive to the Pro model. That includes Pro’s ability to pull real-time information and images from web searches to create, say, infographics and diagrams. It will also be able to generate text on images for marketing materials and greeting cards.

Google says Nano Banana 2 can maintain character resemblance for up to five characters in a single workflow, which could be especially valuable if you’re using it to create storyboards or visual stories. It can follow precise instructions for complex requests as well, and can generate images at up to 4K resolution with richer textures and sharper details than its predecessors could.

Nano Banana Pro could already generate images so realistic, it’s almost impossible to tell that they were AI-generated. Google even had to limit its use due to high demand. Whether Nano Banana 2 can generate images that are markedly better than what Pro could create — and whether we could still tell if an image was made by AI — remains to be seen. The new model will replace Nano Banana Pro in the Gemini app, but Google AI Pro and Ultra subscribers will retain access to Nano Banana Pro for specialized tasks. It will also be the default model in Search for AI Mode and Lens, as well as in Google’s Flow AI creative studio.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-nano-banana-2-is-a-faster-version-of-nano-banana-pro-160000695.html?src=rss

NY AG: Valve’s loot boxes can get kids hooked on gambling

New York Attorney General Letitia James has accused Valve of promoting illegal gambling through its video games in a lawsuit filed by her office. According to the AG’s announcement, her office conducted an investigation and concluded that Valve enabled gambling by enticing users to pay for a chance at rare items from loot boxes in Counter-Strike 2, Team Fortress 2 and Dota 2. In the lawsuit, the New York AG stressed that Valve’s loot boxes are “particularly pernicious” because the games are popular among children and teenagers.

The lawsuit described the loot box model, which requires a player to open a mystery chest for the possibility of winning rare items, as “quintessential gambling.” Citing research, it argued that people introduced to gambling at an early age are at a significantly higher risk of developing gambling addictions later on. In addition, it explained that gambling is mostly illegal in New York.

In Valve’s games, players have to pay for chests or boxes as well as the keys to open them, and the company has reportedly sold billions of dollars’ worth of keys for Counter-Strike alone. The lawsuit said that Valve has made tens of millions of dollars in fees from the sale of virtual items on the Steam Community Market, as well. In addition to selling items on Steam for funds credited directly to their Steam Wallet, players can also sell them on third-party marketplaces for cash.

According to James’ office, Valve facilitates and even assists third-party marketplaces in their operations, based on its investigation. Engadget has asked Valve for a statement about the lawsuit, but we have yet to hear back. However, the company previously denied being involved with third-party marketplaces that allow the sales of its game items for real-world money. In a response to an inquiry by the Danish Gambling Authority, Valve explained that those third-party websites create sock puppet accounts to sell and receive items on Steam in exchange for cash. “[T]his behavior is in violation of our terms of service,” Valve said.

The lawsuit also pointed out that there’s a huge market for Counter-Strike skins and referenced a Bloomberg article from 2025, which reported that the market for those skins had already surpassed $4.3 billion. As an example of in-game items sold for real money, it cited the sale of a Counter-Strike 2 AK-47 skin in 2024 for $1 million. The Attorney General’s Office wants the court to stop Valve from violating New York laws, to make it give up money it allegedly earned from illegal activities and to pay a fine of three times what it allegedly earned from illegal business practices.

This article originally appeared on Engadget at https://www.engadget.com/gaming/ny-ag-valves-loot-boxes-can-get-kids-hooked-on-gambling-122503556.html?src=rss

Spotify can reorder your playlists by BPM and key

Spotify is rolling out a new feature that’s meant to make transitions between tracks even smoother. If you’ll recall, the streaming service released the ability to create customized transitions within playlists in August last year. It gave people a way to create uninterrupted progressions and eliminate awkward silences between songs. Now, Premium users can take that seamless flow even further by reordering the tracks in their playlists based on their keys and BPM, or beats per minute.

The new feature can rearrange a playlist in just a few taps. All paying users have to do is tap Mix on one of their playlists and then tap the Edit button. From there, they can scroll down to find the Smart Reorder option. Tapping it will automatically rearrange the songs according to their keys and BPM; users then just have to tap Save for the change to their playlist to take effect.
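To picture what that reordering amounts to, here’s a minimal, hypothetical sketch in Python. It is not Spotify’s actual Smart Reorder logic, and the Track fields are assumptions made for illustration; it simply sorts a playlist so songs that share a key sit together and each group runs from slower to faster tempos.

```python
# A minimal, hypothetical sketch of reordering tracks by key and tempo.
# This is NOT Spotify's actual Smart Reorder algorithm — just an illustration
# of grouping songs by musical key and ordering each group by BPM.
from dataclasses import dataclass


@dataclass
class Track:
    title: str
    key: int    # pitch class 0-11 (C=0, C#=1, ... B=11) — assumed metadata
    bpm: float  # tempo in beats per minute — assumed metadata


def smart_reorder(tracks: list[Track]) -> list[Track]:
    """Sort tracks so adjacent songs share a key and have similar tempos."""
    return sorted(tracks, key=lambda t: (t.key, t.bpm))


playlist = [
    Track("Song A", key=7, bpm=128.0),
    Track("Song B", key=2, bpm=100.0),
    Track("Song C", key=7, bpm=122.0),
]

for track in smart_reorder(playlist):
    print(track.title, track.key, track.bpm)
```

A real implementation would presumably also weigh harmonic compatibility between different keys and keep BPM jumps small, but the sketch captures the basic idea of ordering by key and tempo.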

Spotify says users have streamed over 220 hours of their mixed playlists since it introduced custom transitions last year. It also listed some of the most popular transitions on the platform, including The Weeknd’s Wake Me Up transitioning into After Hours and Flo Rida’s Low into Rihanna’s S&M.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/spotify-can-reorder-your-playlists-by-bpm-and-key-140000101.html?src=rss

xAI’s trade secret lawsuit against OpenAI has been dismissed

OpenAI has successfully convinced the court to dismiss the lawsuit filed by Elon Musk’s xAI, which accused the company of stealing its trade secrets. In her decision, US District Judge Rita F. Lin wrote that xAI’s complaint “does not point to any misconduct by OpenAI” and instead attributes all of the alleged misconduct to its eight former employees who “left for OpenAI at around the same time.”

Lin said that xAI accused two of its former employees of stealing its source code before leaving, at a time when they were already speaking to an OpenAI recruiter. However, the company didn’t say whether the recruiter told those former employees to do so. xAI’s lawsuit also accused two other former employees of keeping their work chats on their devices even after leaving, another of refusing to provide certifications related to confidential information after his departure, and another of unsuccessfully trying to access xAI hiring and data center optimization information when he was already working for OpenAI.

“Notably absent are allegations about the conduct of OpenAI itself,” the judge noted. xAI didn’t include any information directly accusing OpenAI of making those employees steal its trade secrets. It also didn’t allege that those former employees used any stolen trade secrets once they were working for OpenAI. To be precise, OpenAI’s motion to dismiss was granted with leave to amend, so the lawsuit may not be completely over just yet. xAI has until March 17, 2026 to file an amended complaint addressing the issues the judge raised in her decision.

OpenAI and xAI have a longstanding feud, and this is just one of the several lawsuits between the two companies. In fact, Musk has an ongoing complaint against OpenAI and Microsoft, accusing the former of violating its nonprofit status. Musk, who was an early funder of OpenAI, is now asking the company for $79 billion to $134 billion in damages from “wrongful gains.”

This article originally appeared on Engadget at https://www.engadget.com/ai/xais-trade-secret-lawsuit-against-openai-has-been-dismissed-101912599.html?src=rss

Samsung Galaxy Book 6 series will be available in the US starting on March 11

You can get any of the Samsung Galaxy Book 6 models in the US starting on March 11. In fact, you can make a reservation right now through Samsung’s website and its experience stores. The company launched the Book 6 series of laptops, namely the basic Book 6, the Book 6 Pro and the Book 6 Ultra, at CES earlier this year. They’re powered by Intel’s new Core Ultra Series 3 processors, which were also announced at CES and which promise great graphics and battery life.

All three models come in grey and with AI features, such as AI Select and Search, which you can use to look for information using natural language. The basic Book 6 laptop will set you back at least $1,050, while the Book 6 Pro’s prices start at $1,600. The Book 6 Ultra will cost you at least $2,450. The Galaxy Book 6 Pro will be available in 14- and 16-inch versions and will come equipped with up to Core Ultra X7 processors and Intel Arc graphics. Meanwhile, you can equip the 16-inch Galaxy Book 6 Ultra with up to Core Ultra X9 processors. The most expensive Book 6 promises significant performance improvements, thanks to its new 5th-generation MPU, Intel Arc graphics and NVIDIA’s RTX 50 series GPUs.

The Book 6 Ultra and the 16-inch Pro have slimmer profiles than their predecessors, though the former has a more traditional laptop shape and the latter looks more like the MacBook Air. It’s worth noting that Samsung redesigned the Ultra’s components across a larger surface area so that it can distribute heat more evenly. Both the Book 6 Pro and Ultra can last for up to 30 hours of video playback, since they feature Samsung’s longest-lasting batteries yet. Both models also come with AMOLED 2X (2,880 x 1,800) displays with refresh rates going up to 120Hz.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/samsung-galaxy-book-6-series-will-be-available-in-the-us-starting-on-march-11-125140613.html?src=rss

Telegram founder Pavel Durov is reportedly under criminal investigation in Russia

Pavel Durov, the founder of Telegram, is reportedly under criminal investigation by Russian authorities for “abetting terrorist activities.” According to the Financial Times, state-run publications are accusing Durov of enabling attacks on Russia and Telegram of becoming an intelligence tool for Ukraine and the West. Telegram was one of the apps that Russia blocked in the country just a few days ago, along with WhatsApp, in what seemed to be an effort to push local users towards the unencrypted state-owned app, Max.

When Telegram was banned, pro-Russian voices criticized the country’s decision because it was apparently harming frontline operations. Russia’s own soldiers use the app to communicate and coordinate their moves. Authorities near the Ukrainian border, for instance, send out warnings for incoming drone and missile attacks through the messaging app. Even Vladimir Putin’s spokesperson uses Telegram to speak to the media.

Now, the FT says Russia is accusing Telegram of being the main instrument for “NATO countries’ secret services and the Kyiv regime.” Rossiiskaya Gazeta, a Russian state-run publication, added that Telegram was “intercepting location data, selling secret information and intimidating soldiers and their families.” Digital platforms like Telegram, the publication said, are “becoming strategic weapons.” Rossiiskaya Gazeta said its information came from Russia’s Federal Security Service, the country’s primary domestic security agency.

Durov has yet to issue a statement, but after Russia blocked access to Telegram, he said the country was “restricting access” to the application to “force its citizens onto a state-controlled app built for surveillance and political censorship.” The Telegram founder was born in Russia and co-founded the country’s largest social network, VK. He left the country after the Kremlin pressured him to sell his stake in the social network.

This article originally appeared on Engadget at https://www.engadget.com/apps/telegram-founder-pavel-durov-is-reportedly-under-criminal-investigation-in-russia-121000511.html?src=rss