OpenAI’s robotics hardware lead resigns following deal with the Department of Defense

OpenAI's robotics hardware lead is out. Caitlin Kalinowski, who oversaw hardware within the robotics division of OpenAI, posted on X that she was resigning from her role, criticizing the company for rushing into a partnership with the Department of Defense without establishing proper guardrails. OpenAI told Engadget that there are no plans to replace Kalinowski.

Kalinowski, who previously worked at Meta before leaving to join OpenAI in late 2024, wrote on X that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." Responding to another post, the former OpenAI exec explained that "the announcement was rushed without the guardrails defined," adding that it was a "governance concern first and foremost."

OpenAI confirmed Kalinowski's resignation and said in a statement to Engadget that the company understands people have "strong views" about these issues and will continue to engage in discussions with relevant parties. The company also said in the statement that it does not support the practices Kalinowski raised concerns about.

"We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the OpenAI statement read.

Kalinowski's resignation may be the most high-profile fallout from OpenAI's decision to sign a deal with the Department of Defense. The decision came just after Anthropic refused to lift certain AI guardrails around mass surveillance and fully autonomous weapons. That said, even OpenAI CEO Sam Altman said he would amend the deal with the Department of Defense to prohibit spying on Americans.

Correction, March 8, 2026, 10:30AM ET: This story has been updated to correct Kalinowski's role at OpenAI to "robotics hardware lead" instead of "head of robotics."

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-robotics-hardware-lead-resigns-following-deal-with-the-department-of-defense-195918599.html?src=rss

Amazon.com is on the mend after experiencing technical issues

Amazon's website appears to be stabilizing after experiencing technical issues that kept users from logging in and prevented prices from displaying correctly. Downdetector reported a spike of outage reports around 2PM ET, but as of 5:56PM ET, user complaints have fallen significantly.

The Amazon.com homepage currently loads, and Engadget's staff have been able to load product pages and view prices without any problems. At the peak of the site's issues, neither was loading consistently, and clicking through in some cases surfaced an error page reading "Sorry, something went wrong on our end." Users also reported being unable to log into their accounts.

“We're sorry that some customers may be experiencing issues while shopping,” Amazon said in a statement to Engadget. “We appreciate customers’ patience as we work to resolve the issue." The company shared a similar sentiment with customers on X, confirming that it’s aware there’s a problem and acknowledging that it’s working on a fix. Amazon has yet to confirm whether the issue is fully resolved.

As a cloud provider through its Amazon Web Services (AWS) business, Amazon has experienced its fair share of outages, including one in October 2025 that took out services like Snapchat and Amazon's own Alexa voice assistant for hours. The company's website experiencing issues without a larger AWS outage seems a bit more unusual, and might suggest the problem lies outside of its cloud infrastructure.

Update, March 5, 5:56PM ET: Updated article to reflect improved performance on Amazon.com.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/amazoncom-is-on-the-mend-after-experiencing-technical-issues-211430657.html?src=rss

Canadian government says OpenAI will take further steps to strengthen safety protocols

The Canadian government says that OpenAI CEO Sam Altman has agreed to take steps to immediately strengthen safety protocols, according to a report by The Wall Street Journal. This follows a mass shooting at a high school in which OpenAI flagged the suspect and suspended his account, but did not alert authorities.

These changes look to primarily involve law enforcement, with commitments to notify police about potentially suspicious use of ChatGPT. We don't have any confirmation from the company at this time, but Canada's Artificial Intelligence Minister Evan Solomon says he "asked OpenAI to take several actions, which Altman has agreed to do."

Solomon attended a virtual meeting with Altman to discuss how the company "would include Canadian privacy, mental health and law enforcement experts into the process to identify and review high-risk cases involving Canadian users." He says OpenAI has pledged to provide a report to outline these new protocols.

He also asked Altman to make these changes retroactively and to review previous suspicious incidents on the platform, providing law enforcement with data when necessary. We don't know if OpenAI has consented to that part.

Engadget has reached out to OpenAI to ask about these changes and if they'll be exclusive to Canada. We'll update this post if we hear back.

This isn't the first step the company has taken to make things right with Canada. Ann O’Leary, OpenAI's VP of global policy, recently suggested that the company would be tweaking its detection systems to better prevent banned users from returning to the platform. The company banned the alleged shooter's original account due to "potential warnings of committing real-world violence," but he was able to create another one.

This article originally appeared on Engadget at https://www.engadget.com/ai/canadian-government-says-openai-will-take-further-steps-to-strengthen-safety-protocols-164151618.html?src=rss

X to require AI labels on armed conflict videos from paid creators, citing ‘times of war’

X will suspend creators from its revenue sharing program if they post AI-generated videos depicting armed conflicts without disclosing they were made with AI. Head of product Nikita Bier announced the policy change on March 3, saying first-time violators will be cut off for 90 days and repeat offenders will be permanently removed from the program.

The policy is notably narrow, applying only to creators enrolled in the platform’s revenue sharing program and only to AI-generated videos of armed conflicts, not AI content in general or non-monetized accounts. Violations will be flagged through Community Notes, X's crowd-sourced fact-checking system, or by detecting metadata from generative AI tools. Bier framed the change as necessary “during times of war,” though the current conflict unfolding between the United States, Israel and Iran has not been formally, or at least not legally, declared a war. Of course, the US has not formally declared war since 1942.

The quality of AI video generation has progressed at a rapid pace, and generated content has become almost indistinguishable from real footage for most viewers. X already watermarks images and videos generated by its Grok chatbot but has not previously required users to disclose AI-generated content. The platform is separately testing a broader AI labeling toggle that would let users mark any post as containing synthetic content, as first reported by Social Media Today, though X has not shared a timeline for that feature.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-to-require-ai-labels-on-armed-conflict-videos-from-paid-creators-citing-times-of-war-183631400.html?src=rss

AI data centers could reduce power draw on demand, study says

Apparently, AI data centers are capable of sucking less (power, that is). A recent UK trial demonstrated that they can adjust their energy demands dynamically without disrupting critical workloads. This contrasts with data centers' current approach of always-on power draw, which can strain grids and drive up prices for everyone.

Over five days in December 2025, more than 200 simulated "grid events" tested a London data center’s ability to adjust its energy use on the fly. The trial used software from Emerald AI, which was involved in the study. Other partners included NVIDIA, National Grid, Nebius and the nonprofit Electric Power Research Institute.

In each simulated grid event, the data center successfully adjusted its energy use to the requested level. It reduced power draw by up to 40 percent, while critical workloads continued to run as normal throughout the trial.

The data center successfully reacted to spikes in demand during soccer match halftimes. In one case, it reduced its power draw by 10 percent for up to 10 hours. It also managed to cut its demand quickly: One event saw the data center reduce its load by 30 percent in only 30 seconds.

The study will serve as a blueprint for a 100MW “power-flexible AI factory” that NVIDIA plans to operate in Virginia. "This trial proves that NVIDIA-powered infrastructure can act as a grid-aware asset, modulating demand in real-time to support stability," Josh Parker, NVIDIA's sustainability lead, wrote in a statement. "By making AI workloads responsive, we accelerate deployment while reducing the need for costly grid upgrades."

The organizations involved in the study say they'll share their data with the AI industry, regulators and policymakers to try to influence their approach. Fortunately, we don’t need to hope that data center operators’ altruism (ha) will lead to their cooperation. Agreeing to curb usage during peak demand could be good for their balance sheets and lead to faster approvals for new data center grid connections. "We would love to get to a point where we can get customers on the network in two years, and this is part of that," Steve Smith, president of National Grid Partners, told Bloomberg.

This article originally appeared on Engadget at https://www.engadget.com/ai/ai-data-centers-could-reduce-power-draw-on-demand-study-says-180628982.html?src=rss

Anthropic brings memory to Claude’s free plan

Anthropic is bringing another paid feature to Claude's free tier. The next time you chat with Claude, you'll have the option to have it reference your previous conversations to inform its outputs. Anthropic first made its chatbot capable of remembering past interactions last August, before giving it the ability to compartmentalize memories in the fall. Making memory a free feature is well-timed; earlier today, Anthropic made it easier for users to import their past conversations with a competing chatbot into Claude. If you enable memory and later decide to turn it off, you can either pause the feature, preserving Claude’s memories for use down the road, or delete them completely so they’re not saved on Anthropic’s servers.

Claude is enjoying newfound popularity, having recently jumped to the number one spot in the App Store's free app charts. This comes while Anthropic is engaged in a high-stakes contract dispute with the US government over AI safeguards. On Friday, US Defense Secretary Pete Hegseth labeled the company a "supply chain risk" after it refused to sign a contract that would allow the Pentagon to use Anthropic models for mass surveillance of Americans and in fully autonomous weapons. Following Hegseth's announcement, Anthropic vowed to challenge the designation. For now, we’re waiting to see how the dispute plays out, and what it might mean for Anthropic.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-brings-memory-to-claudes-free-plan-220729070.html?src=rss

Anthropic’s Claude can now absorb your past conversations with other AI chatbots

Anthropic has made switching to its Claude AI chatbot easier than ever. The company announced a new memory import tool that can extract all of a competing AI chatbot's memories and context about you into a text prompt that can be fed into Claude.

With Anthropic's prompt, you can then copy and paste the output into Claude's memories, and the AI chatbot will pick up where you left off with another AI chatbot, whether it's ChatGPT, Gemini or Copilot. Anthropic said it'll take about 24 hours for Claude to assimilate the new context, but you'll be able to see the change by clicking on the "See what Claude learned about you" button. Claude users can even tweak what the AI chatbot remembers in the "Manage memory" section in the app's settings. Anthropic pointed out that Claude is meant to focus on "work-related topics to enhance its effectiveness as a collaborator," adding that it might not remember personal details that are unrelated to work.

Anthropic's timing doesn't seem to be a coincidence. Claude recently jumped to the number one spot in the App Store's free app charts, dethroning ChatGPT in the process. The rise in popularity likely stems from Anthropic's recent dispute with the Department of Defense, in which it refused to budge on AI guardrails related to mass domestic surveillance and fully autonomous weapons. OpenAI, meanwhile, is taking over Anthropic's vacated role with the Department of Defense, prompting some users to boycott ChatGPT and cancel their subscriptions.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropics-claude-can-now-absorb-your-past-conversations-with-other-ai-chatbots-153201656.html?src=rss

Alaska could be the next state to crack down on AI-generated CSAM and restrict kids’ social media use

Alaska's House of Representatives unanimously passed HB47, a bill that imposes sweeping limits on when and how minors use social media apps, along with bans on generating or distributing harmful deepfakes of children.

The bill's original form focused on prohibiting the possession and distribution of AI-generated sexually explicit images of children, but Alaska lawmakers added amendments imposing social media restrictions. The proposed limitations include a statewide curfew on social media use between 10:30PM and 6:30AM, a ban on "addictive design features" and a requirement that social media platforms verify user ages and obtain parental consent for minors.

While the House bill saw 39 votes in favor and zero against, the debate offered some hints at potential upcoming revisions. Before the bill went to a vote, some House representatives expressed concern about adding such broad rules on social media without first consulting the companies behind the platforms.

The bill still has to make its way through the Alaska State Senate, which has already introduced a companion bill, and then be signed by the governor. Alaska is following in the footsteps of many other states; the House modeled HB47's social media amendments after Utah's law. Utah was the first state to propose social media restrictions for kids, though its law was later blocked by a preliminary injunction.

This article originally appeared on Engadget at https://www.engadget.com/social-media/alaska-could-be-the-next-state-to-crack-down-on-ai-generated-csam-and-restrict-kids-social-media-use-190506366.html?src=rss

FCC approves the merger of cable giants Cox and Charter

The Federal Communications Commission has given the go-ahead for two of the US' biggest cable providers, Charter Communications and Cox Communications, to merge. Charter announced its intention to acquire Cox for $34.5 billion in May 2025, with specific plans to inherit Cox's managed IT, commercial fiber and cloud businesses, while folding the company's residential cable service into a subsidiary.

“By approving this deal, the FCC ensures big wins for Americans," FCC Chairman Brendan Carr said in a statement. "This deal means that jobs are coming back to America that had been shipped overseas. It means that modern, high-speed networks will get built out in more communities across rural America. And it means that customers will get access to lower priced plans. On top of this, the deal enshrines protections against DEI discrimination."

The FCC claims that Charter plans to invest "billions" to upgrade its network following the closure of the deal, leading to "faster broadband and lower prices." The company's "Rural Construction Initiative" will also extend those improvements to rural states lacking in consistent internet service, a project the FCC was heavily invested in during the Biden administration, but has been pulling back from since President Donald Trump appointed Carr. The FCC also claims Charter will onshore jobs currently handled off-shore by Cox employees and commit to "new safeguards to protect against DEI discrimination," which essentially amounts to hiring, recruiting and promoting employees based on "skills, qualifications, and experience."

While Carr's FCC paints a rosy picture of Charter's acquisition, history offers multiple examples of mergers having the opposite effect on jobs and pricing. For example, redundancies created when T-Mobile merged with Sprint in 2020 led to a wave of layoffs at the carrier. And funnily enough, in 2018, not long after the FCC approved Charter's merger with Time Warner Cable, the company raised prices on its Spectrum service by over $91 a year.

The FCC's obsession with diversity, equity and inclusion as part of the deal is stranger, if only because it appears to fall outside of the commission's purpose of maintaining fair competition in the telecommunications industry. It does fit with other mergers the FCC has approved under Carr, however. Skydance's acquisition of Paramount was approved in 2025 under the condition it wouldn't establish any DEI programs.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/fcc-approves-the-merger-of-cable-giants-cox-and-charter-230258865.html?src=rss

Google Maps will finally be usable in South Korea

Google will finally be able to provide real-time driving and walking directions in South Korea, The New York Times reported. The company has received permission from the nation's Transport Ministry to export geographic data out of the country, which will allow it to provide GPS services as well as detailed listings for restaurants and other businesses. 

"We welcome today’s decision and look forward to our ongoing collaboration with local officials to bring a fully functioning Google Maps to Korea," Google's senior executive Cris Turner told the NYT in a statement. However, the approval is contingent “on the condition that strict security requirements are met,” a spokesperson from the Transport Ministry said. Those conditions reportedly restrict Google from displaying sensitive military sites and longitude and latitude coordinates. 

South Korea has generally restricted the export of 1:5,000 scale map data over national security concerns, as it's still technically at war with its neighbor North Korea. Google hasn't been able to provide mapping directions or business details since it arrived in the country, despite applying twice, in 2007 and 2016.

This lack of data sharing has reportedly been a bone of contention in trade talks with the US. Google argued that it was unfairly handicapped by the restrictions that allowed local apps like Naver to thrive. 

However, critics in the nation have expressed concern that Google could now come in and monopolize the market. "If Naver and Kakao are weakened or pushed out and Google later raises prices, that becomes a monopoly. Then, even companies that rely on map services — logistics firms, for example — become dependent [on Google]," geography professor Choi Jin-mu told Reuters.

This article originally appeared on Engadget at https://www.engadget.com/apps/google-maps-will-finally-be-usable-in-south-korea-104301396.html?src=rss