Amazon wins a temporary injunction against Perplexity’s Comet browser

Amazon has secured a temporary win in its fight with Perplexity over the use of AI shopping bots. Bloomberg reported that a San Francisco federal court has determined that Perplexity must stop using its Comet web browser's AI agent to make purchases for users on Amazon's marketplace. The AI company has a week to appeal the decision; otherwise, it has been ordered to stop accessing any password-protected areas of Amazon's systems and to destroy its copies of Amazon's data while the two companies continue to argue their cases.

"Amazon has provided strong evidence that Perplexity, through its Comet browser, accesses with the Amazon user's permission but without authorization by Amazon, the user's password-protected account," District Judge Maxine Chesney wrote in placing the temporary block.

"The preliminary injunction will prevent Perplexity’s unauthorized access to the Amazon store and is an important step in maintaining a trusted shopping experience for Amazon customers," an Amazon spokesperson told Bloomberg.

Amazon sent a cease-and-desist letter to Perplexity over the AI company's shopping bots in November. According to Amazon, use of the Comet agent to make purchases is a violation of its terms of service. "Perplexity will continue to fight for the right of internet users to choose whatever AI they want," a representative from Perplexity said of this week's decision.


The Oversight Board says Meta needs new rules for AI-generated content

The Oversight Board is once again urging Meta to overhaul its rules around AI-generated content. This time, the board says Meta should create a separate rule for AI content that's independent of its misinformation policy, invest in more reliable detection tools and make better use of digital watermarks, among other changes.

The group's recommendations stem from an AI-generated video shared last year that claimed to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict in 2025. The clip, which racked up more than 700,000 views, was posted by an account that claimed to be a news outlet but was actually run by someone in the Philippines.

After the video was reported to Meta, the company declined to remove it or add a "high risk" AI label that would have clearly indicated the content had been created or manipulated with AI. The board overturned Meta's decision not to add the "high risk" label and says the case shines a light on several areas where the company's current AI rules are falling short.

"Meta must do more to address the proliferation of deceptive AI- generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what is real and fake," the board wrote in its decision. Meta eventually disabled three accounts linked to the page after the board flagged "obvious signals of deception."

One of the board's top recommendations is that Meta create a dedicated rule for AI-generated content that's separate from its misinformation policy. The rule, according to the board, should include specifics about how and when users are required to label AI content as well as information about how Meta penalizes those who break the rule. 

The board was also highly critical of how Meta uses its current "AI Info" labels, noting that the way they are applied is "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content,” especially in times of conflict or crisis. “A system overly dependent on self-disclosure of AI usage and escalated review (which occurs infrequently) to properly label this output cannot meet the challenges posed in the current environment.”

Meta, the board said, also needs to invest in more sophisticated detection technology that can reliably label AI media, including audio and video. The group added that it was "concerned" about reports that the company is "inconsistently implementing" digital watermarks on AI content created by its own AI tools. 

In a statement, Meta said it “welcomed” the decision and that it would also take action “on content that is identical and in the same context” when “it is technically and operationally possible to do so.” The company has 60 days to formally respond to the board's recommendations.

The decision isn't the first time the board has been critical of Meta's handling of AI content. The group has described the company's manipulated media rules as "incoherent" on two other occasions, and has criticized it for relying on third parties, including fact-checking organizations, to flag problematic content. Meta's reliance on fact checkers and other "trusted partners" was again raised in this case, with the board saying that it had heard from these groups that Meta "is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta’s internal teams." Meta, the board writes, "should be capable of conducting such assessments of harm itself, rather than rely solely on partners reaching out to them during an armed conflict."

While the Oversight Board's decision relates to a post from last year, the issue of AI-generated content during armed conflicts has taken on a new urgency during the latest conflict in the Middle East. Since the start of the US and Israel's strikes on Iran earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media. The board, which has previously hinted that it would like to work with generative AI companies, included a suggestion that would seem to apply not just to Meta.

"The industry needs coherence in helping users distinguish deceptive AI-generated content and platforms should address abusive accounts and pages sharing such output," it wrote.

Update, March 10, 10:53AM ET: This story was updated to reflect Meta’s response to the Oversight Board.


You can (sort of) block Grok from editing your uploaded photos

People can now block xAI's Grok chatbot from creating modifications of their uploaded images on the social network X. Neither X nor xAI, both Elon Musk-owned businesses, has made a public announcement about the feature, which users began noticing in the image/video upload menu of the iOS app over the past few days.

This option is likely a response to Grok's latest scandal, which began at the start of 2026, when newly added image generation tools in the chatbot were used to create about 3 million sexualized or nudified images. An estimated 23,000 of the images made in that 11-day period were sexualized depictions of children, according to the Center for Countering Digital Hate. Grok is now facing two separate investigations by regulators in the EU over the issue.

The positive side of the recent feature addition is that X and xAI have taken a step toward limiting inappropriate uses of Grok. This block is a simple toggle and it hasn't been buried in the UI. So that's nice.

The negative side, however, is that this is a token gesture that doesn't amount to any serious improvement in how Grok works or can be used. It's great that the chatbot won't alter the file uploaded by one person, but as reported by The Verge, the block only prevents tagging Grok in a reply to create an image edit. There are plenty of workarounds for those dedicated individuals who insist on using generative AI to undress people without their consent or knowledge.

Hopefully xAI has more powerful protective tools in the works. The limits on Grok putting real people in scanty clothing that X announced in January seem to have had only partial success at best. If this additional and narrow safeguard is all the company offers, its claims of being a zero-tolerance space for nonconsensual nudity are going to ring hollow. Especially since, as we noted at the time, xAI could stop allowing image generation altogether until the issue is properly and thoroughly fixed.


Bluesky’s CEO is stepping down after nearly 5 years

Bluesky CEO Jay Graber, who has led the upstart social platform since 2021, is stepping down from her role as its top executive. Toni Schneider, an advisor to and investor in Bluesky, will take over the job temporarily while Graber stays on as Chief Innovation Officer.

"As Bluesky matures, the company needs a seasoned operator focused on scaling and execution, while I return to what I do best: building new things," Graber wrote in a blog post. Schneider, who was previously CEO at Wordpress parent Automattic, will be that "experienced operator and leader" while Blueksy's board searches for a permanent CEO, she said.

Graber's history with Bluesky dates back to its early days as a side project at Jack Dorsey's Twitter. She was officially brought on as CEO in 2021 as Bluesky spun off into an independent company (it officially ended its association with Twitter in 2022 and Dorsey cut ties with Bluesky in 2024). She led the company through its launch and early viral success as it grew from an invitation-only platform to the 43 million-user service it is today. During that time, she's become known as an advocate for decentralized social media and for trolling Mark Zuckerberg's t-shirt choices. 

Nearly three years since it launched publicly, Bluesky has carved out a small but influential niche in the post-Twitter social landscape. The platform is less than a third of the size of Meta's competitor, Threads, which has also copied some of Bluesky's signature features. Bluesky also has yet to roll out any meaningful monetization features, though it has teased a premium subscription service in the past.

As Chief Innovation Officer, Graber will presumably still be an influential voice at the company going forward. And, as Wired points out, she still has a seat on Bluesky's board, so she will get some say in who steps into the role permanently. Until then, Schneider, who is also a partner at VC firm True Ventures, will lead the company. "I deeply believe in what this team has built and the open social web they're fighting for," he wrote in a post on Bluesky.



OpenAI is reportedly pushing back the launch of its ‘adult mode’ even further

Here comes another disappointment for ChatGPT users. As first reported by Sources' Alex Heath, OpenAI is yet again delaying its "adult mode" for ChatGPT. A company spokesperson told Heath that "we're pushing out the launch of adult mode so we can focus on work that is a higher priority for more users right now."

More specifically, OpenAI's spokesperson said that things like "gains in intelligence, personality improvements, personalization, and making the experience more proactive" were being prioritized instead. The company still wants to release an adult mode, but it will "take more time," according to the spokesperson.

The reveal of ChatGPT's adult mode dates back to October, when OpenAI's CEO, Sam Altman, posted on X that the company would roll out more age-gating as part of its "treat adults like adults" principle, adding that this would include "erotica for verified adults." Altman originally said this adult mode would be available in December, but an OpenAI exec later said during a December briefing that it would instead debut in the first quarter of 2026. 

With Q1 drawing to a close, we no longer have a timeframe for when ChatGPT's adult mode will arrive. However, OpenAI began rolling out its age prediction tool in January, which may go hand-in-hand with the upcoming adult mode.


OpenAI’s robotics hardware lead resigns following deal with the Department of Defense

OpenAI's robotics hardware lead is out. Caitlin Kalinowski, who oversaw hardware within the robotics division of OpenAI, posted on X that she was resigning from her role, criticizing the company's haste in partnering with the Department of Defense without first defining proper guardrails. OpenAI told Engadget that there are no plans to replace Kalinowski.

Kalinowski, who previously worked at Meta before leaving to join OpenAI in late 2024, wrote on X that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." Responding to another post, the former OpenAI exec explained that "the announcement was rushed without the guardrails defined," adding that it was a "governance concern first and foremost."

OpenAI confirmed Kalinowski's resignation and said in a statement to Engadget that the company understands people have "strong views" about these issues and will continue to engage in discussions with relevant parties. The company also said in the statement that it doesn't support the practices Kalinowski warned about.

"We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the OpenAI statement read.

Kalinowski's resignation may be the most high-profile fallout from OpenAI's decision to sign a deal with the Department of Defense. The decision came just after Anthropic refused to lift certain AI guardrails around mass surveillance and fully autonomous weapons. Even OpenAI's CEO, Sam Altman, has said that he would amend the deal with the Department of Defense to prohibit spying on Americans.

Correction, March 8 2026, 10:30AM ET: This story has been updated to correct Kalinowski's role at OpenAI to "robotics hardware lead" instead of "head of robotics."


Amazon.com is on the mend after experiencing technical issues

Amazon's website appears to be stabilizing after experiencing technical issues that kept users from logging in and prevented prices from displaying correctly. DownDetector reported a spike of outage reports around 2PM ET, but as of 5:56PM ET, user complaints have fallen significantly.

The Amazon.com homepage currently loads, and Engadget's staff have been able to load product pages and view prices without any problems. During the peak of the site’s issues, neither was loading consistently, and clicking through in some cases surfaced an error page reading "Sorry, something went wrong on our end." Users also reported being unable to log into their accounts.

“We're sorry that some customers may be experiencing issues while shopping,” Amazon said in a statement to Engadget. “We appreciate customers’ patience as we work to resolve the issue.” The company shared a similar sentiment with customers on X, confirming that it’s aware there’s a problem and acknowledging that it's working on a fix. Amazon has yet to confirm whether the issue is fully resolved.

As a cloud provider through its Amazon Web Services (AWS) business, Amazon has experienced its fair share of outages, including one in October 2025 that took out services like Snapchat and Amazon's own Alexa voice assistant for hours. The company's website experiencing issues without a larger AWS outage seems a bit more unusual, and might suggest the problem lies outside of its cloud infrastructure.

Update, March 5, 5:56PM ET: Updated article to reflect improved performance on Amazon.com.


Canadian government says OpenAI will take further steps to strengthen safety protocols

The Canadian government says that OpenAI CEO Sam Altman has agreed to take steps to immediately strengthen safety protocols, according to a report by The Wall Street Journal. This follows a mass shooting at a high school in which OpenAI flagged the suspect and suspended his account but did not alert authorities.

These changes look to primarily involve law enforcement, with commitments to notify police about potentially suspicious use of ChatGPT. We don't have any confirmation from the company at this time, but Canada's Artificial Intelligence Minister Evan Solomon says he "asked OpenAI to take several actions, which Altman has agreed to do."

Solomon attended a virtual meeting with Altman to discuss how the company "would include Canadian privacy, mental health and law enforcement experts into the process to identify and review high-risk cases involving Canadian users." He says OpenAI has pledged to provide a report to outline these new protocols.

He also asked Altman to make these changes retroactively and to review previous suspicious incidents on the platform, providing law enforcement with data when necessary. We don't know if OpenAI has consented to that part.

Engadget has reached out to OpenAI to ask about these changes and if they'll be exclusive to Canada. We'll update this post if we hear back.

This isn't the first step the company has taken to make things right with Canada. Ann O’Leary, OpenAI's VP of global policy, recently suggested that the company would be tweaking its detection systems to better prevent banned users from returning to the platform. The company banned the alleged shooter's original account due to "potential warnings of committing real-world violence," but he was able to create another one.


X to require AI labels on armed conflict videos from paid creators, citing ‘times of war’

X will suspend creators from its revenue sharing program if they post AI-generated videos depicting armed conflicts without disclosing they were made with AI. Head of product Nikita Bier announced the policy change on March 3, saying first-time violators will be cut off for 90 days and repeat offenders will be permanently removed from the program.

The policy is notably narrow, applying only to creators enrolled in the platform’s revenue sharing program and only to AI-generated videos of armed conflicts, not AI content in general or non-monetized accounts. Violations will be flagged through Community Notes, X's crowd-sourced fact-checking system, or by detecting metadata from generative AI tools. Bier framed the change as necessary “during times of war,” though the current conflict unfolding between the United States, Israel and Iran has not been formally, or at least not legally, declared a war. Of course, the US has not formally declared war since 1942.
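For illustration only, here's a minimal Python sketch of what metadata-based detection of AI-generated media can look like, assuming uploads that still carry C2PA/JUMBF provenance manifests or the IPTC "trainedAlgorithmicMedia" digital source type. X has not described its actual detection pipeline, and the file name and marker list below are assumptions, not its implementation.

```python
# Crude heuristic sketch: scan a media file's raw bytes for provenance
# markers that common generative tools embed. This is NOT X's pipeline;
# it only illustrates the general idea of metadata-based detection.
from pathlib import Path

# Byte signatures associated with AI-provenance metadata (illustrative list).
AI_MARKERS = [
    b"c2pa",                     # C2PA manifest label inside JUMBF boxes
    b"jumb",                     # JUMBF superbox type used by C2PA in JPEG/MP4
    b"trainedAlgorithmicMedia",  # IPTC digital source type for synthetic media
]

def find_ai_provenance(path: str) -> list[str]:
    """Return any provenance markers found in the file.

    A hit means the file *claims* AI provenance; an empty result proves
    nothing, since metadata is easily stripped by re-encoding or screenshots.
    """
    data = Path(path).read_bytes()
    return [marker.decode() for marker in AI_MARKERS if marker in data]

if __name__ == "__main__":
    hits = find_ai_provenance("upload.mp4")  # hypothetical file name
    print("possible AI provenance:" if hits else "no markers found:", hits)
```

The asymmetry in the docstring is the practical limit of this approach: metadata can confirm AI origin when present, but its absence is meaningless, which is presumably why X pairs it with Community Notes.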

The quality of AI video generation has progressed at a rapid pace, and generated content has become almost indistinguishable from real footage for most viewers. X already watermarks images and videos generated by its Grok chatbot but has not previously required users to disclose AI-generated content. The platform is separately testing a broader AI labeling toggle that would let users mark any post as containing synthetic content, as first reported by Social Media Today, though X has not shared a timeline for that feature.


AI data centers could reduce power draw on demand, study says

Apparently, AI data centers are capable of sucking less (power, that is). A recent UK trial demonstrated that they can adjust their energy demands dynamically without disrupting critical workloads. This contrasts with data centers' current approach of always-on power draw, which can strain grids and drive up prices for everyone.

Over five days in December 2025, more than 200 simulated "grid events" tested a London data center’s ability to adjust its energy use on the fly. The trial used software from Emerald AI, which was involved in the study. Other partners included NVIDIA, National Grid, Nebius and the nonprofit Electric Power Research Institute.

In each simulated grid event, the data center successfully adjusted its energy use to the requested level. It reduced power draw by up to 40 percent, while critical workloads continued to run as normal throughout the trial.

The data center successfully reacted to spikes in demand during soccer match halftimes. In one case, it reduced its power draw by 10 percent for up to 10 hours. It also managed to cut its demand quickly: One event saw the data center reduce its load by 30 percent in only 30 seconds.
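To make the mechanism concrete, here's a minimal sketch of the demand-response logic such a trial implies, assuming a simple model in which non-critical workloads (like training runs) can be paused while critical ones (like production inference) keep running. All names, numbers and the shedding heuristic are invented for illustration; Emerald AI has not published its software's internals.

```python
# Toy demand-response controller: on a grid signal requesting a cut in kW,
# pause the largest non-critical jobs until the target is met. Critical
# workloads are never touched, mirroring the trial's headline result.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_kw: float
    critical: bool  # critical jobs must keep running through grid events

def respond_to_grid_event(jobs: list[Job], target_kw: float) -> list[Job]:
    """Pick non-critical jobs to pause until the requested kW cut is met."""
    shed, paused = 0.0, []
    # Shed the largest flexible loads first to hit the target quickly.
    for job in sorted(jobs, key=lambda j: j.power_kw, reverse=True):
        if shed >= target_kw:
            break
        if not job.critical:
            paused.append(job)
            shed += job.power_kw
    return paused

jobs = [
    Job("llm-training", 400.0, critical=False),
    Job("batch-eval", 150.0, critical=False),
    Job("prod-inference", 200.0, critical=True),
]
# Simulated grid event asking for a 40 percent cut of a 750 kW total load.
for job in respond_to_grid_event(jobs, target_kw=0.4 * 750):
    print(f"pausing {job.name} ({job.power_kw} kW)")
```

In practice the hard part is not the selection logic but checkpointing training jobs fast enough to deliver cuts like the 30-percent-in-30-seconds response the trial reported.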

The study will serve as a blueprint for a 100MW “power-flexible AI factory” that NVIDIA plans to operate in Virginia. "This trial proves that NVIDIA-powered infrastructure can act as a grid-aware asset, modulating demand in real-time to support stability," Josh Parker, NVIDIA's sustainability lead, wrote in a statement. "By making AI workloads responsive, we accelerate deployment while reducing the need for costly grid upgrades."

The organizations involved in the study say they'll share their data with the AI industry, regulators and policymakers to try to influence their approach. Fortunately, we don’t need to hope that data center operators’ altruism (ha) will lead to their cooperation. Agreeing to curb usage during peak demand could be good for their balance sheets and lead to faster approvals for new data center grid connections. "We would love to get to a point where we can get customers on the network in two years, and this is part of that," Steve Smith, president of National Grid Partners, told Bloomberg.
