Apple, Amazon join push for looser greenhouse emissions reporting

The Greenhouse Gas Protocol, a widely used international environmental standard for measuring and reporting emissions, is considering changes to how certain types of emissions are reported. Advocates for the new guidance argue that the current rules make it too easy for businesses to overstate their commitments to environmentally friendly operations, such as being powered by renewable energy or making progress toward net-zero emissions.

Today, some major tech companies joined a call pushing back against the new guidance, asking for the new reporting rules to be optional rather than required. The joint statement argued that the proposed policies would reduce investments in sustainability programs and increase electricity prices. Apple and Amazon are among the more than 60 companies that signed the letter, Bloomberg reported. 

The protocol's three tiers of emissions give a clearer picture of companies' environmental efforts and how effective they are at reducing emissions. Scope 1 includes emissions from sources directly owned or controlled by a business, while Scope 2 covers "how corporations measure emissions from purchased or acquired electricity, steam, heat and cooling." Scope 3 is the catch-all for any other emissions produced within a business's value chain. The newly proposed changes to the Scope 2 guidance would place tighter requirements on how companies use renewable energy certificates to offset their electricity emissions. Rather than purchasing clean energy certificates at any point during the year, companies would have to source clean energy that is both geographically close to and available at the same time as their grid-derived power. Any changes adopted by the Greenhouse Gas Protocol could take effect as early as next year.
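To see why the proposed matching rule is stricter, consider a rough sketch (not part of the GHG Protocol itself, and using entirely hypothetical numbers): under today's annual accounting, certificates bought at any point in the year can offset any consumption, while hourly matching only counts clean energy delivered in the same hour as the load it offsets.

```python
# Illustrative comparison of annual vs. hourly renewable-certificate matching.
# The function names and numbers are hypothetical, not from the GHG Protocol.

def annual_match(consumption, clean_purchased):
    """Annual matching: certificates from any time of year can offset
    any consumption, so coverage is a simple yearly ratio."""
    total_use = sum(consumption)
    return min(sum(clean_purchased), total_use) / total_use

def hourly_match(consumption, clean_purchased):
    """Hourly matching: clean energy only counts against consumption
    in the same hour (and, under the proposal, the same grid region)."""
    matched = sum(min(use, clean) for use, clean in zip(consumption, clean_purchased))
    return matched / sum(consumption)

# Flat 10 MWh of load every hour; a solar-heavy purchase of 30 MWh
# during eight daytime hours and nothing at night. Totals are equal.
consumption = [10] * 24
clean = [0] * 8 + [30] * 8 + [0] * 8

print(f"annual coverage: {annual_match(consumption, clean):.0%}")  # 100%
print(f"hourly coverage: {hourly_match(consumption, clean):.0%}")  # 33%
```

The same purchase that looks like "100 percent renewable" on an annual basis covers only a third of consumption once each hour has to stand on its own, which is the gap the proposed Scope 2 guidance targets.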

This article originally appeared on Engadget at https://www.engadget.com/big-tech/apple-amazon-join-push-for-looser-greenhouse-emissions-reporting-182314690.html?src=rss

Monterey Park, California has banned any data centers within its city limits

Monterey Park's city council has moved to ban construction of any data centers within its borders. The California city's leaders placed a permanent ban on these buildings, labeling them a public nuisance. A proposed plan to construct a 250,000-square-foot data center was stopped after residents and advocates pushed back against the project.

Tech journalist Brian Merchant reported on the public comment phase of the city council meeting where residents spoke decisively about data centers. "I can tell you that this issue has brought left, right and center together. It’s a quality of life issue," one commenter said. "Don’t let the rich steal our future."

Monterey Park may be the first US city to lay down the law blocking data center projects, but others are primed to follow suit. New York's state leadership is working on legislation that would prevent data center construction for three years. Maine has a similar bill that has already made it to the governor's desk. At the federal level, Rep. Alexandria Ocasio-Cortez (D-NY) and Senator Bernie Sanders (I-VT) have proposed a ban on building new data centers until there are more guardrails in place for AI development and environmental security. 

Existing facilities have also faced some pushback. For instance, the NAACP is suing xAI for alleged violations of the Clean Air Act at its data center in South Memphis.

This article originally appeared on Engadget at https://www.engadget.com/ai/monterey-park-california-has-banned-any-data-centers-within-its-city-limits-180426656.html?src=rss

Homeland Security reportedly wants to develop smart glasses for ICE

The Department of Homeland Security (DHS) is developing smart glasses that could be used to collect intelligence on immigrants and US citizens, journalist Ken Klippenstein reported. The devices would help ICE agents identify "illegal aliens" from a distance by capturing video and comparing it against biometric identifiers like facial features and walking gait, according to budget documents seen by Klippenstein. The DHS wants to deploy the "ICE Glasses" by September 2027.

"The project will deliver innovative hardware, such as operational prototypes of smart glasses, to equip agents with real-time access to information and biometric identification capabilities in the field," the document states. The glasses could allow agents to compare observed subjects against existing biometric databases and identify them in real time during interactions. 

Such devices could help make surveillance of US residents "ubiquitous," according to the report. "It might be portrayed as seeking to identify illegal aliens on the streets, but the reality is that a push in this direction affects all Americans, particularly protestors," a DHS lawyer speaking on the condition of anonymity told Klippenstein. 

The deployment of such devices is worrying to civil liberty groups, particularly in light of recent law enforcement activities under the Trump administration. The FBI was reportedly directed by the Department of Justice to "compile a list of groups or entities" who demonstrate "anti-Americanism," according to a previous Klippenstein investigation.

It's not the first time smart glasses have come up in reports about the DHS. An investigation by The Independent last month found that ICE and Border Patrol agents in six states were using Meta's AI smart glasses of their own accord, in possible violation of DHS rules. Congress has reportedly been notified of the DHS's ICE Glasses project but has yet to comment publicly.


This article originally appeared on Engadget at https://www.engadget.com/wearables/homeland-security-reportedly-wants-to-develop-smart-glasses-for-ice-093449347.html?src=rss

The UK government reportedly wants Anthropic to expand its presence in London

While the US and Anthropic are in the midst of a major dispute, the UK is trying to sway the San Francisco-based AI company to expand its presence on English soil. According to a report from The Financial Times, staffers at the UK's Department for Science, Innovation and Technology have worked on proposals that include expanding Anthropic's office in London, along with a potential dual stock listing.

The UK's strategy follows a public falling-out between Anthropic and the US Department of Defense earlier this year. After the AI company said it wouldn't budge on certain AI guardrails, the Department of Defense pulled its contract and eventually designated Anthropic a supply chain risk. While that designation is temporarily blocked by a court-ordered injunction, the feud is far from over. In the meantime, the UK's efforts to court Anthropic have ramped up in recent weeks thanks to the company's disagreements with the US, according to FT's sources.

With no end in sight for the debacle with the Department of Defense, Anthropic's CEO, Dario Amodei, is expected to visit the UK in May, according to FT. However, even in London, Anthropic will have to compete against OpenAI, which already committed to expanding its footprint in the English capital in February. 

This article originally appeared on Engadget at https://www.engadget.com/ai/the-uk-government-reportedly-wants-anthropic-to-expand-its-presence-in-london-174201049.html?src=rss

Court temporarily blocks US government from labeling Anthropic as a ‘supply chain risk’

A federal court has granted Anthropic’s request for a preliminary injunction, preventing the government from banning its products for federal use and from formally labeling it a “supply chain risk,” at least for now. If you’ll recall, things turned sour between the company and the Trump administration when Anthropic refused to change the terms of its contract to allow the government to use its technology for mass surveillance and the development of autonomous weapons.

In response to Anthropic’s refusal, the president ordered federal agencies to stop using Claude and the company’s other services. The Defense Department also officially labeled it a supply chain risk, a designation typically reserved for entities based in US adversaries like China that threaten national security. In addition, department secretary Pete Hegseth warned companies that if they want to work with the government, they must sever ties with Anthropic. The AI company challenged the designation in court, calling it unlawful and in violation of free speech and its rights to due process. It also asked the court to pause the ban while the lawsuit is ongoing.

In a court filing, the Defense Department said giving Anthropic continued access to its warfighting infrastructure would “introduce unacceptable risk” to its supply chains. But Judge Rita F. Lin of the District Court for the Northern District of California said the measures the government took “appear designed to punish Anthropic.”

Lin wrote in her decision that it seems Anthropic is being punished for criticizing the government in the press. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she continued. The judge also said that the supply chain risk designation is contrary to law, arbitrary and capricious. She added that the government argued that Anthropic showed its subversive tendencies by “questioning” the use of its technology. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,” she wrote.

Anthropic told The New York Times that it’s “grateful to the court for moving swiftly” and that it’s now focused on “working productively with the government to ensure all Americans benefit from safe, reliable AI.” The company’s lawsuit is still ongoing, and the court has yet to issue its final decision. Judge Lin said, however, that Anthropic “has shown a likelihood of success on its First Amendment claim.”

This article originally appeared on Engadget at https://www.engadget.com/ai/court-temporarily-blocks-us-government-from-labeling-anthropic-as-a-supply-chain-risk-083857528.html?src=rss

Sanders and Ocasio-Cortez introduce a bill to pause US data center construction

File this one under "things that might have a shot after the midterms." On Wednesday, Senator Bernie Sanders (I-VT) and Rep. Alexandria Ocasio-Cortez (D-NY) introduced the Artificial Intelligence Data Center Moratorium Act. The bill would require an immediate pause on data center construction until specific new regulations are passed.

The legislation aims to address the fact that AI is advancing faster than Washington's regulatory response (essentially nonexistent) can keep pace. Despite its benefits, the technology poses grave threats to the job market and the environment. Rapidly advancing deepfakes could soon leave people unable to tell truth from fiction. (That is, more than online propaganda already has.) AI also makes mass surveillance easier than ever, potentially giving unelected tech leaders unfettered control over society.

"Last year alone, AI was responsible for over 54,000 layoffs nationwide," Rep. Ocasio-Cortez said in a press conference. "And when we talk about those jobs, it's not just a number. These are industries. These are communities. These are families... All of this harm has occurred not in spite of, but because of, the absence of federal legislation to regulate AI."

AI data centers on display at the Mobile World Congress 2026 (MWC) at the Fira de Barcelona. (Davide Bonaldo/SOPA Images/LightRocket via Getty Images)

The bill would mandate an immediate pause not only on new data center construction but also on upgrades to existing facilities. The moratorium would be lifted only after one or more laws were passed to provide federal oversight of AI products.

First, AI products would need to be proven safe for humanity. (That includes not just physical safety, but also areas like civil rights, privacy and public health.) The wealth AI generates would need to be shared with the American people, not just the billionaire tech bros pulling the strings. Protections would need to be in place to safeguard against mass unemployment. (Increasingly, companies are flat-out admitting that their layoffs are due to AI automation.)

The legislation would also require future data centers to be environmentally safe. They would need to avoid increasing electricity or other utility bills for Americans. AI data centers would have to create union jobs "with strong labor standards." Communities affected by them would be empowered to approve or reject their construction or upgrades. And no government subsidies could be provided for them.

"A moratorium will give us time," Sen. Sanders said. "Time to understand the risks. Time to protect working families. Time to defend our democracy. And time to ensure that technology works for all of us, not just the few."

Sen. Bernie Sanders, I-Vt., and Rep. Alexandria Ocasio-Cortez, D-N.Y., announce the Artificial Intelligence Data Center Moratorium Act in the US Capitol on Wednesday, March 25, 2026. (Tom Williams/CQ-Roll Call, Inc via Getty Images)

On the one hand, these could be popular proposals. In a December poll, 60 percent of Americans — including majorities of Democrats, Republicans and independents — said they supported more AI regulation.

However, in Washington's current environment, well, don’t get your hopes up. AI companies are pouring enormous sums of money into campaigns for both political parties. The industry spent at least $83 million in federal elections last year — and that was an off-year without national elections. And of course, anti-regulatory Republicans currently control the presidency, both chambers of Congress and (essentially) the Supreme Court.

So, fat chance it goes anywhere right now. But depending on how the 2026 midterms (and beyond) shake out… who knows? One can dream, anyway.

This article originally appeared on Engadget at https://www.engadget.com/ai/sanders-and-ocasio-cortez-introduce-a-bill-to-pause-us-data-center-construction-174451974.html?src=rss

The US bans all new foreign-made network routers

The Federal Communications Commission released a notice today designating any consumer routers manufactured outside the US as a security risk. The rule states that new foreign-made models of network routers will land on the Covered List, a set of communications equipment deemed to pose an unacceptable risk to national security. Previously purchased routers can still be used, and retailers can still sell models that were approved under prior FCC policies. In an exception to the usual rule, routers included on the Covered List can continue to receive updates at least through March 1, 2027, though that date could be extended.

The move stems from a goal in the White House's 2025 national security strategy that reads: "the United States must never be dependent on any outside power for core components—from raw materials to parts to finished products—necessary to the nation’s defense or economy." The notice from the FCC states that companies can apply for conditional approval for new products from the Department of War or the Department of Homeland Security. However, that requires the businesses to provide a plan for shifting at least some of their manufacturing to the US in order to receive that conditional approval. 

Few, if any, brands known for consumer-grade routers currently build products stateside. This sweeping provision seems likely to face legal challenges from, and cause confusion for, the many companies with production facilities overseas. In addition to Chinese tech giants like TP-Link, US companies will also be affected: Netgear, Eero and Google Nest are all headquartered domestically but manufacture in Asia. At least some of that manufacturing happens in regions like Taiwan that have historically been on good terms with the US. Until the sector sorts out this new restriction, don't expect to see any new router models on store shelves.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-us-bans-all-new-foreign-made-network-routers-223622966.html?src=rss

The White House proposes new AI policy framework that supersedes state laws

The White House has announced a new AI policy framework that calls for Congress to craft federal regulation overruling state AI laws. The Trump administration has made multiple attempts to override more restrictive state-level AI regulation, but has so far failed, most notably during the passage of the “One Big Beautiful Bill.”

The framework focuses on a variety of topics, covering everything from child privacy to the use of AI in the workforce. “Importantly, this framework can succeed only if it is applied uniformly across the United States,” the White House writes. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”

In terms of child privacy protections, the framework asks Congress to require companies to provide tools like “screen time, content exposure and account controls” while also affirming that “existing child privacy protections apply to AI systems,” including limits on how data is collected and used for AI training. The framework also carves out an exception, saying states should be allowed to enforce “their own generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI.”

The energy use and environmental impact of AI infrastructure is an ongoing concern, but the White House’s policy proposals are primarily worried about the cost of data centers. The framework suggests federal AI regulation should make sure that higher electricity costs aren’t passed on to people living near data centers, while streamlining the process for permitting AI infrastructure construction, so companies can pursue “on-site and behind-the-meter power generation.” The framework also calls for fewer restrictions on the software side of AI development, proposing “regulatory sandboxes for AI applications” and asking Congress to “provide resources to make federal datasets accessible to industry and academia in AI-ready formats.”

While a recent AI bill from Senator Marsha Blackburn (R-Tenn.) attempts to eliminate Section 230, a piece of a larger law that says platforms can’t be held responsible for the speech they host, the framework appears to propose the opposite. “Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel or alter content based on partisan or ideological agendas,” the White House writes. The framework is similarly hands-off when it comes to copyright and the use of intellectual property to train AI. “Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws,” the White House writes, it supports the issue being settled in court rather than by legislation. The White House does, though, think Congress should “consider enabling licensing frameworks” so IP holders can bargain for compensation from AI providers.

The clincher in the White House’s proposal is the idea that federal regulation should preempt state law, specifically so that states don’t “regulate AI development,” don’t “unduly burden American’s use of AI for activity that would be lawful if performed without AI” and don’t punish AI companies “for a third party’s unlawful conduct involving their models.” The idea that AI companies aren’t liable for the illegal or harmful uses of their products is particularly problematic because it lies at the heart of multiple intersecting issues with AI right now, including its use to generate sexually explicit images of children and its alleged role in the suicides of users.

Ultimately, though, the framework might be too contradictory to be useful, Samir Jain, the Vice President of Policy for the Center for Democracy and Technology, writes in a statement to Engadget:

The White House’s high-level AI framework contains some sound statements of principles, but its usefulness to lawmakers is limited by its internal contradictions and failure to grapple with key tensions between various approaches to important topics like kids’ online safety. It rightly says that the government should not coerce AI companies to ban or alter content based on ‘partisan or ideological agendas,’ yet the Administration’s ‘woke AI’ Executive Order this summer does exactly that. On preemption, the framework asserts that states should not be permitted to regulate AI development, but at the same time rightly notes that federal law should not undermine states’ traditional powers to enforce their own laws against AI developers. States are currently leading the fight to protect Americans from harms that AI systems can create, and Congress has twice correctly decided not to pursue broad preemption.

President Donald Trump has attempted to take an active role in how AI is developed and regulated in the US, with mixed results, primarily because, as Jain notes, Congress has been unwilling to strip states of the right to regulate the technology on their own terms. Without that, it’s hard to say how much of the framework will actually make it into federal law.

This article originally appeared on Engadget at https://www.engadget.com/ai/the-white-house-proposes-new-ai-policy-framework-that-supersedes-state-laws-192251995.html?src=rss

Senator Blackburn introduces the first draft of a federal AI bill

The White House has been promising a set of national rules to guide artificial intelligence since late last year, and today Sen. Marsha Blackburn (R-Tenn.) fired the first volley. The senator shared a discussion draft for codifying the executive order signed by President Donald Trump in December calling for an AI bill. Her stated goal is a policy that "protects children, creators, conservatives and communities from harm."

Blackburn has called for tougher policies for AI safety, and one of the core messages in this discussion draft is that it "places a duty of care on AI developers in the design, development and operation of AI platforms to prevent and mitigate foreseeable harm to users." It also draws a line on the many copyright infringement questions raised by creative industries: "an AI model's unauthorized reproduction, copying, or processing of copyrighted works for the purpose of training, fine-tuning, developing, or creating AI does not constitute fair use under the Copyright Act." 

Some of the other notable provisions are:

  • Requires covered online platforms, including social media platforms, to implement tools and safeguards to protect users under the age of 17 against online harms.

  • Protects the voice and visual likenesses of individuals and creators from the proliferation of digital replicas without their consent.

  • Sets new federal transparency guidelines for marking, authenticating and detecting AI-generated content.

  • Requires certain companies and federal agencies to issue quarterly reports to the U.S. Department of Labor (DOL) on AI-related job effects, including layoffs and job displacement.

It includes ending Section 230, marking the latest attempt to retire a law that has been questioned as a possible loophole for AI companies to escape liability when their tools cause harm. While AI critics might see positive signs here, remember that this is just the initial version of the framework. Lawmakers will likely spend a lot of time negotiating over the eventual result, which may be notably de-fanged from its current state. It could wind up with a lot more requirements echoing this Republican complaint: "Combats the consistent pattern of bias against conservative figures demonstrated by AI systems by requiring third-party audits to prevent discrimination based on political affiliation." Despite the claims of suppression and censorship, we’ve consistently seen this conservative argument to be false — or at the very least misleading.

This article originally appeared on Engadget at https://www.engadget.com/ai/senator-blackburn-introduces-the-first-draft-of-a-federal-ai-bill-202509852.html?src=rss

Amazon will reportedly cut its USPS shipments by at least two-thirds

A recent change in how the US Postal Service handles shipping partners appears to have forced Amazon to make alternative plans. The company reportedly plans to cut the number of packages it ships through USPS by at least two-thirds later this year. It says the decision came after USPS ended negotiations “at the eleventh hour” in favor of a new bidding process.

On Tuesday, the Wall Street Journal reported that Amazon plans to reduce the shipments it hands off to USPS. Last year, the company accounted for nearly 15 percent of the Postal Service’s package deliveries. Cutting that by at least two-thirds diminishes one of the USPS’s most reliable sources of revenue. In fiscal 2025, the agency reported a net loss of $9 billion.

Amazon’s current contract with USPS ends on September 30. In a public response to the WSJ story, the company said it notified USPS in October 2025 that it would need to complete a new deal by December. “You can't add capacity for hundreds of millions of packages overnight — it requires major capital investment, long-term infrastructure planning, hiring, and logistics coordination,” Amazon wrote.

According to Amazon, USPS then pulled the plug on negotiations at the last second. “We negotiated with [USPS] in good faith for more than a year to reach a deal that would bring them billions in revenue and believed we were heading toward an agreement,” Amazon wrote in a statement. “Our goal was to increase our volumes with USPS, not reduce them — until USPS abruptly walked away at the eleventh hour in December.”

Postmaster General David Steiner speaks at an event marking the 250th anniversary of the postal service's founding, July 23, 2025, in Washington. (AP Photo/Cliff Owen, File)

That’s when Postmaster General David Steiner implemented a new bidding process for last-mile deliveries, replacing a long-established one where USPS negotiated with shipping partners individually. He described the move as “a fair bidding process that enables the marketplace to find the best mix of local shipping attributes for the best volume-driven pricing.” Steiner was appointed to the post in May 2025, following the departure of former head Louis DeJoy.

Amazon said it submitted a bid in February using the new system but hasn’t heard back. “This creates significant uncertainty for our long-term network planning,” the company said. “Despite this, we participated in good faith and submitted a bid in February 2026. We've received no response.”

USPS plans to announce the bidding results in Q2 2026. Contracts are expected to be finalized by Q3. Despite apparently moving forward with the contingency plan, Amazon said it’s still “ready to continue this partnership.”

As for Postmaster Steiner, he spent Tuesday asking Congress to loosen USPS regulations and let him raise prices. Warning that the agency will “run out of cash” in about a year, he told a House subcommittee that he wants to raise the agency’s current $15 billion debt cap. He also asked for the ability to increase postage prices and reform its retiree pension obligations.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/amazon-will-reportedly-cut-its-usps-shipments-by-at-least-two-thirds-200915702.html?src=rss