FAA opens up real world testing for air taxi startups

US regulators have approved eight pilot programs across 26 states that will allow Archer, Joby and other eVTOL companies to finally start testing aircraft this summer, according to a US Department of Transportation (DoT) press release. The programs will let those manufacturers run trials for use cases like urban air taxi services, regional passenger transportation, cargo, emergency medical operations and autonomous flight technology.

The new projects were made possible by the White House's Advanced Air Mobility and eVTOL Integration Pilot Program (e-IPP), approved last year to get certification for such aircraft moving again after being stuck in the mud for years. "By safely testing the deployment of these futuristic air taxis and other AAM vehicles, we can fundamentally improve how the traveling public and products move," US Transportation Secretary Sean Duffy said at the time.

Other FAA aircraft partners include Beta, Electra, Elroy Air, Wisk, Ampaire and Reliable Robotics. Key pilot programs were approved for the Texas, Utah, Pennsylvania, Louisiana and North Carolina Departments of Transportation, along with the Port Authority of New York and New Jersey and the City of Albuquerque. We've already glimpsed some of the ideas, like Archer's plan to use air taxis between New York's major airports and city heliports.

A number of eVTOL startups have launched in recent years, but so far none of the aircraft have received "type certificates" for carrying passengers or other commercial purposes. Archer and Joby are the farthest along in that process, having been granted the FAA's final airworthiness criteria, the last step before full approval.

The delays are mostly about safety and about integrating eVTOL aircraft into existing aviation traffic flows. "The gap isn't technical capability anymore. It's regulatory synchronization," the FAA's Kalea Texeira said last year on LinkedIn. "[That includes factors like] vertiports. Energy supply chains. Part 135 [commercial] integration. Pilot training frameworks that match the aircraft timeline." In the same post, Texeira added that Joby wouldn't certify until mid-2027 at the earliest, with Archer following in 2028.

The new program could help accelerate plane-makers' plans. In a YouTube video, Beta CEO Kyle Clark said selection for the program will help his company start operations a year earlier than it previously expected. Archer, meanwhile, compared the program to robotaxi testing and said it will help build trust with the public for its Midnight aircraft. "This is the clearest sign yet... that bringing air taxis to market in the United States is a real priority," said Archer CEO Adam Goldstein.

This article originally appeared on Engadget at https://www.engadget.com/transportation/faa-opens-up-real-world-testing-for-air-taxi-startups-112219316.html?src=rss

UK government delays AI copyright rules amid artist outcry

The UK government is working on a controversial data bill that would allow AI companies like Google and OpenAI to train their models on copyrighted materials without consent. However, following a two-month consultation, it looks like passage of the law will be delayed. "Copyright is going to be kicked down the road," a person with knowledge of the matter told The Financial Times.

Stakeholder responses during the consultation period weren't favorable to any of the government's proposed ideas for the use of copyrighted materials, the FT's sources said. There's no expectation now that an AI bill will be part of the King's Speech set for May this year.

As a result, Ministers have decided to go back to the drawing board and spend more time exploring other options. The House of Lords Communications and Digital Committee called on the government to develop a licensing-first regime "underpinned by robust transparency that safeguards creators' livelihoods while supporting sustainable AI growth."

The government's preferred position on the bill (also argued by tech giants like Google) has been that copyright holders need to formally opt out if they don't want their materials used to train AI models. However, publishers, filmmakers, musicians and others have said that this would be impractical and an existential threat to the UK's creative industries.

The House of Lords took the side of artists and introduced an amendment that would require tech companies to disclose which copyright-protected works were used to train AI models. That addition, however, was blocked by the UK's House of Commons in May last year.

The UK's majority Labour government — already under fire for its handling of the economy — has taken hits from publishers, musicians, authors and other creative groups over the proposed law. Elton John called the government "absolute losers" while Paul McCartney said that AI has its uses but "it shouldn't rip creative people off." McCartney and other artists were part of a "silent album" meant to show the impact of IP theft by AI.

Baroness Beeban Kidron from the House of Lords has also ripped the government over the AI bill. "Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it," she said last year. "It's astonishing that a Labour government would abandon the labor force of an entire section."

This article originally appeared on Engadget at https://www.engadget.com/ai/uk-government-delays-ai-copyright-rules-amid-artist-outcry-113937154.html?src=rss

Anthropic says it will challenge Defense Department’s supply chain risk designation in court

In a new blog post, Anthropic CEO Dario Amodei confirmed that the company received a letter from the Defense Department officially labeling it a supply chain risk. He said he doesn’t “believe this action is legally sound,” and that his company sees “no choice” but to challenge it in court. Hours before Amodei published the post, the Pentagon announced that it notified the company that its “products are deemed a supply chain risk, effective immediately.”

If you’ll recall, the Defense Department (called the Department of War under the current administration) threatened to give the company the designation typically reserved for firms from adversaries like China if it didn’t agree to remove its safeguards over mass surveillance and autonomous weapons. President Trump then ordered federal agencies to stop using Anthropic’s tech.

Amodei explained that the designation has a narrow scope, because it only applies to the government itself. That is why the general public, and even Defense Department contractors, can still use Anthropic’s Claude chatbot and its AI technologies. Microsoft told CNBC that it will continue using Claude after its lawyers concluded that it could keep working with Anthropic on non-defense-related projects.

The CEO also said that his company has had “productive conversations” with the department over the past few days. He said the two sides were looking at ways to serve the Pentagon that adhere to Anthropic's two exceptions, namely that its technology not be used for mass surveillance or for the development of fully autonomous weapons, and at ways to “ensure a smooth transition if that is not possible.” That confirms reports that Anthropic is back in talks with the agency in an effort to reach a new deal. In addition, he apologized for a leaked internal memo, in which he reportedly said that OpenAI’s messaging about its own deal with the department is “just straight up lies.”

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-says-it-will-challenge-defense-departments-supply-chain-risk-designation-in-court-054459618.html?src=rss

COPPA 2.0 passes the Senate again, unanimously this time

Today the US Senate unanimously passed proposed legislation known as COPPA 2.0. This measure, fully named the Children and Teens’ Online Privacy Protection Act, aims to create new protections for younger users online, such as blocking platforms from collecting their personal data without consent. 

COPPA 2.0 is a modernized take on the Children’s Online Privacy Protection Act of 1998, attempting to address recent changes in common online activities, like targeted advertising, that could prove harmful to minors. Lawmakers have made several attempts to get this bipartisan bill through. While it has made varying amounts of headway in the Senate, none of the COPPA 2.0 bills to date have gotten past the House of Representatives. Industry groups such as NetChoice have previously opposed COPPA 2.0 and other measures around minors' online activity such as KOSA, the Kids Online Safety Act. NetChoice members include Google, YouTube, Meta, Reddit, Discord, TikTok and X. Google specifically has since changed its stance to support COPPA 2.0, however.

"This bill expands the current law protecting our kids online to ensure companies cannot collect personal information from anyone under the age of 17," Senate Democratic Leader Chuck Schumer (D-NY) said in a statement about the latest result. "This is a big step forward for protecting our kids. We hope the House can join us. They haven’t thus far."

Meanwhile, there has been a bigger push both domestically and internationally toward restrictions on when and how younger people engage online. Several states — Utah, California and Washington to name a few — have enacted laws requiring some level of age verification, either to access mature content online or to use social media apps at all. Many of these efforts have raised concerns about privacy, namely where and how people's personal information is stored and protected. COPPA 2.0 might wind up benefiting from those privacy debates, since it emphasizes giving teens and parents ways to keep their data from being used against them, rather than asking adults to give up data in order to use the internet as usual.

Update, March 6 2026, 11:38AM ET: Article updated with additional context on Google.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/coppa-20-passes-the-senate-again-unanimously-this-time-215044656.html?src=rss

Trump orders federal agencies to drop Anthropic services amid Pentagon feud

President Donald Trump has ordered all US government agencies to stop using Claude and other Anthropic services, escalating an already volatile feud between the Department of Defense and the company over AI safeguards. Taking to Truth Social on Friday afternoon, the president said there would be a six-month phase-out period for federal agencies, including the Defense Department, to migrate off of Anthropic's products.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” the president wrote. “Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”  

Before today, US Defense Secretary Pete Hegseth had threatened to label Anthropic a “supply chain risk” if it did not agree to withdraw safeguards that insist Claude not be used for mass surveillance against Americans or in fully autonomous weapons. In a post on X published after President Trump’s statement, Hegseth said he was “directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

Anthropic did not immediately respond to Engadget's request for comment. Earlier in the day, a spokesperson for the company said the contract Anthropic received after CEO Dario Amodei outlined the company's position made “virtually no progress” on preventing the misuses he described.

"New language framed as a compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months," the spokesperson said. "We remain ready to continue talks and committed to operational continuity for the Department and America's warfighters." 

Advocacy groups like the Center for Democracy and Technology (CDT) quickly came out against the president’s threats. “This action sets a dangerous precedent. It chills private companies’ ability to engage frankly with the government about appropriate uses of their technology, which is especially important in national security settings that so often have reduced public visibility,” said CDT President and CEO Alexandra Givens, in a statement shared with Engadget. “These threats undermine the integrity of the innovation ecosystem, distort market incentives and normalize an expansive view of executive power that should worry Americans all across the political spectrum.”

For now, it appears the AI industry is united behind Anthropic. On Friday, hundreds of Google and OpenAI employees signed an open letter urging their companies to stand in "solidarity" with the lab. According to an internal memo seen by Axios, OpenAI CEO Sam Altman said the ChatGPT maker would draw the same red line as Anthropic.  

In a blog post published late on Friday, Anthropic vowed to “challenge any supply chain risk designation in court,” and assured its customers that only work related to the Defense Department would be affected. The company's full statement is available here; an excerpt is below:

Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.

We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.

No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.

Update, February 27, 9PM ET: This story was updated twice after publication. First, at 6PM ET, to include a link to and quotes from Hegseth about the designation of Anthropic as a supply chain risk. Later, a quote from Anthropic was added, along with a link to the company’s blog post on the subject.

This article originally appeared on Engadget at https://www.engadget.com/ai/trump-orders-federal-agencies-to-drop-anthropic-services-amid-pentagon-feud-222029306.html?src=rss

OpenAI will notify authorities of credible threats after Canada mass shooter’s second account was discovered

OpenAI has vowed to strengthen its safety protocols and to notify law enforcement of credible threats sooner in a letter addressed to Canadian authorities, according to Politico and The Washington Post. If you’ll recall, Canadian politicians summoned the company’s leaders after reports came out that it didn’t notify authorities when it banned the account owned by the Tumbler Ridge, British Columbia mass shooting suspect back in 2025. Some of OpenAI’s leaders have already met with Canadian officials, and British Columbia Premier David Eby said Sam Altman had also agreed to meet with him.

While OpenAI has yet to announce changes to its rules, Ann O’Leary, its vice president of global policy, reportedly wrote in the letter that the company will tweak its detection systems so that they can better prevent banned users from coming back to the platform. Apparently, after OpenAI banned the shooter’s original account due to “potential warnings of committing real-world violence,” the perpetrator was able to create another account. The company only discovered the second account after the shooter’s name was released, and it has since notified authorities.

Further, OpenAI will now notify authorities if it detects “imminent and credible” threats in ChatGPT conversations, even if the user doesn’t reveal “a target, means, and timing of planned violence.” O’Leary explained that if the new rules had been in effect when the shooter’s account was banned in 2025, the company would have notified the police. OpenAI will also establish a point of contact for Canadian law enforcement so it can quickly share information with authorities when needed.

The Canadian government sees OpenAI’s decision not to report the shooter’s original account as a failure. It threatened to regulate AI chatbots in the country if their creators cannot show that they have proper safeguards to protect their users. It’s unclear at the moment if OpenAI also plans to roll out the same changes in the US and elsewhere in the world.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-notify-authorities-of-credible-threats-after-canada-mass-shooters-second-account-was-discovered-112706548.html?src=rss

US website ‘freedom.gov’ will allow Europeans to view hate speech and other blocked content

The US State Department is building a web portal where Europeans and anyone else can see online content banned by their governments, according to Reuters. It was supposed to launch at the Munich Security Conference last month, but some State Department officials reportedly voiced concerns about the project. The portal will be hosted on freedom.gov, which currently just shows the image above. “Freedom is Coming,” the homepage reads. “Information is power. Reclaim your human right to free expression. Get Ready.”

Reuters says officials discussed making a virtual private network function available on the portal and making visitors’ traffic appear to originate from the US, so they could see anything otherwise unavailable to them. While it’s a State Department project, The Guardian has traced the domain to the Cybersecurity and Infrastructure Security Agency (CISA), a component of the US Department of Homeland Security. DHS also administers Immigration and Customs Enforcement (ICE).

The project could drive the wedge between the US and its European allies even deeper. European authorities don’t usually order broad censorship that prevents their citizens from accessing large parts of the internet. Typically, they only order the blocking of hate speech, terrorist propaganda, disinformation and anything else illegal under the EU’s Digital Services Act or the UK’s Online Safety Act.

“If the Trump administration is alleging that they’re gonna be bypassing content bans, what they’re gonna be helping users access in Europe is essentially hate speech, pornography, and child sexual abuse material,” Nina Jankowicz, who served as the executive director of Homeland Security’s Disinformation Governance Board, told The Guardian. The board was very short-lived and was disbanded a few months after it was formed, following complaints by Republican lawmakers that it would impinge on people’s rights to free speech.

When asked about the project, the State Department said it didn’t have a program specifically meant to circumvent censorship in Europe. But a spokesperson added: “Digital freedom is a priority for the State Department, however, and that includes the proliferation of privacy and censorship-circumvention technologies like VPNs.”

This article originally appeared on Engadget at https://www.engadget.com/big-tech/us-website-freedomgov-will-allow-europeans-to-view-hate-speech-and-other-blocked-content-130000014.html?src=rss

Homeland Security has reportedly sent out hundreds of subpoenas to identify ICE critics online

The Department of Homeland Security (DHS) has reportedly been asking tech companies for information on accounts posting anti-ICE sentiments. According to The New York Times, DHS has sent hundreds of administrative subpoenas to Google, Reddit, Discord and Meta over the past few months. Homeland Security asked the companies for names, email addresses, telephone numbers and any other identifying detail for accounts that have criticized the US Immigration and Customs Enforcement agency or have reported the location of its agents. Google, Meta and Reddit have complied with some of the requests.

Administrative subpoenas are different from warrants and are issued by the DHS itself. The Times says they were rarely used in the past and were mostly sent to companies for investigations of serious crimes, such as child trafficking. Apparently, though, the government has ramped up their use in the past year. “It’s a whole other level of frequency and lack of accountability,” Steve Loney, a senior supervising attorney for the ACLU, told the publication.

Companies can choose whether to comply with the authorities or not, and some of them give the subject of a subpoena up to 14 days to fight it in court. Google told The Times that its review process for government requests is “designed to protect user privacy while meeting [its] legal obligations” and that it informs users when their accounts have been subpoenaed unless it has been legally ordered not to or in exceptional circumstances. “We review every legal demand and push back against those that are overbroad,” the company said.

Some of the subpoenaed accounts belong to users posting about ICE activity in Montgomery County, Pennsylvania on Facebook and Instagram in English and Spanish. The DHS asked Meta for their names and details on September 11, and the users were notified about it on October 3. They were told that if Meta didn’t receive documentation within 10 days showing they were fighting the subpoena in court, it would give Homeland Security the information requested. The ACLU filed a motion in court on the users' behalf, arguing that the DHS is using administrative subpoenas as a tool to suppress the speech of people it disagrees with.

In late January, Meta started blocking links to ICE List, a website that lists thousands of ICE and Border Patrol agents’ names. A few days ago, House Judiciary Committee member Jamie Raskin (D-MD) also asked Apple and Google to turn over all their communication with the US Department of Justice to investigate the removal of ICE-tracking apps from their respective app stores.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/homeland-security-has-reportedly-sent-out-hundreds-of-subpoenas-to-identify-ice-critics-online-135245457.html?src=rss

New York lawmakers introduce bill that aims to halt data center development for three years

On Friday, New York State Senators Liz Krueger and Kristen Gonzalez introduced a bill that would stop the issuance of permits for new data centers for at least three years and ninety days to give time for impact assessments and to update regulations. The bill would require the Department of Environmental Conservation and the Public Service Commission to issue impact statements and reports during the pause, along with any new orders or regulations they deem necessary to minimize data centers' impacts on the environment and on consumers in New York.

The bill would require these departments to study data centers' water, electricity and gas usage, and their impact on the rates of these resources, among other things. The bill, citing a Bloomberg analysis, notes that, "Nationally, household electricity rates increased 13 percent in 2025, largely driven by the development of data centers." New York is the sixth state this year to introduce a bill aiming to put the brakes on data centers, following in the footsteps of Georgia, Maryland, Oklahoma, Vermont and Virginia, according to Wired. It's still very much in the early stages, and is now with the Senate Environmental Conservation Committee for consideration. 

This article originally appeared on Engadget at https://www.engadget.com/big-tech/new-york-lawmakers-introduce-bill-that-aims-to-halt-data-center-development-for-three-years-224005266.html?src=rss

The State Department is scrubbing its X accounts of all posts from before Trump’s second term

The State Department is wiping the post history of its X accounts and making it so you'll have to file a Freedom of Information Act request if you want to access any of the content it removed, according to NPR. The publication reports that the State Department is removing all posts from before President Trump's current term — a move that affects several accounts associated with the department, including those for US embassies, and posts from the Biden and Obama administrations. Posts from Trump's first term will be taken down too. 

Unlike how past administrations have handled the removal of social media content and the transition of accounts, these posts won't be kept in a public archive. A spokesperson for the State Department confirmed this to NPR, and said the move is meant "to limit confusion on U.S government policy and to speak with one voice to advance the President, Secretary, and Administration's goals and messaging. It will preserve history while promoting the present." The spokesperson also called the X accounts "one of our most powerful tools for advancing the America First goals." 

The Trump administration has been purging information from government websites since the president took office last year. Just this week, the CIA unexpectedly took down its World Factbook, a global reference guide that's been available on the internet since 1997.

This article originally appeared on Engadget at https://www.engadget.com/social-media/the-state-department-is-scrubbing-its-x-accounts-of-all-posts-from-before-trumps-second-term-205515745.html?src=rss