Senator Blackburn introduces the first draft of a federal AI bill

The White House has been promising a set of national rules to guide artificial intelligence since late last year, and today Sen. Marsha Blackburn (R-Tenn.) fired the first volley. The senator shared a discussion draft for codifying the executive order signed by President Donald Trump in December calling for an AI bill. Her stated goal is a policy that "protects children, creators, conservatives and communities from harm."

Blackburn has called for tougher policies for AI safety, and one of the core messages in this discussion draft is that it "places a duty of care on AI developers in the design, development and operation of AI platforms to prevent and mitigate foreseeable harm to users." It also draws a line on the many copyright infringement questions raised by creative industries: "an AI model's unauthorized reproduction, copying, or processing of copyrighted works for the purpose of training, fine-tuning, developing, or creating AI does not constitute fair use under the Copyright Act." 

Some of the other notable provisions are:

  • Requires covered online platforms, including social media platforms, to implement tools and safeguards to protect users under the age of 17 against online harms.

  • Protects the voice and visual likenesses of individuals and creators from the proliferation of digital replicas without their consent.

  • Sets new federal transparency guidelines for marking, authenticating and detecting AI-generated content.

  • Requires certain companies and federal agencies to issue quarterly reports to the U.S. Department of Labor (DOL) on AI-related job effects, including layoffs and job displacement.

The draft also includes ending Section 230, marking the latest attempt to retire a law that critics see as a possible loophole letting AI companies escape liability when their tools cause harm. While AI critics might see positive signs here, remember that this is just the initial version of the framework. Lawmakers will likely spend a lot of time negotiating over the eventual result, which may be notably de-fanged from its current state. It could also wind up with more provisions echoing this Republican complaint: "Combats the consistent pattern of bias against conservative figures demonstrated by AI systems by requiring third-party audits to prevent discrimination based on political affiliation." Despite the claims of suppression and censorship, we’ve consistently found this conservative argument to be false, or at the very least misleading.

This article originally appeared on Engadget at https://www.engadget.com/ai/senator-blackburn-introduces-the-first-draft-of-a-federal-ai-bill-202509852.html?src=rss

The Defense Department reportedly plans to train AI models on classified military data

The Pentagon is making plans to have AI companies train versions of their models specifically for military use on classified information, according to the MIT Technology Review. If true, it wouldn’t come as a surprise, seeing as the US is aiming to become an “AI-first” warfighting force, based on the statement [PDF] released by Secretary of Defense Pete Hegseth earlier this year.

The department already uses AI models in the military: for instance, the US reportedly used Anthropic’s Claude to help with the capture of Venezuelan President Nicolás Maduro and with its attack on Iran, even after President Trump ordered federal agencies to ban the company’s technology. But models trained on actual classified data could give more accurate and detailed responses about, say, past situations that aren’t public information.

MIT Tech Review says the department is looking to conduct the training in a secure data center that’s authorized to host classified government projects. The Pentagon would train copies of AI models, but the Pentagon would remain the sole owner of any data used for training. In rare cases, someone from the AI company could be granted the appropriate security clearance to see classified information.

Aalok Mehta, who previously led AI policy efforts at Google and OpenAI, told the publication that training models on classified data carries certain risks. The concern isn’t that the information could go public, since the trained models would be versions made specifically for military purposes. However, if the same model is used across the whole Defense Department, personnel without the correct clearance level could end up getting information they weren’t supposed to have access to.

If the initiative goes forward, the department would likely train models from OpenAI and xAI, which recently signed agreements with the agency. Anthropic, which has long worked with the government, might not be part of this project: The company refused to allow its technology to be used for mass surveillance or the development of autonomous weapons, and Trump ordered all federal offices to ban it as a result.

This article originally appeared on Engadget at https://www.engadget.com/ai/the-defense-department-reportedly-plans-to-train-ai-models-on-classified-military-data-120332113.html?src=rss

Defense Department says Anthropic poses ‘unacceptable risk’ to national security

The Department of Defense said giving Anthropic continued access to its warfighting infrastructure would “introduce unacceptable risk” to its supply chains in a court filing submitted in response to the AI company’s lawsuit. If you’ll recall, Anthropic sued the government to challenge the supply chain risk designation it received for refusing to allow its model to be used for mass surveillance and the development of autonomous weapons.

In its filing, the department explained that its secretary, Pete Hegseth, had a provision incorporated into AI service contracts allowing the agency to use the companies’ technologies for any lawful purpose. Anthropic refused those terms, and the company’s behavior apparently caused the Pentagon to question whether it truly was a “trusted partner” for “highly sensitive” initiatives. “After all, AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic — in its discretion — feels that its corporate ‘red lines’ are being crossed,” the Pentagon wrote in its filing. “DoW deemed that an unacceptable risk to national security,” it added, referring to the agency as the Department of War, the Trump administration’s preferred name for it.

It was due to those concerns that President Trump ordered federal agencies to stop using Anthropic’s technology, the filing reads. The company is asking the court to issue a preliminary injunction and pause the ban while it challenges its supply chain risk designation in court. While Anthropic’s clients could continue working with the company on non-defense-related projects, the company says the label could cause it to lose billions of dollars in revenue. It’s not quite clear if Anthropic is still trying to reach a new deal with the government, as was reported before it filed its lawsuit. As The New York Times notes, Microsoft, Google and OpenAI have since filed friend-of-the-court briefs in support of Anthropic.

This article originally appeared on Engadget at https://www.engadget.com/ai/defense-department-says-anthropic-poses-unacceptable-risk-to-national-security-094328717.html?src=rss

Trump administration will reportedly get $10 billion for brokering the TikTok deal

There may have been some extra incentive for the Trump administration to get the TikTok US deal done. According to a report from The Wall Street Journal, the Trump administration is set to receive a total of $10 billion in the deal that allowed TikTok to remain in the US. The new investors who acquired stakes in the US entity of TikTok already paid a $2.5 billion fee to the administration when the deal closed in January, but WSJ's latest report noted that the group of investors would continue to make payments until the total hits $10 billion.

After a group of investors, which includes Oracle along with the Silver Lake and MGX investment firms, acquired stakes in the US-based TikTok entity, called TikTok USDS Joint Venture, the WSJ previously reported that the administration would receive a "multibillion-dollar fee" for its work on the deal. For context on the recently revealed $10 billion fee, Vice President JD Vance valued the US entity of TikTok at $14 billion.

The Trump administration has previously involved itself in major deals with other US corporations. Last year, the administration invested $8.9 billion in Intel and received a nearly 9 percent equity stake. Among other unprecedented windfalls, the Trump administration also received a Boeing 747-8 as a gift from the Qatari government in May.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/trump-administration-will-reportedly-get-10-billion-for-brokering-the-tiktok-deal-180954979.html?src=rss

BallotGuessr is Geoguessr for budding political pundits

Fancy yourself as one of those folks who stands in front of an expensive touchscreen display on a news network on election night, zooming in and out of counties while bleating about polling and voting data? If so, you might get a kick out of BallotGuessr.

This is a riff on GeoGuessr that tasks you with guessing how a county voted in the 2024 presidential election. All you have to go on to figure out the identity of each county are contextual clues from Google Street View images. You can move around the environment a bit, but unless you get lucky, you'll need to have a good sense of politics and geography to do well here.

Once you think you have an idea of where the county is, you move a slider to guess whether residents voted for the Democratic or Republican ticket and by how many points. In the daily challenge mode, you have only 30 seconds to make your guess in each of five rounds. I'm bad at it, but it's a fun take on GeoGuessr all the same.
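The margin-guessing mechanic above can be sketched as a simple scoring function. BallotGuessr's actual formula isn't public, so this is a hypothetical illustration assuming positive margins mean Republican, negative margins mean Democratic, and points fall off linearly with the size of the miss:

```python
def score_guess(guessed_margin: float, actual_margin: float, max_score: int = 100) -> int:
    """Hypothetical BallotGuessr-style scoring.

    Margins are in percentage points: positive for a Republican win,
    negative for a Democratic win. A perfect guess earns max_score;
    the score falls linearly to zero as the error approaches 100 points.
    """
    error = abs(guessed_margin - actual_margin)  # distance in percentage points
    return max(0, round(max_score * (1 - error / 100)))

# A county that went Republican by 12 points, guessed at R+20 (8-point miss):
print(score_guess(20, 12))   # 92
# A perfect guess of D+5:
print(score_guess(-5, -5))   # 100
```

Under these assumptions, guessing the wrong party by a wide margin (say R+50 when the county went D+50) scores zero, which mirrors how GeoGuessr punishes guesses on the wrong continent.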

BallotGuessr features 2,845 curated Google Street View locations from all 50 states, with a maximum of 15 locations for each county. Its creator plans to expand the game with data for the 2022 midterms and 2020 presidential election, as well as recent elections in France, Germany and the United Kingdom.

This article originally appeared on Engadget at https://www.engadget.com/gaming/ballotguessr-is-geoguessr-for-budding-political-pundits-170028894.html?src=rss

Social Security watchdog investigating claims that DOGE engineer copied its databases

The inspector general's office of the Social Security Administration is investigating allegations of a security breach by a member of the so-called Department of Government Efficiency operation spearheaded by Elon Musk. A whistleblower has claimed that a former software engineer from DOGE said he possessed two databases from the SSA, "Numident" and the "Master Death File." The person reportedly asked for help transferring the databases from a thumb drive "to his personal computer so that he could ‘sanitize’ the data before using it at [the company]," an unnamed government contractor where he is currently employed. Those databases include personal information about more than 500 million living and deceased Americans. 

The Washington Post reported that the whistleblower complaint was filed with the inspector general in January. "When The Post contacted the agency and the company in January, both said they had not heard of the complaint. Both said they subsequently looked into the allegations and did not find evidence to confirm the claims," the publication said. It is unclear why the complaint is now being investigated, and neither party offered comment this week for The Post's article. The SSA watchdog has informed members of Congress and the Government Accountability Office of its investigation.

These allegations follow a different whistleblower complaint, filed last August, about DOGE's access to and mishandling of SSA data. Charles Borges, former chief data officer at the agency, claimed that an SSA database was stored in an unsecured cloud environment. "This is absolutely the worst-case scenario," Borges told The Post of the latest claims. "There could be one or a million copies of it, and we will never know now."

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/social-security-watchdog-investigating-claims-that-doge-engineer-copied-its-databases-212722061.html?src=rss

Google to provide Pentagon with Gemini-powered AI agents

Google is rolling out Gemini AI agents to the Department of Defense's more than 3 million civilian and military employees, according to Bloomberg. The agents will initially operate on unclassified networks, with talks underway to expand them to classified and top-secret systems, according to Emil Michael, the Under Secretary of Defense for Research and Engineering.

Eight pre-built agents will automate tasks like summarizing meeting notes, building budgets and checking proposed actions against the national defense strategy. Google Vice President Jim Kelly said in a blog post on Tuesday that Defense Department personnel can also create custom agents using natural language.

Google's AI chatbot, accessible through the Pentagon's GenAI.mil portal, has been used by 1.2 million Defense Department employees for unclassified work since December, with personnel running 40 million unique prompts and uploading more than 4 million documents. Training has reportedly not kept pace with adoption, however: Only 26,000 people have completed AI training since December. Still, future sessions are fully booked, which suggests more employees are getting on board.

The expansion comes as the Pentagon rapidly broadens its AI partnerships after its standoff with Anthropic, which refused to remove guardrails against domestic surveillance and autonomous weapons from its technology. The Pentagon has since classified the American AI company as a "supply chain risk," a designation Anthropic is fighting in court. Roughly 900 Google and 100 OpenAI employees have since signed an open letter urging their employers to hold firm on the same guardrails. Google quietly altered its "AI Principles" regarding these exact uses in early February.

The Department of Defense has since struck deals with OpenAI and xAI for restricted networks. Google itself faced internal backlash over Pentagon work in 2018 when thousands of employees protested Project Maven, a program that used AI to analyze drone video feeds. It did not renew that contract but has since loosened its restrictions on military work.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-to-provide-pentagon-with-gemini-powered-ai-agents-161037444.html?src=rss

X says it suspended 800 million accounts in 2024 over spam and manipulation

X has told UK Members of Parliament (MPs) that it suspended 800 million accounts to combat state-backed campaigns on the website, according to The Guardian. Wifredo Fernández, X’s head of global government affairs, told the officials that the suspensions happened over a 12-month period in 2024 and that the accounts had violated X’s rules on platform manipulation and spam. Russia was allegedly behind most of the accounts flooding the website with spam, followed by state actors from China and Iran.

The Russian accounts were trying to “stoke division” and disseminate a “particular type of narrative” to manipulate the 2024 US presidential election, he told MPs on the foreign affairs committee during a video call. Fernández also claimed that attempts to spam and manipulate discussions on the service haven’t stopped. “There are efforts every single day to create inauthentic networks of accounts,” he said. X apparently suspended an additional “several hundred million accounts” last year as well, presumably also over foreign state-backed manipulation campaigns.

For context, Statista estimated X’s user base at 429 million in early 2024, and The Guardian says the platform has approximately 300 million monthly users worldwide.

This article originally appeared on Engadget at https://www.engadget.com/apps/x-says-it-suspended-800-million-accounts-in-2024-over-spam-and-manipulation-123000201.html?src=rss

TikTok can continue its operations in Canada after agreeing to enhanced security measures

TikTok doesn’t have to close its offices in Canada after all. The country will allow TikTok to keep its business operational after a national security review, Minister of Industry Mélanie Joly has announced. This is a complete reversal of the country’s 2024 order for TikTok to shut down its operations, which cited unspecified “national security risks” posed by the company and its China-based parent ByteDance. Canadian authorities said at the time that their decision was based on evidence collected by the country’s security and intelligence community.

As Bloomberg notes, the order was paused shortly after Mark Carney replaced Justin Trudeau as Prime Minister in early 2025. Carney was the first Canadian PM to visit China in years and had a discussion with President Xi Jinping about tariffs. Joly said TikTok will be allowed to operate in Canada with new enhancements in data security and regulatory oversight. To start with, it will have to implement privacy-enhancing technologies to reduce the risk of unauthorized access that could compromise Canadians’ personal information. It will also have to add enhanced protections for minors and ensure transparency by letting an independent third party “audit and continuously verify data access controls.”

“…this decision will protect Canadian jobs, ensuring that TikTok Canada maintains a physical presence in Canada, with commitments to invest in its cultural sector,” Joly said in a statement. “TikTok Canada will support the growth of Canadian creators, artists and cultural organizations, while strengthening the production and accessibility of Canadian cultural content in both official and Indigenous languages across the country.”

This article originally appeared on Engadget at https://www.engadget.com/apps/tiktok-can-continue-its-operations-in-canada-after-agreeing-to-enhanced-security-measures-095239399.html?src=rss

Dutch intelligence services warn of Russian hackers targeting Signal and WhatsApp

The Netherlands’ military intelligence service and domestic intelligence agency have issued a joint warning claiming that Russian hackers have launched "a large-scale global cyber campaign to gain access to Signal and WhatsApp accounts belonging to dignitaries, military personnel and civil servants." According to the Dutch alert, hackers are imitating support chatbots to trick key targets into revealing their PINs for those communication platforms, which allows the bad actors to access incoming messages.

Last year in the US, the Pentagon advised service members not to use Signal after the platform was targeted by similar phishing scams from Russian hackers. (Although the same US military leaders proved capable of creating their own security breaches without foreign interference just days prior.)

Having another national government raise concerns about Signal and WhatsApp phishing scams is yet another reminder to never provide security details or click links without verifying who is really asking for your info.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/dutch-intelligence-services-warn-of-russian-hackers-targeting-signal-and-whatsapp-203707202.html?src=rss