Biden executive order aims to stop Russia and China from buying Americans’ personal data

President Joe Biden will issue an executive order that aims to limit the mass-sale of Americans’ personal data to “countries of concern,” including Russia and China. The order specifically targets the bulk sale of geolocation, genomic, financial, biometric, health and other personally identifying information.

During a briefing with reporters, a senior administration official said that the sale of such data to these countries poses a national security risk. “Our current policies and laws leave open access to vast amounts of American sensitive personal data,” the official said. “Buying data through data brokers is currently legal in the United States, and that reflects a gap in our national security toolkit that we are working to fill with this program.”

Researchers and privacy advocates have long warned about the national security risks posed by the largely unregulated multibillion-dollar data broker industry. Last fall, researchers at Duke University reported that they were able to easily buy troves of personal and health data about US military personnel while posing as foreign agents.

Biden’s executive order attempts to address such scenarios. It bars data brokers and other companies from selling large troves of Americans’ personal information, directly or indirectly, to countries or entities in Russia, China, Iran, North Korea, Cuba and Venezuela. There are likely to be additional restrictions on companies’ ability to sell data as part of cloud service contracts, investment agreements and employment agreements.

Though the White House described the step as “the most significant executive action any President has ever taken to protect Americans’ data security,” it’s unclear how exactly enforcement of the new policies will be handled within the Justice Department. A DoJ official said the executive order would require due diligence from data brokers to vet who they are dealing with, similar to the way companies are expected to adhere to US sanctions.

As the White House points out, there are currently few regulations for the multibillion-dollar data broker industry. The order will do nothing to slow the bulk sale of Americans’ data to countries or companies not deemed to be a security risk. “President Biden continues to urge Congress to do its part and pass comprehensive bipartisan privacy legislation, especially to protect the safety of our children,” a White House statement says.

Update February 28, 2024, 3:00 PM ET: This article was modified to clarify that, while the White House says the order will be issued today, it is unclear whether it has been issued at time of writing.

This article originally appeared on Engadget at https://www.engadget.com/biden-signs-executive-order-to-stop-russia-and-china-from-buying-americans-personal-data-100029820.html?src=rss

Google is reportedly paying publishers thousands of dollars to use its AI to write stories

Google has been quietly striking deals with some publishers to use new generative AI tools to publish stories, according to a report in Adweek. The deals, reportedly worth tens of thousands of dollars a year, are apparently part of the Google News Initiative (GNI), a six-year-old program that funds media literacy projects, fact-checking tools, and other resources for newsrooms. But the move into generative AI publishing tools would be a new, and likely controversial, step for the company.

According to Adweek, the program is currently targeting a “handful” of smaller publishers. “The beta tools let under-resourced publishers create aggregated content more efficiently by indexing recently published reports generated by other organizations, like government agencies and neighboring news outlets, and then summarizing and publishing them as a new article,” Adweek reports.

In a statement to Engadget, a Google spokesperson denied the tools were being used to "re-publish" the work of other publications. "This speculation about this tool being used to re-publish other outlets’ work is inaccurate," the spokesperson said. "The experimental tool is being responsibly designed to help small, local publishers produce high quality journalism using factual content from public data sources – like a local government’s public information office or health authority. Publishers remain in full editorial control of what is ultimately published on their site."

It’s not clear exactly how much publishers are being paid under the arrangement, though Adweek says it’s a “five-figure sum” per year. In exchange, media organizations reportedly agree to publish at least three articles a day, one weekly newsletter and one monthly marketing campaign using the tools.

Of note, publishers in the program are apparently not required to disclose their use of AI, nor are the websites whose content is aggregated informed that it is being used to create AI-written stories on other sites. The AI-generated copy reportedly uses a color-coded system to indicate the reliability of each section of text to help human editors review the content before publishing.

In a statement to Adweek, Google said it was “in the early stages of exploring ideas to potentially provide AI-enabled tools to help journalists with their work.” The spokesperson added that the AI tools “are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”

It’s not clear what Google is getting out of the arrangement, though it wouldn’t be the first tech company to pay newsrooms to use proprietary tools. The arrangement bears some similarities to the deals Facebook once struck with publishers to create live video content in 2016. The social media company made headlines as it paid publishers millions of dollars to juice its nascent video platform and dozens of media outlets opted to “pivot to video” as a result.

Those deals later evaporated after Facebook discovered it had wildly miscalculated the number of views such content was getting. The social network ended its live video deals soon after and has since tweaked its algorithm to recommend less news content. The media industry’s “pivot to video” cost hundreds of journalists their jobs, by some estimates.

While the GNI program appears to be much smaller than what Facebook attempted nearly a decade ago with live video, it will likely raise fresh scrutiny over the use of generative AI tools by publishers. Publications like CNET and Sports Illustrated have been widely criticized for attempting to pass off AI-authored articles as written by human staffers.

Update February 28, 2024, 1:10 PM ET: This story has been edited to add additional information from a Google spokesperson. 

This article originally appeared on Engadget at https://www.engadget.com/google-is-reportedly-paying-publishers-thousands-of-dollars-to-use-its-ai-to-write-stories-215943624.html?src=rss

The Odysseus has become the first US spacecraft to land on the moon in 50 years

The Odysseus spacecraft made by Houston-based Intuitive Machines has successfully landed on the surface of the moon. It marks the first time a spacecraft from a private company has landed on the lunar surface, and it’s the first US-made craft to reach the moon since the Apollo missions.

Odysseus was carrying NASA instruments, which the space agency said would be used to help prepare for future crewed missions to the moon under the Artemis program. NASA confirmed the landing happened at 6:23 PM ET on February 22. The lander launched from Earth on February 15, with the help of a SpaceX Falcon 9 rocket.

According to The New York Times, there were some “technical issues with the flight” that delayed the landing for a couple of hours. Intuitive Machines CTO Tim Crain told the paper that “Odysseus is definitely on the moon and operating but it remains to be seen whether the mission can achieve its objectives.” Odysseus has a limited window of about a week to send data back to Earth before lunar night sets in and renders the solar-powered craft inoperable.

Intuitive Machines wasn’t the first private company to attempt a landing. Astrobotic made an attempt last month with its Peregrine lander, but was unsuccessful. Intuitive Machines is planning to launch two other lunar landers this year.

This article originally appeared on Engadget at https://www.engadget.com/the-odysseus-spacecraft-has-become-the-first-us-spacecraft-to-land-on-the-moon-in-50-years-010041179.html?src=rss

Reddit files for IPO and will let some longtime users buy shares

After years of speculation, Reddit has officially filed paperwork for an initial public offering on the New York Stock Exchange. The company, which plans to use RDDT as its ticker symbol, will also allow some longtime users to participate by buying shares.

In a note shared in the company’s S-1 filing with the SEC, Reddit CEO Steve Huffman said that many longtime users already feel a “deep sense of ownership” over their communities on the platform. “We want this sense of ownership to be reflected in real ownership—for our users to be our owners,” he wrote. “With this in mind, we are excited to invite the users and moderators who have contributed to Reddit to buy shares in our IPO, alongside our investors.”

The company didn’t say how many users might be able to participate, but said that eligible users would be determined based on their karma scores while “moderator contributions will be measured by membership and moderator actions.”

The filing also offers up new details about the inner workings of Reddit’s business. The company had 500 million visitors in December and has recently averaged just over 73 million “daily active unique” visitors. In 2023, the company brought in $804 million in revenue (Reddit has yet to turn a profit). The document also notes that the company is “exploring” deals with AI companies to license its content as it looks to expand its revenue in the future.

Earlier in the day, Reddit and Google announced that they had struck such a deal, reportedly valued at around $60 million a year. “We believe our growing platform data will be a key element in the training of leading large language models (“LLMs”) and serve as an additional monetization channel for Reddit,” the company writes.

This article originally appeared on Engadget at https://www.engadget.com/reddit-files-for-ipo-and-will-let-some-longtime-users-buy-shares-234127305.html?src=rss

Reddit is licensing its content to Google to help train its AI models

Google has struck a deal with Reddit that will allow the search engine maker to train its AI models on Reddit’s vast catalog of user-generated content, the two companies announced. Under the arrangement, Google will get access to Reddit’s Data API, which will help the company “better understand” content from the site.

The deal also provides Google with a valuable source of content it can use to train its AI models. “Google will now have efficient and structured access to fresher information, as well as enhanced signals that will help us better understand Reddit content and display, train on, and otherwise use it in the most accurate and relevant ways,” the company said in a statement.

Access to Reddit’s data became a hot-button issue last year when the company announced it would start charging developers to use its API. The changes resulted in the shuttering of many third-party Reddit clients, and a sitewide protest in which thousands of subreddits temporarily “went dark.” Reddit justified the changes, in part, by saying that large AI companies were scraping its data without paying. In a statement, Reddit noted that the new arrangement with Google “does not change Reddit's Data API Terms or Developer Terms” and that “API access remains free for non-commercial usage.”

The deal comes as Reddit is expected to go public in the coming weeks. Neither Google nor Reddit disclosed the terms of their arrangement, but Bloomberg reported last week that Reddit had struck a licensing deal with a “large AI company” valued at “about $60 million” a year. That amount was also confirmed by Reuters, which was first to report Google’s involvement.

This article originally appeared on Engadget at https://www.engadget.com/reddit-is-licensing-its-content-to-google-to-help-train-its-ai-models-200013007.html?src=rss

Meta’s Oversight Board will now hear appeals from Threads users, too

Meta’s Oversight Board is expanding its purview to include Threads. The group announced that Threads users will now be able to appeal Meta’s content moderation decisions, giving the independent group the ability to influence policies for Meta’s newest app.

It’s a notable expansion for the Oversight Board, which up until now has weighed in on content moderation issues related to Facebook and Instagram posts. “Having independent accountability early on for a new app such as Threads is vitally important,” board co-chair Helle Thorning-Schmidt said in a statement.

According to the Oversight Board, user appeals on Threads will function similarly to how they do on Instagram and Facebook. When users have “exhausted” Meta’s internal process, they’ll be able to request a review from the Oversight Board. Under the rules established when the board was formed, Meta is required to implement the board's decisions regarding specific posts, but isn’t obligated to adhere to its policy recommendations.

Adding Threads’ content moderation to the board’s scope underscores the growing influence of the Twitter-like app that launched last summer. Threads has already grown to 130 million users and Mark Zuckerberg has speculated that it could one day reach a billion users.

Officially, Threads has the same rules as Instagram. But Meta has already encountered some pushback from users over its policies for recommending content. Threads currently blocks search terms related to COVID-19 and other “potentially sensitive” topics. The company also raised some eyebrows when it said last week that it wouldn’t recommend accounts that post too much political content unless users choose to opt-in to such suggestions.

Regardless of whether the board ends up weighing in on those choices, it will likely be some time before Threads users see any changes as a result of its recommendations. The Oversight Board accepts only a tiny fraction of user appeals; reaching a decision can take weeks or months, and it can take many more months for Meta to change its rules in response to the board’s guidance. (The board can, in some cases, expedite the process.)

This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-will-now-hear-appeals-from-threads-users-too-130003273.html?src=rss

X let terrorist groups pay for verification, report says

X has allowed dozens of sanctioned individuals and groups to pay for its premium service, according to a new report from the Tech Transparency Project (TTP). The report raises questions about whether X is running afoul of US sanctions.

The report found 28 verified accounts belonging to people and groups the US government considers to be a national security threat. The group includes two leaders of Hezbollah, accounts associated with Houthis in Yemen and state-run media accounts from Iran and Russia. Of those, 18 of the accounts were verified after X began charging for verification last spring.

“The fact that X requires users to pay a monthly or annual fee for premium service suggests that X is engaging in financial transactions with these accounts, a potential violation of U.S. sanctions,” the report says. As the TTP points out, X’s own policies state that sanctioned individuals are prohibited from paying for premium services. Some of the accounts identified by the TTP also had ads in their replies, according to the group, “raising the possibility that they could be profiting from X’s revenue-sharing program.”

Overhauling Twitter’s verification policy was one of the most significant changes Elon Musk implemented after taking over the company. Under the new rules, anyone can get a blue checkmark by subscribing to X Premium. X doesn’t require users to submit identification, and the company has at times scrambled to shut down impersonators.

X also offers gold checkmarks to advertisers as part of its “verified organizations” tier, which starts at $200 a month. The TTP report found that accounts belonging to Iran’s Press TV and Russia’s Tinkoff Bank — both sanctioned entities — had gold checks. X has also given away gold checks to at least 10,000 companies. As the report points out, even giving away the gold badge to sanctioned groups could violate US government policies.

X didn’t immediately respond to a request for comment, but it appears that the company has removed verification from some of the accounts named in the TTP’s report. “X, formerly known as Twitter, has removed the blue check and suspended the paid subscriptions of several Iranian outlets,” Press TV tweeted from its account, which still has a gold check. The Hezbollah leaders’ accounts are also no longer verified.

In a statement shared by the company's @Safety account, X said that it was reviewing the TTP report and would "take action if necessary." 

"X has a robust and secure approach in place for our monetization features, adhering to legal obligations, along with independent screening by our payments providers," the company wrote. "Several of the accounts listed in the Tech Transparency Report are not directly named on sanction lists, while some others may have visible account check marks without receiving any services that would be subject to sanctions."

Update February 14, 2024, 4:52 PM ET: This story has been updated to include a statement from X.

This article originally appeared on Engadget at https://www.engadget.com/x-let-terrorist-groups-pay-for-verification-report-says-201254824.html?src=rss

Meta takes down Chinese Facebook accounts posing as US military families

Meta has taken down a network of fake accounts that posed as US military families and anti-war activists. The fake accounts on Facebook and Instagram originated in China and targeted US audiences, according to the company’s security researchers.

Meta detailed the takedowns in its latest report on coordinated inauthentic behavior (CIB). The cluster of fake accounts was relatively small — 33 Facebook accounts, four Instagram profiles, six pages and six groups on Facebook. The accounts posted about US aircraft carriers and other “military themes,” as well as “criticism of US foreign policy towards Taiwan and Israel and its support of Ukraine,” Meta wrote in its report.

The group also ran accounts on YouTube and Medium and shared an online petition “claiming to have been written by Americans to criticize US support for Taiwan.” The company’s researchers said the fake accounts originated in China, but didn’t attribute the effort to a specific entity or group. During a call with reporters, Meta’s global threat intelligence lead Ben Nimmo said that there has been a rise in China-based influence operations over the last year.

“The greatest change in the threat landscape,” Nimmo said, “has been this emergence of Chinese influence operations.” He noted that Meta has taken down 10 CIB networks originating in China since 2017, but that six of those takedowns came in the last year. Last summer, Meta discovered and removed an especially large network of thousands of fake accounts that attempted to spread pro-China propaganda messages on the platform.

In both cases, the fake accounts were apparently unsuccessful at spreading their message. The latest network only managed to reach about 3,000 Facebook accounts, according to Meta, and the two Instagram pages had no followers at the time they were discovered.

Still, Meta’s researchers note that attempts like this will likely continue ahead of the 2024 election and that people with large audiences should be wary of resharing unverified information. “Our threat research shows that, historically, the main way that CIB networks get through to authentic communities is when they manage to co-opt real people — politicians, journalists or influencers — and tap into their audiences,” the report says. “Reputable opinion-makers represent an attractive target and should exercise caution before amplifying information from unverified sources, particularly ahead of major elections.”

This article originally appeared on Engadget at https://www.engadget.com/meta-takes-down-chinese-facebook-accounts-posing-as-us-military-families-160059602.html?src=rss