The UK’s antitrust regulator will formally investigate Alphabet’s $2.3 billion Anthropic investment

The UK’s competition regulator is probing Alphabet’s investment in AI startup Anthropic. After opening public comments this summer, the Competition and Markets Authority (CMA) said on Thursday it has “sufficient information” to begin an initial investigation into whether Alphabet’s reported $2.3 billion investment in the Claude AI chatbot maker harms competition in UK markets.

The CMA breaks its merger probes into two stages: a preliminary scan to determine whether there’s enough evidence to dig deeper, and an optional second phase in which the regulator gathers as much evidence as possible before ultimately deciding on a regulatory outcome.

The probe will formally kick off on Friday. By December 19, the CMA will choose whether to move to a phase 2 investigation.

Google told Engadget that Anthropic isn’t locked into its cloud services. “Google is committed to building the most open and innovative AI ecosystem in the world,” a company spokesperson wrote in an email. “Anthropic is free to use multiple cloud providers and does, and we don't demand exclusive tech rights.” Engadget also reached out to the CMA for comment, and we’ll update this story if we hear back.

TechCrunch notes that Alphabet reportedly invested $300 million in Anthropic in early 2023. Later that year, it was said to back the AI startup with an additional $2 billion. Situations like this can be classified as a “quasi-merger,” where deep-pocketed tech companies essentially take control of emerging startups through strategic investments and hiring founders and technical workers.

Amazon has invested even more in Anthropic: a whopping $4 billion. After an initial public comment period, the CMA declined to investigate that investment last month. The CMA said Amazon avoided Alphabet’s fate at least in part because of its current rules: Anthropic’s UK turnover didn’t exceed £70 million, and the two parties didn’t combine to account for 25 percent or more of the region’s supply (in this case, AI LLMs and chatbots).
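The two jurisdictional tests mentioned above can be sketched as a simple check. This is a simplified illustration of the UK merger-review thresholds as described in this article, not legal guidance; the function name and structure are our own:

```python
def cma_has_jurisdiction(uk_turnover_gbp: float, combined_share: float) -> bool:
    """Simplified sketch of the CMA's merger-review thresholds.

    uk_turnover_gbp: the target company's UK turnover, in pounds
    combined_share: the parties' combined share of supply (0.0 to 1.0)

    The CMA can review a deal if either test is met.
    """
    TURNOVER_TEST_GBP = 70_000_000   # target's UK turnover exceeds £70 million
    SHARE_OF_SUPPLY_TEST = 0.25      # combined share of supply is 25% or more
    return uk_turnover_gbp > TURNOVER_TEST_GBP or combined_share >= SHARE_OF_SUPPLY_TEST

# Per the CMA's reasoning on Amazon/Anthropic, neither test was met:
print(cma_has_jurisdiction(uk_turnover_gbp=50_000_000, combined_share=0.10))  # False
```

On this reading, the Amazon deal fell outside both thresholds, which is why that probe ended at the public-comment stage.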

Although the CMA hasn’t specified, something about Alphabet’s $2.3 billion Anthropic investment warranted a deeper dive. Of course, Google’s Gemini competes with Claude, and both companies make large language models they provide to small businesses and enterprise customers.

Update, October 25, 2024, 11:10AM ET: This story has been updated to add a quote from a Google representative.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-uks-antitrust-regulator-will-formally-investigate-alphabets-23-billion-anthropic-investment-171043846.html?src=rss

X updates its privacy policy to allow third parties to train AI models with its data

X is updating its privacy policy with new language that allows it to provide users’ data to third-party “collaborators” in order to train AI models. The new policy, which takes effect November 15, 2024, would seem to open the door to Reddit-like arrangements in which outside companies can pay to license data from X.

The updated policy shared by X includes a new section titled “third-party collaborators.”

Depending on your settings, or if you decide to share your data, we may share or disclose your information with third parties. If you do not opt out, in some instances the recipients of the information may use it for their own independent purposes in addition to those stated in X’s Privacy Policy, including, for example, to train their artificial intelligence models, whether generative or otherwise.

While the policy mentions the ability to opt out, it’s not clear how users would actually do so. As TechCrunch notes, the policy points to users’ settings menu, but there doesn’t appear to be a control for opting out of data sharing. The policy doesn’t go into effect until next month, though, so there’s still a chance that could change. X didn’t respond to a request for comment.

If X were to begin licensing its data to other companies, it could open up a significant new revenue stream for the social media company, which has seen waning interest from major advertisers.

In addition to the privacy policy, X is also updating its terms of service with stricter penalties for entities caught “scraping” large numbers of tweets. In a section titled “liquidated damages,” the company states that anyone viewing or accessing more than a million posts a day will be subject to a penalty of $15,000.

Protecting our users’ data and our system resources is important to us. You further agree that, to the extent permitted by applicable law, if you violate the Terms, or you induce or facilitate others to do so, in addition to all other legal remedies available to us, you will be jointly and severally liable to us for liquidated damages as follows for requesting, viewing, or accessing more than 1,000,000 posts (including reply posts, video posts, image posts, and any other posts) in any 24-hour period - $15,000 USD per 1,000,000 posts.
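Read literally, the clause works out to a simple fee schedule. Here is a sketch of one plausible reading; the rounding behavior (per whole million accessed) is our assumption, since the terms don't spell it out:

```python
def liquidated_damages(posts_accessed_24h: int) -> int:
    """One plausible reading of X's liquidated-damages clause:
    $15,000 per 1,000,000 posts, owed only once access exceeds
    1,000,000 posts in a 24-hour period. Charging per whole million
    (rounding down) is our assumption; the terms don't specify."""
    THRESHOLD = 1_000_000
    RATE_USD = 15_000
    if posts_accessed_24h <= THRESHOLD:
        return 0
    return RATE_USD * (posts_accessed_24h // THRESHOLD)

print(liquidated_damages(500_000))    # 0 -- under the threshold
print(liquidated_damages(3_000_000))  # 45000
```

Under this reading, a scraper pulling three million posts in a day would owe $45,000, though how X would actually calculate and enforce the penalty remains untested.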

X owner Elon Musk has previously railed against “scraping.” Last year, the company temporarily blocked people from viewing tweets while logged out, in a move Musk attributed to fending off scrapers. He also moved X’s API behind a paywall, which has drastically hindered researchers’ ability to study what’s happening on the platform. He’s also used allegations of “scraping” to justify lawsuits against organizations that have attempted to study hate speech and other issues on the platform.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-updates-its-privacy-policy-to-allow-third-parties-to-train-ai-models-with-its-data-234207599.html?src=rss

YouTube is testing a new version of its Premium Lite subscription

YouTube is testing a revamp of its Premium Lite subscription tier. User screenshots made the rounds on social media this week, and a Google rep confirmed to multiple outlets that the plan is being tested in Australia, Germany and Thailand. This new version would have "limited ads," which the fine print describes as most videos being ad-free, "but you may see video ads on music content and Shorts, and non-interruptive ads when you search and browse."

The original Premium Lite subscription began testing in Europe in 2021, but it only lasted a few years, with the video platform eliminating the option in October 2023. The plan's only benefit was removing all ads; it didn't offer the offline or background viewing options of the regular Premium offering.

We were able to confirm that the pricing model in Australia is $9 a month for Premium Lite, compared with $17 a month for full Premium access. That's in line with the costs from the original Lite, which were about half the rate of a regular plan. With the current costs of a YouTube subscription — $14 a month for an individual or $23 a month for the family option — having a mid-tier choice could certainly be appealing.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-is-testing-a-new-version-of-its-premium-lite-subscription-220050877.html?src=rss

FCC now requires georouting for wireless calls to 988, the National Suicide Prevention Hotline

The Federal Communications Commission has passed rules that will require all wireless calls to the 988 Lifeline to be georouted. Geographic routing ensures that calls to the National Suicide Prevention Hotline for intervention services are routed based on where the call is placed rather than on the caller's area code and exchange.

Once the rules take effect, national providers will have 30 days to implement georouting for these calls. Smaller, non-national providers have a timeline of 24 months to comply. The agency also issued a proposal that the same georouting policy be applied to texts sent to 988.

The FCC has taken several steps to expand the reach of the 988 Lifeline over the past few years. After voting to make the three-digit number the shortcut for reaching the National Suicide Prevention Hotline in 2020, the agency expanded the service to include text support in 2021. T-Mobile was one of the first telecoms to activate 988 for customers to access mental health services.

If you are struggling and need someone to listen, please call 988. The full number is 1-800-273-8255 (1-800-273-TALK), or you can reach the Lifeline via webchat.

This article originally appeared on Engadget at https://www.engadget.com/mobile/fcc-now-requires-georouting-for-wireless-calls-to-988-the-national-suicide-prevention-hotline-192030468.html?src=rss

Two Sudanese brothers accused of launching a dangerous series of DDoS attacks

Newly unsealed grand jury documents revealed that two Sudanese nationals allegedly attempted to launch thousands of distributed denial of service (DDoS) attacks on systems across the world. The documents allege that these hacks aimed to cause serious financial and technical harm to government entities and companies, and even physical harm in some cases.

The US Department of Justice (DoJ) unsealed charges against Ahmed Salah Yousif Omer and Alaa Salah Yusuuf Omer that resulted in federal grand jury indictments. The two are allegedly connected to more than 35,000 DDoS attacks against hundreds of organizations, websites and networks, carried out through the cybercrime group Anonymous Sudan as both a “hacktivism” scheme and a for-profit cyberattack service.

Even though Anonymous Sudan claimed to be an activist group, the pair also held some companies’ and entities’ systems for ransom, charging rates as high as $1,700 per month.

Both face indictments for their roles in the coordinated cyberattacks, including one count each of conspiracy to damage protected computers. Ahmed also faces three additional counts of damaging protected computers and could receive a statutory maximum sentence of life in federal prison, according to court records filed last June in the US District Court for the Central District of California.

The brothers’ activities date back to early 2023. The two used a distributed cloud attack tool (DCAT) referred to as “Skynet Botnet” in order to “conduct destructive DDoS attacks and publicly claim credit for them,” according to a DoJ statement. Ahmed posted a message on Anonymous Sudan’s Telegram channel, “The United States must be prepared, it will be a very big attack, like what we did in Israel, we will do in the United States ‘soon.’”

One of the indictments listed 145 “overt acts” against organizations and entities in the US, the European Union, Israel, Sudan and the United Arab Emirates (UAE). The Skynet Botnet attacks attempted to disrupt services and networks at airports, software networks and companies including Cloudflare, X, PayPal and Microsoft, causing outages for Outlook and OneDrive in June of last year. The attacks also targeted state and federal government agencies and websites, including the Federal Bureau of Investigation (FBI), the Pentagon and the DoJ, as well as hospitals. One major attack on Cedars-Sinai Hospital in Los Angeles slowed health care services as patients were diverted to other hospitals; that attack led to the hacking charges against Ahmed that carry a potential life sentence.

“3 hours+ and still holding,” Ahmed posted on Telegram in February, “they're trying desperately to fix it but to no avail Bomb our hospitals in Gaza, we shut down yours too, eye for eye...”

FBI special agents gathered evidence of the pair’s illegal activities including logs showing that they sold access to Skynet Botnet to more than 100 customers to carry out attacks against various victims who worked with investigators including Cloudflare, Crowdstrike, Digital Ocean, Google, PayPal and others.

Several Amazon Web Services (AWS) clients were among Anonymous Sudan’s victims as part of the hacking-for-hire scheme, according to court records and an AWS statement. AWS security teams worked with FBI cybercrime investigators to track the attacks back to “an array of cloud-based servers," many of which were based in the US. The discovery helped the FBI determine that the Skynet Botnet attacks came from a DCAT rather than a traditional botnet, with the DDoS traffic forwarded to victims through cloud-based servers and open proxy resolvers.

Perhaps the group’s most brazen and dangerous attack took place in April 2023 and targeted Israel’s rocket alert system, Red Alert. The mobile app provides real-time updates on missile attacks and security threats. The DDoS attacks attempted to disrupt some of Red Alert’s internet domains. Ahmed claimed responsibility for the Red Alert attacks on Telegram, along with similar DDoS strikes on Israeli utilities and the Jerusalem Post news website.

“This group’s attacks were callous and brazen — the defendants went so far as to attack hospitals providing emergency and urgent care to patients,” US Attorney Martin Estrada said in a released statement. “My office is committed to safeguarding our nation’s infrastructure and the people who use it, and we will hold cyber criminals accountable for the grave harm they cause.”

Update, October 16, 7:25PM ET: This article was modified after publishing to make clear that AWS clients, rather than AWS itself, were the target of Anonymous Sudan.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/two-sudanese-brothers-accused-of-launching-a-dangerous-series-of-ddos-attacks-215638291.html?src=rss

FCC launches a formal inquiry into why broadband data caps are terrible

The Federal Communications Commission announced that it will open a renewed investigation into broadband data caps and how they impact both consumer experience and company competition. The FCC is soliciting stories from consumers about their experiences with capped broadband service. The agency also opened a formal Notice of Inquiry to collect public comment that will further inform its actions around broadband data caps.

"Restricting consumers' data can cut off small businesses from their customers, slap fees on low-income families and prevent people with disabilities from using the tools they rely on to communicate," FCC Chairwoman Jessica Rosenworcel said. "As the nation’s leading agency on communications, it’s our duty to dig deeper into these practices and make sure that consumers are put first."

This topic has been a hot one of late, and the FCC launched another notice of inquiry about the practice of capping Internet access last year. In April 2024, the agency successfully required that ISPs offer clear information labels on their service plans, detailing additional fees, discounts, and upload and download speeds. Data caps could also come under additional fire as the FCC attempts to restore net neutrality rules, which classify broadband as an essential service. Returning net neutrality has not been a simple journey, however, as the agency faces legal challenges from broadband providers.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/fcc-launches-a-formal-inquiry-into-why-broadband-data-caps-are-terrible-182129773.html?src=rss

The New York Times tells Perplexity to stop using its content

One of the nation’s largest newspapers is targeting another AI firm for reusing its content without permission. The Wall Street Journal reported that the New York Times sent a cease and desist letter to Perplexity, the AI startup backed by Amazon founder Jeff Bezos. The letter states that Perplexity’s use of the New York Times’ content to create answers and summaries with its AI portal violates copyright law, and that Perplexity and its backers “have been unjustly enriched by using, without authorizations, The Times’ expressive, carefully written and researched, and edited journalism without a license.” It gives the startup until October 30 to respond before the Times takes legal action.

Perplexity CEO Aravind Srinivas told the Journal that the company isn’t ignoring the notice, adding that it is “very much interested in working with every single publisher, including the New York Times.”

This isn’t the first time an AI company has earned the wrath of the New York Times’ legal team. The newspaper took OpenAI and Microsoft to court over claims that both used articles from its pages to train their AI software. The suit alleges the two companies used more than 66 million records across its archives to train their AI models, representing “almost a century’s worth of copyrighted content.”

Amazon Web Services’ cloud division also opened an investigation into Perplexity AI over the summer. Wired reported that a machine hosted on AWS and operated by Perplexity visited Condé Nast publications and properties hundreds of times to scan for content to use in its responses and data collections.

This article originally appeared on Engadget at https://www.engadget.com/ai/the-new-york-times-tells-perplexity-to-stop-using-its-content-175853131.html?src=rss

You’ll soon be able to safely and easily move your passkeys between password managers

By now, most people know passkeys offer a better way to protect their online credentials than passwords. Nearly every tech company of note, including Apple, Google and Microsoft, supports the protocol. Moreover, despite a slow start, adoption has dramatically increased in the last year, with, for instance, password manager Dashlane recently noting a 400% increase in use since the beginning of 2024. Amazon, meanwhile, said today more than 175 million of its customers are using passkeys to protect their accounts. Still, not everyone knows they don’t need to rely on passwords to protect their online identity, and transferring your passkeys between platforms isn’t as easy as it should be.

That’s why the FIDO Alliance, the coalition of organizations behind the technology, is working to make it easier to do just that. On Tuesday, the group published draft specifications for the Credential Exchange Protocol (CXP) and Credential Exchange Format (CXF), two standards that, once adopted by the industry, will allow you to safely and seamlessly move all your passkeys and passwords between different apps and platforms. 

With some of the biggest names in the industry collaborating on the effort (including Apple, Google, 1Password, Bitwarden, and Dashlane, to name a few), there’s a very good chance we’re looking at a future where your current password manager — particularly if you use one of the first-party ones offered by Apple or Google — won’t be the reason you can’t switch platforms. And that’s a very good thing.

“It is critical that users can choose the credential management platform they prefer, and switch credential providers securely and without burden,” the FIDO Alliance said. “Until now, there has been no standard for the secure movement of credentials, and often the movement of passwords or other credentials has been done in the clear.”
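The "in the clear" problem the FIDO Alliance describes is easy to picture: today, moving credentials between managers often means exporting a plaintext CSV. A standard exchange format would define a structured, encrypted bundle instead. The sketch below is purely illustrative and is NOT the actual CXF schema — the real draft specification is published by the FIDO Alliance and may differ entirely; every field name here is our invention:

```python
import json

# Hypothetical, simplified credential-export bundle, to illustrate the
# idea of a standard exchange format. NOT the real CXF schema.
bundle = {
    "version": "example-0.1",
    "exporter": "ExamplePasswordManager",  # hypothetical source app
    "items": [
        {
            "type": "passkey",
            "rpId": "example.com",   # relying party the passkey belongs to
            "username": "alice",
            # Under CXP, key material would move encrypted end to end,
            # never in the clear — shown here only as a placeholder.
            "wrappedKey": "<encrypted-key-material>",
        },
        {
            "type": "password",
            "site": "example.org",
            "username": "alice",
            "secret": "<encrypted>",
        },
    ],
}

# A receiving manager would parse the bundle and import each item.
serialized = json.dumps(bundle)
restored = json.loads(serialized)
print(restored["items"][0]["type"])  # passkey
```

The key design point is the placeholder fields: in a real CXP transfer, secrets stay encrypted for the destination app rather than ever touching disk as plaintext, which is exactly what today's CSV exports fail to do.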

The CXP and CXF standards aren’t ready for prime time just yet. The FIDO Alliance plans to collect feedback before it publishes the final set of specifications and gives its members the go-ahead to implement the technology.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/youll-soon-be-able-to-safely-and-easily-move-your-passkeys-between-password-managers-161025573.html?src=rss

China calls allegations that it infiltrated US critical infrastructure a ‘political farce’

China has denied allegations by the US government and Microsoft that a state-sponsored hacking group called Volt Typhoon has infiltrated US critical infrastructure, according to Bloomberg. The country's National Computer Virus Emergency Response Center called the claims a "political farce" orchestrated by US officials in a new report. It also reportedly cited more than 50 cybersecurity experts who agreed with the agency that there's insufficient evidence linking Volt Typhoon to the Chinese government.

Moreover, the Chinese agency said that it's the US that uses "cyber warfare forces" to penetrate networks and conduct intelligence gathering. It even accused the US of using a tool called "Marble" that can insert code strings in the Chinese and Russian languages to frame China and Russia for its activities.

Microsoft and the National Security Agency (NSA) first reported about Volt Typhoon back in May 2023. They said that the group installed surveillance malware in "critical" systems on the island of Guam and other parts of the US and has had access to those systems for at least the past five years. In February this year, the Cybersecurity and Infrastructure Security Agency (CISA), the NSA and the FBI issued an advisory warning critical infrastructure organizations that state-sponsored cyber actors from China "are seeking to pre-position themselves on IT networks for disruptive or destructive cyberattacks."

The US agencies said Volt Typhoon had infiltrated the US Department of Energy, US Environmental Protection Agency, as well as various government agencies in Australia, the UK, Canada and New Zealand. Volt Typhoon doesn't act like other cyberattackers and espionage groups do. It hasn't used the malware it installed to attack any of its targets — at least not yet. The group is "pre-positioning" itself so that it can disrupt critical infrastructure functions when it wants to, which the US government believes is "in the event of potential geopolitical tensions and/or military conflicts" with the United States.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/china-calls-allegations-that-it-infiltrated-us-critical-infrastructure-a-political-farce-120023769.html?src=rss