Alphabet no longer has a controlling stake in its life sciences business Verily

Alphabet's life sciences business Verily is restructuring and raising money as a new corporate entity. Verily announced that with its $300 million investment round, it will change from an LLC to a corporation and rename itself Verily Health Inc. As a result, Alphabet now has a minority stake rather than a controlling one in the business. 

Like seemingly every other tech company, Verily's next chapter will be focused on AI. "From research to care, our customers need solutions that bring the best of clinical and scientific rigor together with AI to deliver the next generation of healthcare - one that is as precise as it is personal," chairman and CEO Stephen Gillett said.

Google Life Sciences was renamed Verily in 2015, around the same time as Google also rebranded to Alphabet. It has worked on a wide range of projects over the years, such as using eye scans to predict heart disease and an opioid addiction center. In 2025, it closed its medical device division, a move that may have signaled its shift toward AI.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/alphabet-no-longer-has-a-controlling-stake-in-its-life-sciences-business-verily-221718631.html?src=rss

Don’t be surprised that the FBI is buying your location data

The FBI has confirmed to the Senate that it is once again buying data that can be used to track the locations of US citizens. That may surprise people who thought the precedent set in Carpenter v. United States prohibited it. But that case examined whether it was legal for law enforcement to obtain location data from mobile networks without a warrant; here, the FBI and other agencies have found a way to skirt the Fourth Amendment entirely. Over the last few years, they have taken to simply buying location data from the same companies that power the enormous online advertising ecosystem.

When your phone is connected to the internet, it broadcasts information about itself, and so do the apps and platforms you use. That information includes your IP address and device type, as well as your latitude and longitude if your device has GPS. This data, known as bidstream data, alongside any third-party cookies tied to your device, enables real-time bidding (RTB), the process in which your attention is auctioned off to the highest bidder in the milliseconds after you’ve loaded a page. To make those auctions work, these platforms need to know as much about you as they can.
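To make "bidstream" concrete, here is a minimal sketch of the kind of payload an OpenRTB-style bid request carries. The field names loosely follow the public OpenRTB specification, but the structure is heavily simplified and every value is invented:

```python
# Simplified, illustrative OpenRTB-style bid request (all values invented).
bid_request = {
    "id": "auction-7f3a",
    "device": {
        "ip": "203.0.113.42",                     # device IP address
        "ua": "Mozilla/5.0 (iPhone; ...)",        # user agent, reveals device type
        "geo": {"lat": 51.5074, "lon": -0.1278},  # GPS-derived coordinates
    },
    "user": {"id": "3p-cookie-or-ad-id-9b1c"},    # third-party identifier, if any
}

# Every bidder that receives the request can read the location directly,
# whether or not it wins the auction.
geo = bid_request["device"]["geo"]
print(f"device seen near ({geo['lat']}, {geo['lon']})")
```

The point is not the exact schema but that a payload like this is broadcast to every participating bidder in each auction, which is what makes the data so easy to aggregate.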

As I explained in depth back in 2021, data such as your location and IP address is broadcast over the ad networks. This information can also be aggregated, licensed or sold to data brokers, who can pair it with any “deterministic data” available. For instance, if you sign up for a platform and tell it your name, email address and annual income, that data could be licensed to a data broker. Even banks looking for new revenue streams are planning to license anonymized customer data to these companies. Data brokers can easily combine the two streams of information to build a fairly extensive picture of you as a person, and of which advertisers will be most interested in you. Unfortunately, it’s extremely difficult to opt out of this and, even if you could, it would be even more difficult to destroy the data already in circulation.

In 2018 French company Vectaury, which acted as an ad sales intermediary for mobile apps, was inspected by the French data protection regulator. Officials found the company had built a database containing the personal data of 67.6 million people without proper consent.

Data brokers don’t just harvest and hoard this data to facilitate online ad sales, however; they also license and sell their databases to others. Lawmakers believe these brokers have sold such data to rival nations looking for ways to spy on US citizens.

In January, 404Media revealed that US Immigration and Customs Enforcement (ICE) bought access to tools supplied by cybersecurity company Penlink. Specifically, it purchased access to tools named Tangles and Webloc, which can be used to surveil large numbers of people at once. The latter reportedly has the power to identify smartphones in a given area and time window, then follow them on their journeys through the day and back to their homes at night.

Given the secretive nature of its business, Penlink does not reveal much about how its tools operate. A since-removed marketing page says Webloc automatically analyzes “location based information” available in “endless digital channels from the web ecosystem.” And 404Media’s report says these tools access “commercially available smartphone location data,” supplied by third-party data brokers. Forbes reports the system can also pull together data from a variety of sources, including social media, to offer a real-time view of an event. The Texas Observer says Webloc can use this information to enable “warrantless device tracking.”

A number of other US law enforcement agencies have also purchased location data from data brokers, including the Department of Homeland Security, Customs and Border Protection, the Secret Service and the Internal Revenue Service. This isn’t just limited to government agencies, however, as anti-abortion groups did similar while targeting people visiting Planned Parenthood clinics.

The Fourth Amendment guarantees the right of the people to be protected from “unreasonable searches and seizures” made without probable cause. But, as Dori H. Rahbar wrote in the Columbia Law Review, “the Fourth Amendment does not regulate open market transactions.” Aaron X Sobel, writing in the Yale Law and Policy Review, described the practice as “end-running warrants” and urged legislators to close the loophole. The Electronic Frontier Foundation (EFF) is also pushing for legislation in the form of the Fourth Amendment Is Not For Sale Act.

It’s not likely that such legislation will be passed for a long time, and a cynic would suggest it’s not possible under the current administration. But even if it passes, it won’t address the bigger issue of the ad tech industry and its partners vacuuming up as much information about us as they can. When these companies — many of which aren’t even known to the public — are able to store up enough information on us that, if they were so motivated, they could follow our path through the day, it’s a sign something is very rotten indeed. If we’re concerned about governments having this sort of access, then we should be equally nervous about anyone else having it as well.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/dont-be-surprised-that-the-fbi-is-buying-your-location-data-182047627.html?src=rss

UK fines 4chan nearly $700,000 for failing its Online Safety Act obligations

The UK’s Ofcom has fined 4chan a total of £520,000 ($690,000) over the website’s failure to comply with the rules of the Online Safety Act 2023. The biggest chunk of that sum stems from 4chan’s failure to ensure children cannot encounter pornographic content on its website by implementing an effective age check mechanism. For that violation, the website received a penalty of £450,000 ($598,000) and an order to implement an age check system by April 2. It also carries a daily penalty of £500 ($664) until the website is compliant or until June 1, whichever comes sooner.

Ofcom also found that 4chan failed to carry out a sufficient illegal content risk assessment on its website and fined it £50,000 ($66,400) for that violation. 4chan has until April 2 to conduct a risk assessment, or it must pay an additional £200 ($266) per day. Finally, the regulator determined that 4chan failed to include provisions in its terms of service specifying how it protects users from illegal content. That carries a fine of £20,000 ($26,600), with a daily penalty of £100 ($133) from its compliance deadline of April 2 to June 1.
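The figures above add up as reported; here is a quick back-of-envelope check, assuming the April 2 and June 1 deadlines fall in 2026 (which the article implies but does not state):

```python
from datetime import date

# Fine components reported by Ofcom (figures from the article)
fines_gbp = {
    "age checks": 450_000,
    "illegal content risk assessment": 50_000,
    "terms of service provisions": 20_000,
}
total = sum(fines_gbp.values())
print(total)  # 520000, matching the £520,000 headline figure

# Daily penalties run from the April 2 compliance deadline to June 1 at the latest
window_days = (date(2026, 6, 1) - date(2026, 4, 2)).days
max_daily_exposure = window_days * (500 + 200 + 100)
print(window_days, max_daily_exposure)  # 60 days, up to £48,000 more
```

So continued non-compliance through June 1 would add up to £48,000 on top of the headline fine.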

The regulator started investigating 4chan, famous for its anonymous and largely unmoderated message boards, in June 2025 to determine whether it was failing to meet its obligations under the law. In October, Ofcom announced its decisions for some of the investigations it had opened. It slapped 4chan with a £20,000 ($26,600) fine for ignoring its requests for a copy of the website’s illegal harms risk assessment and for information about its qualifying worldwide revenue. The regulator has confirmed to Engadget that 4chan has yet to pay that earlier fine, which also accrued daily penalty fees for 60 days.

This article originally appeared on Engadget at https://www.engadget.com/social-media/uk-fines-4chan-nearly-700000-for-failing-its-online-safety-act-obligations-115106264.html?src=rss

A Meta agentic AI sparked a security incident by acting without permission

The Information reported that an AI agent at Meta took unauthorized action that led an employee to create a security breach at the social media company last week. According to the publication, one employee used an in-house agentic AI to analyze a query posted by a second employee on an internal forum. The AI agent then posted a response with advice to the second employee, even though the first person did not direct it to do so.

The second employee took the agent's recommended action, sparking a domino effect that led to some engineers having access to Meta systems that they shouldn't have permission to see. A representative from the company confirmed the incident to The Information and said that "no user data was mishandled." Meta's internal report indicated that there were unspecified additional issues that led to the breach. A source said that there was no evidence that anyone took advantage of the sudden access or that the data was made public during the two hours when the security breach was active. However, that may be the result of dumb luck more than anything else. 

Many tech leaders and companies have touted the benefits of artificial intelligence, but this is just the latest incident in which human employees have lost control of an AI agent. Amazon Web Services experienced a 13-hour outage earlier this year that also (apparently coincidentally) involved its Kiro agentic AI coding tool. Moltbook, the social network for AI agents recently acquired by Meta, had a security flaw that exposed user information thanks to an oversight in the vibe-coded platform.

This article originally appeared on Engadget at https://www.engadget.com/ai/a-meta-agentic-ai-sparked-a-security-incident-by-acting-without-permission-224013384.html?src=rss

OpenAI’s adult mode reportedly won’t generate pornographic audio, images or video

OpenAI's forthcoming "adult mode" will allow users to engage in lewd conversations with ChatGPT, but not use the chatbot to generate explicit images, audio or video. In response to reporting from The Wall Street Journal, an OpenAI spokesperson characterized the upcoming release as capable of producing smut rather than pornography.

OpenAI CEO Sam Altman first floated the idea of allowing people to use ChatGPT for erotica last October, saying the company wanted to "treat adult users like adults." OpenAI originally planned to release adult mode at the start of 2026. Since then, the company has pushed back the feature a handful of times, with the most recent delay coming at the start of March so that OpenAI could "focus on work that is a higher priority for more users."

Through The Journal's reporting, we're learning OpenAI forged ahead with work on adult mode despite reservations from its council on wellbeing and AI. The group of eight researchers and experts was reportedly unanimous in warning the company that AI-generated erotica could lead to people developing an unhealthy emotional dependence on ChatGPT, and that underage users would almost certainly find ways to access the feature. According to The Journal, one council member, citing cases where people have taken their own lives after becoming attached to ChatGPT, said the company was at risk of creating a "sexy suicide coach."

Those concerns appear to have been well-founded. At one point, the company's age verification technology was misidentifying underage users as adults about 12 percent of the time, according to The Journal. At OpenAI's scale, with around 100 million teens using ChatGPT every week, that error rate would have translated to millions of minors accessing erotic chats. OpenAI told The Journal its prediction algorithm performs to industry standards, adding no such system will ever be completely foolproof.
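The "millions of minors" figure follows from simple arithmetic; this back-of-envelope sketch uses the rounded numbers The Journal reported, not OpenAI's internal figures:

```python
# Back-of-envelope estimate from the article's rounded figures.
weekly_teen_users = 100_000_000  # ~100 million teens use ChatGPT weekly, per The Journal
error_rate_pct = 12              # minors misidentified as adults ~12 percent of the time

# Integer math avoids floating-point rounding on the large product
misclassified = weekly_teen_users * error_rate_pct // 100
print(f"{misclassified:,}")  # 12,000,000 minors potentially misidentified each week
```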

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-adult-mode-reportedly-wont-generate-pornographic-audio-images-or-video-150744035.html?src=rss

Meta is reportedly planning to cut up to 20 percent of its staff in upcoming layoffs

Meta could be preparing for one of the largest layoffs in its history, according to a Reuters report. The tech giant is planning to cut about 20 percent of its workforce, the outlet's sources say, though neither a date nor the exact number of layoffs has been finalized.

However, Reuters reported that Meta's top executives have told "other senior leaders" to start "planning how to pare back." In its latest financial report, the company's employee headcount was 78,865 as of December 31, 2025, while revenue reached nearly $60 billion for the fourth quarter and more than $200 billion for the entire year. A Meta spokesperson told Reuters that this was "speculative reporting about theoretical approaches."

Meta is no stranger to major layoffs. Earlier this year, it targeted about 1,000 employees within the Reality Labs division, which is responsible for the company's virtual reality and metaverse efforts. Early last year, Meta laid off about five percent of its workforce, following a smaller round of firings that same month. Meanwhile, the company has been spending heavily to acquire AI startups like Moltbook, a social network designed for AI agents, and Manus, a startup focused on AI agents for task automation.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/meta-is-reportedly-planning-to-cut-up-to-20-percent-of-its-staff-in-upcoming-layoffs-160812304.html?src=rss

Meta is killing end-to-end encryption in Instagram DMs

Meta is killing end-to-end encryption in Instagram DMs. The feature will "no longer be supported after May 8, 2026," the company wrote in an update on its support page. Unlike WhatsApp, Meta never made encryption available to all Instagram users and it was never a default setting. Instead, users in "some areas" had the ability to opt in to encryption on a per-chat basis.

In a statement, a Meta spokesperson said the feature was being retired due to low adoption. "Very few people were opting in to end-to-end encrypted messaging in DMs, so we're removing this option from Instagram in the coming months," the spokesperson said. "Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp."

Interestingly, Meta's statement doesn't mention the status of encryption on Messenger. The company began turning on end-to-end encryption as a default setting in 2023 after years of work on the feature. A support page for Messenger currently states that the company "is in the process of securing personal messages with end-to-end encryption by default."

Meta's approach to encrypted messaging has changed several times over the years. It started encrypting WhatsApp chats in 2016. In 2019, Mark Zuckerberg outlined a "privacy-focused" revamp of the company's apps, saying at the time that "implementing end-to-end encryption for all private communications is the right thing to do." In 2021, the company's head of safety said that Meta was delaying its encryption work until 2023 in order to create stronger safety features.  

Meta’s use of encryption has been repeatedly criticized by law enforcement and some child safety organizations that say the feature makes it harder to catch predators who target children on social media. Recently, the topic has been raised numerous times during a trial in New Mexico over child safety. Internal documents that have surfaced as part of the trial show Meta executives and researchers debating the trade-offs between safety and privacy as it relates to encryption. 

In testimony that was broadcast during the trial, Zuckerberg said that safety issues were "a large part of the reason why it took so long" to bring encryption to Messenger. "There's been debate about this, but I think the majority of folks, from people who use our products to people who are involved in security overall, believe that strong encryption is positive," he said.


This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-killing-end-to-end-encryption-in-instagram-dms-195207421.html?src=rss

X could be breaching US sanctions on Iran, watchdog warns

The newly verified X account for Iran's supreme leader could be putting the company on the wrong side of US sanctions, according to a watchdog group. The Tech Transparency Project, which last month published a report on X granting premium perks to sanctioned officials in Iran, now says that the verified account for the country's new leader raises fresh questions about the issue. 

The TTP notes that the X account for Iran's new supreme leader, Mojtaba Khamenei, appears to be paying for an X premium subscription despite being on the US government's list of sanctioned individuals since 2019. As the group points out, the Iran-based account was created this month and currently bears a blue checkmark, which typically indicates the account holder is paying for a subscription. 

The account belonging to Mojtaba Khamenei has been boosted by other state-linked accounts in Iran, including the one that previously belonged to Khamenei's father. That account carries a gray checkmark, which indicates it belongs to a verified government official. Verified accounts on X are rewarded with extra visibility on the platform, along with other perks. The younger Khamenei's verified account has already gained more than 20,000 new followers in the hours since TTP first posted about it.

"The new Supreme Leader's account is just the latest account for a sanctioned entity apparently paying X for premium services," TTP director Katie Paul said in a statement to Engadget. "TTP has identified dozens of accounts, many linked to designated terrorists, that subscribed to X premium over the past three years. What's more concerning than the blatant disregard for U.S. sanctions law is the fact that Musk's companies have a contract with the Pentagon while X is actively profiting from U.S. adversaries."

As Paul notes, this isn't the first time TTP has raised questions about whether X is running afoul of US sanctions via its premium service. In 2024, the group published a report noting that X was accepting paid verification from more than two dozen sanctioned individuals and groups. The company said at the time that it had a "a robust and secure approach in place for our monetization features." 

X didn't respond to a request for comment. But in the hours after Engadget reached out about Khamenei’s account, the blue checkmark was removed. The company also removed blue checks from a handful of Iran-based accounts flagged by TTP last month following reporting from Wired.

Update, March 13, 2026, 9:08AM PT: This story was updated to reflect changes made to Khamenei’s account following publication.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-could-be-breaching-us-sanctions-on-iran-watchdog-warns-213550284.html?src=rss

Ukraine allows allies to train AI models on its battlefield data

Ukraine's four-year war with Russia has made it the world leader in battlefield drone technology. One byproduct of that is that the data it collects has become one of the country's most valuable assets. On Thursday, Ukraine played that card, saying it will begin sharing its battlefield data with allies to train drone AI software.

"In modern warfare, we must defeat Russia in every technological cycle," Ukraine Defense Minister Mykhailo Fedorov wrote on Telegram (translated from Ukrainian). "Artificial intelligence is one of the key areas of this competition."

Fedorov previewed the move when he took his post in January. At the time, the tech-savvy cabinet member pledged to "more actively" bring allies into projects. Foreign allies and companies have sought access to the country's data as, for better or worse, AI increasingly becomes an integral element of warfare.

Fedorov says Ukraine has a platform that will safely train partners' AI models without providing sensitive data. The system is said to provide continually updating datasets, including large volumes of photos and videos.

"For us, this is the next step in the development of win-win cooperation," Fedorov wrote. "Partners get the opportunity to train their AI models on real data from modern warfare. And [for] Ukraine: faster development of autonomous systems and new technological solutions for the front."

Last year, Ukrainian President Volodymyr Zelenskyy warned global leaders of a dangerous escalation tied to drone tech and AI. “We are now living through the most destructive arms race in human history,” he said at a meeting of the UN General Assembly in September. However, given the ugly realities in his country, Zelenskyy reiterated his need for armaments. “The only guarantee of security is friends and weapons,” he said.

This article originally appeared on Engadget at https://www.engadget.com/ai/ukraine-allows-allies-to-train-ai-models-on-its-battlefield-data-165104853.html?src=rss

Google’s GFiber internet business is merging with Astound Broadband

Google has announced that GFiber is merging with Astound Broadband, in an agreement that sees Astound’s parent company Stonepeak become the majority owner, with Alphabet retaining a minority stake.

No financial specifics were detailed in the press release, but the new combined business will be an independent provider led by GFiber’s executive team, which Google says will use its "expertise in high-speed fiber innovation to manage the combined network footprint." Astound already serves over one million customers across the US, and by joining forces, Google says, the two providers will be able to bring better internet access to more communities.

GFiber, formerly known as Google Fiber, has been around for nearly 15 years, and currently offers speeds of up to 8Gbps on its $150/month Edge 8 Gig plan. A 20 Gig service was expected to leave early access later in 2026.

The fiber broadband service is part of Alphabet’s "Other Bets" portfolio, which also includes Waymo, Verily, and Wing, a combined segment that recorded an operating loss of $16.8 billion in 2025, CNBC reports. The company’s deal with Stonepeak is subject to regulatory approval and is expected to close in Q4 of this year.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/googles-gfiber-internet-business-is-merging-with-astound-broadband-132832086.html?src=rss