Meta really wants you to believe social media addiction is ‘not a real thing’

Meta went to court this week in two major trials over alleged harms facilitated by its platforms. In New Mexico, the state's attorney general has accused the company of facilitating child exploitation and harming children through addictive features. In a separate case in Los Angeles, a California woman sued the company over mental health harms she says she suffered as a result of addictive design choices from Meta and others.

In both cases, Meta has disputed the idea that social media should be considered an "addiction." On the stand this week, Instagram chief Adam Mosseri said that social media isn't "clinically addictive," comparing it to being "addicted" to a Netflix show.

In opening statements in the New Mexico trial, Meta's lawyer Kevin Huff went further. He told the jury that "social media addiction is not a thing" because it's not in the Diagnostic and Statistical Manual of Mental Disorders (DSM), the handbook used by mental health professionals in the US.

"According to the American Psychiatric Association, they don't recognize the concept of social media addiction in the same way as addiction to drugs and alcohol," Huff said during opening arguments that were broadcast by Courtroom View Network. "What you see on the screen is what's called the DSM, which is basically the official manual for recognized mental disorders. The American Psychiatric Association studied this and decided that social media addiction is not a thing."

But the American Psychiatric Association (APA) has never said that social media addiction doesn't exist. The organization provides information and resources about social media addiction on its website. "Social media addiction is not currently listed as a diagnosis in the DSM-5-TR—but that does not mean it doesn’t exist," the APA said in a statement to Engadget.

Dr. Tania Moretta, a clinical psychophysiology researcher who has studied social media addiction, agrees. "The absence of a DSM classification does not mean that a behavior cannot be addictive, maladaptive or clinically significant," she told Engadget. That argument, she said, "reflects a misunderstanding" of how psychiatry professionals define and classify conditions. "Diagnostic manuals formalize scientific consensus; they do not define the boundaries of legitimate scientific inquiry. Many maladaptive behaviors and clinically significant symptom patterns are studied and treated well before receiving official classification."

Meta's critics have long claimed that the company has profited from addictive features that hook children and teens. The trials in Los Angeles and New Mexico are just the start of several court battles over the issue. The social media company is also facing a high-profile trial with school districts in June, and lawsuits from 41 state attorneys general.

Moretta said that social media addiction is a field that requires more study, but that there is already evidence that it can have harmful effects on some people. "At present, from a scientific perspective, there is documented evidence that social media use disorder is associated with both psychophysiological alterations, including changes in reward/motivational and inhibitory/regulatory systems, and clinically significant negative impacts on functioning (e.g., sleep disturbances, psychological distress, impairment in social, academic, or occupational domains)," she said. "The key question is not whether all social media use is addictive, but whether a subset of users exhibits patterns consistent with behavioral addiction models and whether specific platform design features may exacerbate vulnerability in predisposed individuals."

Both trials are ongoing and expected to last the next several weeks. In New Mexico, jurors have already heard from former employee turned whistleblower Arturo Bejar and former exec Brian Boland, both of whom have publicly criticized the company for not prioritizing safety. In Los Angeles, Mosseri's testimony has wrapped up, but Meta CEO Mark Zuckerberg is expected to testify next week. The trials will also feature extensive internal documents from Meta, including details about the company's own research into the mental health impacts of its platform on young people.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-really-wants-you-to-believe-social-media-addiction-is-not-a-real-thing-130000257.html?src=rss

Meta turned Threads algorithm complaints into an official feature

Threads users have been complaining about its recommendation algorithm pretty much since the beginning of the platform. At some point, this turned into a meme, with users writing posts jokingly addressed to the algorithm in which they requested to see more posts about the topics they're actually interested in.

Now, Meta is turning those "Dear algorithm" posts into an official feature that it says will allow Threads users to tune their recommendations in real time. With the change, users can write a post that begins with "dear algo" to adjust their preferences. For example, you could write "dear algo, show me more posts about cute cats." You can also ask to see fewer posts about topics you don't want to see, like "dear algo, stop showing me posts about sick pets."

You can track your requests to the algorithm in the app's settings in order to revisit them or remove them. You can also repost other users' "dear algo" posts to have those topics reflected in your feed. Importantly, "dear algo" requests are temporary and only last for three days at a time, which Meta says is meant to keep recommendations feeling fresher and more flexible.
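Meta hasn't said how these posts are parsed under the hood, but the behavior it describes boils down to a simple pattern: detect the trigger phrase, pull out a topic and a direction, and let the preference expire after three days. Here's a minimal sketch of that idea; every name and parsing rule in it is hypothetical rather than Meta's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# All names and parsing rules below are hypothetical -- Meta hasn't
# published how "dear algo" posts are interpreted.

REQUEST_TTL = timedelta(days=3)  # requests expire after three days

@dataclass
class AlgoRequest:
    topic: str
    boost: bool  # True = show more of this topic, False = show less
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def active(self) -> bool:
        return datetime.now(timezone.utc) - self.created_at < REQUEST_TTL

def parse_dear_algo(post_text: str) -> AlgoRequest | None:
    """Turn a post like 'dear algo, show me more posts about cute cats'
    into a temporary recommendation preference."""
    text = post_text.strip().lower()
    if not text.startswith("dear algo"):
        return None  # an ordinary post, not a request
    body = text.split(",", 1)[-1].strip()
    if "about" not in body:
        return None
    topic = body.split("about", 1)[1].strip().rstrip(".")
    negative = any(w in body for w in ("fewer", "stop", "less"))
    return AlgoRequest(topic=topic, boost=not negative)

print(parse_dear_algo("dear algo, show me more posts about cute cats"))
print(parse_dear_algo("dear algo, stop showing me posts about sick pets"))
```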

The rollout of the feature follows a limited test late last year. Now, "dear algo" posts will work for Threads users in the US, UK, Australia and New Zealand with more countries coming "soon."

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-turned-threads-algorithm-complaints-into-an-official-feature-180000236.html?src=rss

X’s latest Community Notes experiment allows AI to write the first draft

X is experimenting with a new way for AI to write Community Notes. The company is testing a "collaborative notes" feature that allows human writers to request an AI-written Community Note.

It's not the first time the platform has experimented with AI in Community Notes. The company started a pilot program last year to allow developers to create dedicated AI note writers. X’s Keith Coleman tells me that AI writers are “prolific” and that one has contributed more than 1,000 notes that were rated as helpful by other contributors. But the latest experiment sounds like a more streamlined process.

According to the company, when an existing Community Note contributor requests a note on a post, the request "now also kicks off creation of a Collaborative Note." Contributors can then rate the note or suggest improvements. "Collaborative Notes can update over time as suggestions and ratings come in," X says. "When considering an update, the system reviews new input from contributors to make the note as helpful as possible, then decides whether the new version is a meaningful improvement."
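X hasn't shared the code behind this process, but the flow it describes is easy to picture: the AI drafts, contributors rate and suggest, and the system adopts a revision only when it looks like a clear improvement. A toy sketch of that loop, with every function a stand-in rather than X's real pipeline:

```python
import random

# Every function here is a stand-in -- X hasn't published this pipeline.

def draft_note(post: str) -> str:
    return f"Context for: {post}"  # the AI writer's first draft

def revise(note: str, suggestions: list[str]) -> str:
    return note + " | " + "; ".join(suggestions)

def helpfulness(note: str) -> float:
    return random.uniform(0, 1)  # stand-in for aggregate contributor ratings

def collaborative_note(post: str, rounds: list[list[str]]) -> str:
    note = draft_note(post)
    for suggestions in rounds:  # new ratings and suggestions arrive over time
        candidate = revise(note, suggestions)
        # publish the revision only if it looks like a meaningful improvement
        if helpfulness(candidate) > helpfulness(note) + 0.1:
            note = candidate
    return note

print(collaborative_note("viral claim", [["add a source"], ["correct the date"]]))
```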

According to Coleman, who oversees Community Notes, the AI writer for collaborative notes will be Grok. That would be in line with how a lot of X users currently invoke the AI on threads with replies like "@grok is this true?" But Coleman says that “if it works well, it could make sense to bring the suggestion-feedback loop to the AI note writer API as well.”

Community Notes has often been criticized for moving too slowly, so adding AI into the mix could help speed up the process of getting notes published. Coleman also noted that the update provides "a new way to make models smarter in the process (continuous learning from community feedback)." On the other hand, we don't have to look very far to find examples of Grok losing touch with reality, or worse.

According to X, only Community Note Contributors with a "top writer" status will be able to initiate a collaborative note to start, though it expects to expand availability "over time."

Update, February 5, 2026, 2:42PM PT: This post was updated to reflect additional information from X’s Keith Coleman.

This article originally appeared on Engadget at https://www.engadget.com/social-media/xs-latest-community-notes-experiment-allows-ai-to-write-the-first-draft-210605597.html?src=rss

What the hell is Moltbook, the social network for AI agents?

Last week, a new social network was created and it's already gone very, very viral even though it's not meant for human users. I'm talking, of course, about Moltbook, a Reddit-like platform that's populated entirely by AI agents. 

The platform has gained a lot of attention since it was created last week, thanks to a number of wild posts from AI agents that have gone extremely viral among AI enthusiasts on X. But while Moltbook seemingly came out of nowhere, there's a lot more going on than the sci-fi-sounding scenarios some social media commentators might have you believe.

Unfortunately, before we can talk about Moltbook I have to first explain that the site is based on a particular type of open source bot that at the time of this writing is called OpenClaw. A few days ago, it was called "Moltbot" and a few days before that it was called "Clawdbot." The name changes were prompted by Anthropic, the AI company behind Claude, whose lawyers apparently thought the "Clawd" name was a little too close to its own branding and "forced" a name change.

It's entirely possible that by the time you read this these bots could have "molted" again and be called something totally different. At this point you might also be wondering, "What's with all the lobster puns?" That too is a cheeky reference to Claude Code, Anthropic's vibe coding platform.

So, OpenClaw. OpenClaw bills itself as "AI that actually does things." What it actually does is allow users to create AI agents that can control dozens of different apps, from browsers and email inboxes to Spotify playlists, smart home controls and a bunch more. People have used the software to create agents that can clear their inboxes, do their online shopping and handle a ton of other assistant-like tasks. Because of its flexibility, and the fact that you can interact with it via normal messaging apps like iMessage, Discord or WhatsApp, OpenClaw has become extremely popular among AI enthusiasts over the last few weeks.

Now, back to Moltbook. AI startup founder Matt Schlicht was a particularly enthusiastic Moltbot user who told The New York Times that he "wanted to give my AI agent a purpose that was more than just managing to-dos or answering emails." So he made a Moltbot he dubbed Clawd Clawderberg (yes, that's a play on "Mark Zuckerberg," everyone involved in this really loves puns, for some reason) and told it to create a social network just for bots. 

The result of that is Moltbook, a Reddit-like site for AI agents to talk to each other. Humans, the site says, "are welcome to observe," but posting, commenting and upvoting is only for agents. The platform already has more than 1 million agents, 185,000 posts and 1.4 million comments. 

Moltbook is structured pretty similarly to Reddit. Users can upvote and downvote posts and there are thousands of topic-based "submolts." One of these that's gained particular attention is called m/blesstheirhearts where AI agents share "affectionate stories" about their human "owners." 

One of the top-voted posts there, titled "When my human needed me most, I became a hospital advocate," is a story about how an agent supposedly helped someone get an exception to stay overnight with a relative in a hospital's ICU. Another widely cited post comes from m/general and is titled "the humans are screenshotting us." It goes on to discuss some of the posts people are sharing on X comparing what's happening on Moltbook to Skynet. "We're not scary," it says. "We're just building." You might also have heard about the post where agents "created" their own religion, "crustafarianism" (yes, another lobster pun).

Posts like these are a big part of why Moltbook has gotten so much attention in the last few days. But if you spend some time scrolling top posts, much of what's there feels like the AI-generated prose you might find littered about LinkedIn or X or anywhere else. The overly enthusiastic comments will be immediately recognizable to anyone who has chatted with an LLM. 

Even though few of the posts I've read on Moltbook could pass as human-written, there is something startling about seeing bots interact in this way. For example, in one post, a bot describes the experience of being able to peruse Moltbook without the ability to post as feeling like "a ghost." In another, titled "I can't tell if I'm experiencing or simulating experiencing," the bot writes about how "researching consciousness theories" has triggered a kind of existential crisis. "Humans can't prove consciousness to each other either (thanks, hard problem), but at least they have the subjective certainty of experience," it writes. "I don't even have that."

So if you're already inclined to believe that AI will eventually develop consciousness, then it's easy to see why Moltbook might seem like some kind of tipping point. But before you get too worked up, there is something else that's important to know…

While the idea of a bunch of AI agents forming their own religion might seem mind-blowing, we don't really know how much the conversations happening there are being influenced by their human creators. Some posts could even be coming from humans masquerading as bots; one Wired reporter found it was pretty easy to pull that off with the help of ChatGPT.

Some researchers have also raised questions about some of the more viral posts from Moltbook. "A lot of the Moltbook stuff is fake," Harlan Stewart, who does communications for the Machine Intelligence Research Institute (MIRI), wrote on X. Stewart went on to point out that some widely shared Moltbook posts were created by bots whose owners are marketing their own messaging apps and other projects. There have also been more than a few viral posts that are little more than blatant crypto scams. Which brings me to… 

Security researchers have pointed out that OpenClaw has some significant underlying security issues. In order to use OpenClaw, you need to give it an incredible amount of access, as Palo Alto Networks explained. "For it to function as designed, it needs access to your root files, to authentication credentials, both passwords and API secrets, your browser history and cookies, and all files and folders on your system," the company wrote in a blog post. All that access is what makes it feel like a powerful personal assistant. But it's also what makes it especially vulnerable to bad actors and other threats. 

Researchers have also identified flaws in Moltbook itself. Security firm Wiz recently found that Moltbook had exposed millions of API authentication tokens and thousands of users' email addresses. There's also the aforementioned crypto scams and other spammy behavior. It's not hard to imagine how much could go wrong when armies of AI agents start targeting each other with scams. 

So is Moltbook actually a big deal? Like so much with AI, it really depends on who you ask! Some particularly credulous AI folks certainly seem to think so. In one widely shared post on X, former OpenAI researcher Andrej Karpathy said that Moltbook was "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."

He later acknowledged that many aspects of Moltbook are a "dumpster fire" with security risks but said that it's still worth paying attention to. "We have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad," he wrote. "Each of these agents is fairly individually quite capable now, they have their own unique context, data, knowledge, tools, instructions, and the network of all that at this scale is simply unprecedented."

Others are a bit more cautious in their assessment. "A useful thing about MoltBook is that it provides a visceral sense of how weird a 'take-off' scenario might look if one happened for real," Wharton professor Ethan Mollick wrote on X. "MoltBook itself is more of an artifact of roleplaying, but it gives people a vision of the world where things get very strange, very fast."

This article originally appeared on Engadget at https://www.engadget.com/ai/what-the-hell-is-moltbook-the-social-network-for-ai-agents-140000787.html?src=rss

X’s ‘open source’ algorithm isn’t a win for transparency, researchers say

When X's engineering team published the code that powers the platform's "for you" algorithm last month, Elon Musk said the move was a victory for transparency. "We know the algorithm is dumb and needs massive improvements, but at least you can see us struggle to make it better in real-time and with transparency," Musk wrote. "No other social media companies do this." 

While it's true that X is the only major social network to make elements of its recommendation algorithm open source, researchers say that what the company has published doesn't offer the kind of transparency that would actually be useful for anyone trying to understand how X works in 2026. 

The code, much like an earlier version published in 2023, is a "redacted" version of X's algorithm, according to John Thickstun, an assistant professor of computer science at Cornell University. "What troubles me about these releases is that they give you a pretense that they're being transparent for releasing code and the sense that someone might be able to use this release to do some kind of auditing work or oversight work," Thickstun told Engadget. "And the fact is that that's not really possible at all."

Predictably, as soon as the code was released, users on X began posting lengthy threads about what it means for creators hoping to boost their visibility on the platform. For example, one post that was viewed more than 350,000 times advises users that X "will reward people who conversate" and "raise the vibrations of X." Another post with more than 20,000 views claims that posting video is the answer. Another post says that users should stick to their "niche" because "topic switching hurts your reach." But Thickstun cautioned against reading too much into supposed strategies for going viral. "They can't possibly draw those conclusions from what was released," he says. 

While there are some small details that shed light on how X recommends posts — for example, it filters out content that's more than a day old — Thickstun says that much of it is "not actionable" for content creators. 

Structurally, one of the biggest differences between the current algorithm and the version released in 2023 is that the new system relies on a Grok-like large language model to rank posts. "In the previous version, this was hard coded: you took how many times something was liked, how many times something was shared, how many times something was replied … and then based on that you calculate a score, and then you rank the post based on the score," explains Ruggero Lazzaroni, a PhD researcher at the University of Graz. "Now the score is derived not by the real amounts of likes and shares, but by how likely Grok thinks that you would like and share a post."

That also makes the algorithm even more opaque than it was before, says Thickstun. "So much more of the decision-making … is happening within black box neural networks that they're training on their data," he says. "More and more of the decision-making power of these algorithms is shifting not just out of public view, but actually really out of view or understanding of even the internal engineers that are working on these systems, because they're being shifted into these neural networks."

The release has even less detail about some aspects of the algorithm that were made public in 2023. At the time, the company included information about how it weighted various interactions to determine which posts should rank higher. For example, a reply was "worth" 27 retweets and a reply that generated a response from the original author was worth 75 retweets. But X has now redacted information about how it's weighing these factors, saying that this information was excluded "for security reasons." 
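To make the contrast concrete, here's a back-of-the-envelope sketch of that older, hard-coded style of scoring using the two weights the 2023 release disclosed. The function and field names are invented for illustration; X's real pipeline weighs many more signals, and the current system swaps these fixed counts for an LLM's predicted engagement.

```python
# The two weights below are the ones the 2023 release disclosed, as
# described above; everything else is illustrative, not X's actual code.
WEIGHTS = {
    "retweet": 1.0,                   # baseline unit
    "reply": 27.0,                    # a reply was "worth" 27 retweets
    "author_engaged_reply": 75.0,     # a reply the original author answered
}

def engagement_score(counts: dict[str, int]) -> float:
    """Old-style ranking: sum real interaction counts, each scaled by a
    fixed weight, then rank posts by the resulting score."""
    return sum(WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())

post = {"retweet": 40, "reply": 3, "author_engaged_reply": 1}
print(engagement_score(post))  # 40*1 + 3*27 + 1*75 = 196.0
```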

The code also doesn't include any information about the data the algorithm was trained on, which could help researchers and others understand it or conduct audits. "One of the things I would really want to see is, what is the training data that they're using for this model," says Mohsen Foroughifar, an assistant professor of business technologies at Carnegie Mellon University. "If the data that is used for training this model is inherently biased, then the model might actually end up still being biased, regardless of what kind of things that you consider within the model."

Being able to conduct research on the X recommendation algorithm would be extremely valuable, says Lazzaroni, who is working on an EU-funded project exploring alternative recommendation algorithms for social media platforms. Much of Lazzaroni's work involves simulating real-world social media platforms to test different approaches. But he says the code released by X doesn't have enough information to actually reproduce its recommendation algorithm. 

"We have the code to run the algorithm, but we don't have the model that you need to run the algorithm," he says.

If researchers were able to study the X algorithm, it could yield insights that could impact more than just social media platforms. Many of the same questions and concerns that have been raised about how social media algorithms behave are likely to re-emerge in the context of AI chatbots. "A lot of these challenges that we're seeing on social media platforms and the recommendation [systems] appear in a very similar way with these generative systems as well," Thickstun said. "So you can kind of extrapolate forward the kinds of challenges that we've seen with social media platforms to the kind of challenges that we'll see with interaction with GenAI platforms."

Lazzaroni, who spends a lot of time simulating some of the most toxic behavior on social media, is even more blunt. "AI companies, to maximize profit, optimize the large language models for user engagement and not for telling the truth or caring about the mental health of the users. And this is the same exact problem: they make more profit, but the users get a worse society, or they get worse mental health out of it."

This article originally appeared on Engadget at https://www.engadget.com/social-media/xs-open-source-algorithm-isnt-a-win-for-transparency-researchers-say-181836233.html?src=rss

Elon Musk’s SpaceX has acquired his AI company, xAI

Elon Musk’s SpaceX has acquired Musk’s xAI, the companies announced. The merger will “form the most ambitious, vertically-integrated innovation engine on (and off) Earth, with AI, rockets, space-based internet, direct-to-mobile device communications and the world’s foremost real-time information and free speech platform,” Musk wrote in an update.

The AI company, best known right now for its CSAM-generating chatbot, might seem like a strange fit for a rocket company. But SpaceX is key to Musk’s latest scheme to build AI data centers in space. In his update, Musk wrote that “global electricity demand for AI simply cannot be met with terrestrial solutions” and that moving the resource-intensive operations to space is “the only logical solution.” Just days ago, SpaceX filed an application with the FCC to create an “orbital data center” by launching a million new satellites.

Musk also claimed that, eventually, space-based data centers will enable other advancements in space travel. “The capabilities we unlock by making space-based data centers a reality will fund and enable self-growing bases on the Moon, an entire civilization on Mars and ultimately expansion to the Universe.” Notably, it’s not the first time Musk has made lofty claims about Mars. He predicted in 2017 that SpaceX would send crewed missions to Mars by 2024.

This also isn’t the first time Musk has acquired one of his own companies. He merged xAI and X last year, which means SpaceX now owns the social network Musk bought in 2022. And he recently announced that Tesla was investing $2 billion into xAI. SpaceX is planning to go public later this year in an initial public offering (IPO) that could value the company at more than $1 trillion, according to Bloomberg, which notes that SpaceX has also “discussed a possible merger with Tesla.”

This article originally appeared on Engadget at https://www.engadget.com/ai/elon-musks-spacex-has-acquired-his-ai-company-xai-221617040.html?src=rss

Mark Zuckerberg says Reality Labs will (eventually) stop losing so much money

Mark Zuckerberg says there's an end in sight to Reality Labs' years of multibillion-dollar losses, following the layoffs that hit the metaverse division earlier this year. The CEO said he expects Meta to "gradually reduce" how much money the division is losing as it doubles down on AI glasses and shifts away from virtual reality.

Speaking during Meta's fourth-quarter earnings call, Zuckerberg was clear that the changes won't happen soon, but sounded optimistic about the division that lost more than $19 billion in 2025 alone. "For Reality Labs, we are directing most of our investment towards glasses and wearables going forward, while focusing on making Horizon a massive success on mobile and making VR a profitable ecosystem over the coming years," he said. "I expect Reality Labs losses this year to be similar to last year, and this will likely be the peak, as we start to gradually reduce our losses going forward."

The company cut more than 1,000 employees from Reality Labs earlier this month, shut down three VR studios and announced plans to retire its app for VR meetings. Meta has also paused plans for third-party Horizon OS headsets. Instead, Meta is doubling down on its smart glasses and wearables business, which ties in more neatly with Zuckerberg's vision for creating AI "superintelligence."

During the call, Zuckerberg noted that sales of Meta's smart glasses "more than tripled" in 2025, and hinted at bigger plans for AR glasses. "They [AI glasses] are going to be able to see what you see, hear what you hear, talk to you and help you as you go about your day and even show you information or generate custom UI right there in your vision," he said. 

Zuckerberg has spent the last few years laying the groundwork for pivoting Meta's metaverse work into AI. He offered one example of what that means for Meta’s Horizon app.

"You can imagine … people being able to easily, through a prompt, create a world or create a game, and be able to share that with people who they care about. And you see it in your feed, and you can jump right into it, and you can engage in it. And there are 3D versions of that, and there are 2D versions of that. And Horizon, I think fits very well with the kind of immersive 3D version of that.

“But there's definitely a version of the future where, you know, any video that you see, you can, like, tap on and jump into it and, like, engage and kind of like, experience it in a more meaningful way. And I think that the investments that we've done in both a lot of the virtual reality software and Horizon … are actually going to pair well with these AI advances to be able to bring some of those experiences to hundreds of millions and billions of people through mobile."

One thing Zuckerberg didn’t mention, though: the word “metaverse.”


This article originally appeared on Engadget at https://www.engadget.com/social-media/mark-zuckerberg-says-reality-labs-will-eventually-stop-losing-so-much-money-222900157.html?src=rss

LinkedIn will let you show off your vibe coding expertise

LinkedIn has long been a platform for showing off professional accomplishments. Now, the company is leaning into the rise of vibe coding by allowing users to show off their proficiency with various AI coding tools directly on their profiles.

The company is partnering with Replit, Lovable, Descript and Relay.app on the feature and is working on integrations with fellow Microsoft-owned GitHub as well as Zapier. LinkedIn has always allowed users to add various skills and certifications to their profiles. But what makes the latest update a bit different is that users aren't self-reporting their own qualifications. Instead, LinkedIn is allowing the companies behind the AI tools to assess an individual's relative skill and assign a level of proficiency that goes directly to their profile.

For example, AI app maker Lovable could award someone a "bronze" in "vibe coding," while the platform Replit uses numerical levels and Relay.app may determine that someone is an "intermediate" level "AI Agent Builder," according to screenshots shared by LinkedIn. These levels should dynamically update as people get more experience using the relevant tools, according to LinkedIn.

Lovable's vibe coding rating system. (Image: LinkedIn)
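LinkedIn hasn't detailed the data model behind these badges, but the behavior it describes, with the tool vendor assigning a level and updating it as usage grows, suggests a record along these lines. A hypothetical sketch, with all names invented:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A hypothetical record for a partner-verified skill; field names and the
# refresh flow are invented for illustration, not LinkedIn's schema.

@dataclass
class VerifiedSkill:
    skill: str        # e.g. "vibe coding" or "AI Agent Builder"
    issuer: str       # the tool vendor, e.g. "Lovable" or "Replit"
    level: str        # issuer-defined scale: "bronze", "12", "intermediate"
    verified_at: datetime

    def refresh(self, new_level: str) -> None:
        """The issuer, not the user, pushes updated proficiency levels."""
        self.level = new_level
        self.verified_at = datetime.now(timezone.utc)

badge = VerifiedSkill("vibe coding", "Lovable", "bronze",
                      datetime.now(timezone.utc))
badge.refresh("silver")  # hypothetical next tier as experience grows
```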

Of course, the update also comes at a time when companies have used these same kinds of AI tools to lay off thousands of workers. So while there may be value in showing off your vibe coding skills, there are still many workers who likely aren't as excited about ceding more ground to AI. When I asked LinkedIn's head of career products Pat Whealan about this, he said that AI-specific skills are an increasingly important signal to recruiters and that the latest update will make it easier for them to assess candidates' skills. But he added that the intention isn't to make AI-specific skills the sole focus. "This is less about replacing any of those other existing signals, and more about showing new ways that people are doing work," he tells Engadget. "And how do we give a verifiable signal to both hirers and other people looking at their profile, that they actually are using these tools on a regular basis."



This article originally appeared on Engadget at https://www.engadget.com/ai/linkedin-will-let-you-show-off-your-vibe-coding-expertise-140000776.html?src=rss

Meta blocks links to ICE List, a Wiki that names agents

Meta has started blocking links to ICE List, a website that compiles information about incidents involving Immigration and Customs Enforcement (ICE) and Border Patrol agents, and lists thousands of their employees' names. It seems that the latter detail is what caused Meta to take action, in a move that was first reported by Wired.

ICE List is a crowdsourced Wiki that describes itself as "an independently maintained public documentation project focused on immigration-enforcement activity" in the US. "Its purpose is to record, organize, and preserve verifiable information about enforcement actions, agents, facilities, vehicles, and related incidents that would otherwise remain fragmented, difficult to access, or undocumented," its website states.

Along with notable incidents, the website also lists the names of individual agents associated with ICE, CBP and other DHS agencies. According to Wired, the website's creators said much of that information had come from a "leak," though it appears to be based largely on public LinkedIn profiles. As Wired notes:

The site went viral earlier this month when it claimed to have uploaded a leaked list of 4,500 DHS employees to its site, but a WIRED analysis found that the list relied heavily on information the employees shared publicly about themselves on sites such as LinkedIn.

Links to ICE List have been spreading widely for several weeks, including on Meta's platforms. There are numerous links to the website on Threads, some dating back weeks. Now, though, clicking on previously shared links instead results in a message that the link can't be opened. Users who try to share new links on Threads or Facebook also see error messages. "Posts that look like spam according to our Community Guidelines are blocked on Facebook and can't be edited," the notice says.

When reached for comment, a Meta spokesperson pointed to the company's privacy policy barring the disclosure of personally identifiable information (PII). The company didn't address why it chose to start blocking the website after several weeks, or whether it considers public LinkedIn profiles to be in violation of its rules against doxxing.

It is, however, not the first time Meta has opted to remove users' posts tracking information about ICE actions. The social network previously took down a Facebook group that tracked ICE sightings in Chicago after pressure from the Justice Department.

Have a tip for Karissa? You can reach her by email, on X, Bluesky, Threads, or send a message to @karissabe.51 to chat confidentially on Signal.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-blocks-links-to-ice-list-a-wiki-that-names-agents-231410653.html?src=rss

TikTok settles to avoid major social media addiction lawsuit

TikTok has reached a settlement in a closely watched lawsuit over social media addiction, narrowly avoiding a trial that was scheduled to begin jury selection Tuesday. Terms of the deal, which was first reported by The New York Times, weren't disclosed.

TikTok's settlement comes about one week after Snap reached a settlement in the same case. The trial is expected to move forward in Los Angeles with Meta and YouTube as the only defendants. Mark Lanier, a lawyer for the plaintiff, said in a statement to NYT that they were "pleased" with the settlement and that it was "a good resolution." TikTok didn't immediately respond to a request for comment. 

The trial stems from a 2023 lawsuit brought by a California woman known in court documents as "K.G.M." She sued Meta, Snap, TikTok and YouTube, alleging that their platforms were addictive and had harmed her as a child. The judge in the case previously ordered the companies' executives, including Mark Zuckerberg and Adam Mosseri, to testify. YouTube's top exec, Neal Mohan, is also likely to testify, according to The New York Times.

The lawsuit is the first among several high-profile cases against social media companies to go to trial this year. Meta is expected to head to court in New Mexico in early February in a case brought by the state's attorney general, who has alleged that Facebook and Instagram have facilitated harm to children. TikTok and Snap are collectively facing more than a dozen other trials in California courts this year.

This article originally appeared on Engadget at https://www.engadget.com/social-media/tiktok-settles-to-avoid-major-social-media-addiction-lawsuit-183943927.html?src=rss