Nearly one-third of teens use AI chatbots daily

AI chatbots haven't come close to replacing social media for teens, but they are playing a significant role in their online habits. Nearly one-third of US teens report using AI chatbots daily or more, according to a new report from Pew Research Center.

The report is the first from Pew to specifically examine how often teens are using AI overall, and was published alongside its latest research on teens' social media use. It's based on an online survey of 1,458 US teens polled between September 25 and October 9, 2025. According to Pew, the survey was "weighted to be representative of U.S. teens ages 13 to 17 who live with their parents by age, gender, race and ethnicity, household income, and other categories."

According to Pew, 48 percent of teens use AI chatbots "several times a week" or more often, with 12 percent reporting their use at "several times a day" and 4 percent saying they use the tools "almost constantly." That's far lower than the 21 percent of teens who report almost constant use of TikTok and the 17 percent who say the same about YouTube. But those numbers are still significant considering how much newer these services are compared with mainstream social media apps.

The report also offers some insight into which AI companies' chatbots are most used among teens. OpenAI's ChatGPT came out ahead by far, with 59 percent of teens saying they had used the service, followed by Google's Gemini at 23 percent and Meta AI at 20 percent. Just 14 percent of teens said they had ever used Microsoft Copilot, and 9 percent and 3 percent reported using Character AI and Anthropic's Claude, respectively.

The survey is Pew's first to study AI chatbot use among teens broadly. (Pew Research)

Pew's research comes amid growing scrutiny of AI companies' handling of younger users. Both OpenAI and Character AI are currently facing wrongful death lawsuits from the parents of teens who died by suicide. In both cases, the parents allege that their child's interactions with a chatbot played a role in their death. (Character AI briefly banned teens from its service before introducing a more limited format for younger users.) Other companies, including Alphabet and Meta, are being probed by the FTC over their safety policies for younger users.

Interestingly, the report also indicates there has been little change in US teens' social media use. Pew, which has regularly polled teens about how they use social media, notes that teens' daily use of these platforms "remains relatively stable" compared with recent years. YouTube is still the most widely used platform, reaching 92 percent of teens, followed by TikTok at 69 percent, Instagram at 63 percent and Snapchat at 55 percent. Of the major apps the report surveyed, WhatsApp is the only service to see significant change in recent years, with 24 percent of teens now reporting they use the messaging app, compared with 17 percent in 2022.

This article originally appeared on Engadget at https://www.engadget.com/ai/nearly-one-third-of-teens-use-ai-chatbots-daily-200000888.html?src=rss

Russia blocks Roblox, citing ‘LGBT propaganda’ as a reason

Russia has blocked the popular gaming platform Roblox, according to a report by Reuters. The country's communications watchdog Roskomnadzor accused the developers of distributing extremist materials and "LGBT propaganda." The agency went on to say that Roblox is "rife with inappropriate content that can negatively impact the spiritual and moral development of children."

This is just the latest move the country has taken against what it calls the "international LGBT movement." It recently pressured the language-learning app Duolingo into deleting references to what the country calls "non-traditional sexual relations."

Russian courts regularly issue fines to organizations that violate its "LGBT propaganda" law, which criminalizes the promotion of same-sex relationships. President Vladimir Putin has called the protection of gay and transgender rights a move "towards open satanism."

Roblox doesn't have an "LGBT propaganda" problem because there's no such thing, but the platform does have plenty of issues that Russia doesn't seem all that concerned about. It's a noted haven for child predators, which has led other countries, like Iraq and Turkey, to ban the platform. To its credit, the company has begun cracking down on user-generated content and added new age-based restrictions.

Roblox is still one of the more popular entertainment platforms in the world. It averaged over 151 million daily active users in the third quarter of this year alone.

This article originally appeared on Engadget at https://www.engadget.com/gaming/russia-blocks-roblox-citing-lgbt-propaganda-as-a-reason-180757267.html?src=rss

Denmark set to ban social media for users under 15 years of age

The government of Denmark said on Friday that lawmakers from its political right, left and center have reached an agreement to ban social media for anyone under 15, as reported by The Associated Press. If enacted, the move would be one of the most ambitious attempts globally to keep children off social media. Momentum has been building in recent years around concerns that social media is harming its younger users.

The country’s Digitalization Ministry would set the minimum age at 15 for certain social media platforms but has not clarified which ones would be affected. The government also did not share specifics on how enforcement would work.

A statement from the Digitalization Ministry reads, in part, “Children and young people have their sleep disrupted, lose their peace and concentration, and experience increasing pressure from digital relationships where adults are not always present,” as reported by The Associated Press. Digitalization Minister Caroline Stage said Danish authorities are “finally drawing a line in the sand and setting a clear direction.”

In December, the world’s first country-wide social media ban for children will go into effect in Australia, banning children under 16 from major social media platforms. Platforms that want to operate in the country must employ age-verification technology and would face fines if they fail to enforce the nation’s age limits.

Some age-verification methods, particularly facial recognition and ID checks, have faced heavy skepticism as they have been implemented around the world. In the UK and Italy, anyone wanting to watch porn online must now upload a selfie or provide ID to verify they are above age limits. If the same methods are employed to verify teenagers' ages, questions will undoubtedly arise about the safety and privacy of minors' data.

Texas recently came close to enacting a similar ban, though it ultimately didn't pass. Utah passed laws in 2023 that require parental consent before teens can create social media accounts. Florida passed a social media ban for children that is currently held up in court.

This move will undoubtedly spark more conversation around the potential harms of social media for adolescents, as well as whether access to social media should be treated as a personal parenting decision that remains free from government intervention.

This article originally appeared on Engadget at https://www.engadget.com/social-media/denmark-set-to-ban-social-media-for-users-under-15-years-of-age-171602408.html?src=rss

Women of color running for Congress are attacked disproportionately on X, report finds

Women of color running for Congress in 2024 have faced a disproportionate number of attacks on X compared with other candidates, according to a new report from the nonprofit Center for Democracy and Technology (CDT) and the University of Pittsburgh.

The report sought to “compare the levels of offensive speech and hate speech that different groups of Congressional candidates are targeted with based on race and gender, with a particular emphasis on women of color.” To do this, the report’s authors analyzed 800,000 tweets that covered a three-month period between May 20 and August 23 of this year. That dataset represented all posts mentioning a candidate running for Congress with an account on X.

The report’s authors found that more than 20 percent of posts directed at Black and Asian women candidates “contained offensive language about the candidate.” It also found that Black women in particular were targeted with hate speech more often than other candidates.

“On average, less than 1% of all tweets that mentioned a candidate contained hate speech,” the report says. “However, we found that African-American women candidates were more likely than any other candidate to be subject to this type of post (4%).” That roughly lines up with X’s recent transparency report — the company’s first since Elon Musk took over the company — which said that rule-breaking content accounts for less than 1 percent of all posts on its platform.

In a statement, an X spokesperson said the company had suspended more than 1 million accounts and removed more than 2 million posts in the first half of 2024 for breaking the company's rules. "While we encourage people to express themselves freely on X, abuse, harassment, and hateful conduct have no place on our platform and violate the X Rules," the spokesperson said. 

Notably, the CDT’s report analyzed both hate speech — which ostensibly violates X’s policies — and “offensive speech,” which the report defined as “words or phrases that demean, threaten, insult, or ridicule a candidate.” While the latter category may not be against X’s rules, the report notes that the volume of such attacks could still deter women of color from running for office. It recommends that X and other platforms take “specific measures” to counteract such effects.

“This should include clear policies that prohibit attacks against someone based on race or gender, greater transparency into how their systems address these types of attacks, better reporting tools and means for accountability, regular risk assessments with an emphasis on race and gender, and privacy preserving mechanisms for independent researchers to conduct studies using their data. The consequences of the status-quo where women of color candidates are targeted with significant attacks online at much higher rates than other candidates creates an immense barrier to creating a truly inclusive democracy.”

Update: October 2, 2024, 12:13 PM ET: This post was updated to include a statement from an X spokesperson. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/women-of-color-running-for-congress-are-attacked-disproportionately-on-x-report-finds-043206066.html?src=rss

Meta needs updated rules for sexually explicit deepfakes, Oversight Board says

Meta’s Oversight Board is urging the company to update its rules around sexually explicit deepfakes. The board made the recommendations as part of its decision in two cases involving AI-generated images of public figures.

The cases stem from two user appeals over AI-generated images of public figures, though the board declined to name the individuals. One post, which originated on Instagram, depicted a nude Indian woman. The post was reported to Meta but the report was automatically closed after 48 hours, as was a subsequent user appeal. The company eventually removed the post after attention from the Oversight Board, which nonetheless overturned Meta’s original decision to leave the image up.

The second post, which was shared to a Facebook group dedicated to AI art, showed “an AI-generated image of a nude woman with a man groping her breast.” Meta automatically removed the post because it had been added to an internal system that can identify images that have been previously reported to the company. The Oversight Board found that Meta was correct to have taken the post down.

In both cases, the Oversight Board said the AI deepfakes violated the company’s rules barring “derogatory sexualized photoshop” images. But in its recommendations to Meta, the Oversight Board said the current language used in these rules is outdated and may make it more difficult for users to report AI-made explicit images.

Instead, the board says Meta should update its policies to make clear that it prohibits non-consensual explicit images that are AI-made or manipulated. “Much of the non-consensual sexualized imagery spread online today is created with generative AI models that either automatically edit existing images or create entirely new ones,” the board writes. “Meta should ensure that its prohibition on derogatory sexualized content covers this broader array of editing techniques, in a way that is clear to both users and the company’s moderators.”

The board also called out Meta’s practice of automatically closing user appeals, which it said could have “significant human rights impacts” on users. However, the board said it didn’t have “sufficient information” about the practice to make a recommendation.

The spread of explicit AI images has become an increasingly prominent issue as “deepfake porn” has become a more widespread form of online harassment in recent years. The board’s decision comes one day after the US Senate unanimously passed a bill cracking down on explicit deepfakes. If passed into law, the measure would allow victims to sue the creators of such images for as much as $250,000.

The cases aren’t the first time the Oversight Board has pushed Meta to update its rules for AI-generated content. In another high-profile case, the board investigated a maliciously edited video of President Joe Biden. The case ultimately resulted in Meta revamping its policies around how AI-generated content is labeled.

This article originally appeared on Engadget at https://www.engadget.com/meta-needs-updated-rules-for-sexually-explicit-deepfakes-oversight-board-says-100005969.html?src=rss

Snap will pay $15 million to settle California lawsuit alleging sexual discrimination

The California Civil Rights Department has revealed that Snap Inc. has agreed to pay $15 million to settle the lawsuit it filed "over alleged discrimination, harassment, and retaliation against women at the company." California's civil rights agency started investigating the company behind Snapchat over three years ago due to claims that it discriminated and retaliated against female employees. The agency accused the company of failing to make sure that female employees were paid equally despite a period of rapid growth between 2015 and 2022.

Women, especially those in engineering roles, were allegedly discouraged from applying for promotions and lost them to less qualified male colleagues when they did. The agency said that they also had to endure unwelcome sexual advances and faced retaliation when they spoke up. Female employees were given negative performance reviews, were denied opportunities and, ultimately, were terminated.

"In California, we’re proud of the work of our state’s innovators who are a driving force of our nation’s economy," CRD Director Kevin Kish said in a statement. "We're also proud of the strength of our state’s civil rights laws, which help ensure every worker is protected against discrimination and has an opportunity to thrive. This settlement with Snapchat demonstrates a shared commitment to a California where all workers have a fair chance at the American Dream. Women are entitled to equality in every job, in every workplace, and in every industry."

Snap denies that it has an issue with pay inequality and sexual discrimination. In a statement sent to Politico and Bloomberg, the company says it only decided to settle due to the costs and impact of lengthy litigation. "We care deeply about our commitment to maintain a fair and inclusive environment at Snap, and do not believe we have any ongoing systemic pay equity, discrimination, harassment, or retaliation issues against women. While we disagreed with the California Civil Rights Department's claims and analyses, we took into consideration the cost and impact of lengthy litigation, and the scope of the CRD’s other settlements, and decided it is in the best interest of the company to resolve these claims and focus on the future," the company explains.

Under the settlement terms, which still have to be approved by a judge, $14.5 million of the total amount will go towards women who worked as employees at Snap Inc. in California between 2014 and 2024. The company will also be required to have a third-party monitor audit its sexual harassment, retaliation and discrimination compliance.

California's Civil Rights Department was the same agency that sued Activision Blizzard in 2021 and accused the company of fostering a "frat boy" culture that encouraged rampant misogyny and sexual harassment. The agency also found that women in the company were overlooked for promotions and were paid less than their male colleagues. It settled with the video game developer in late 2023 for $54 million, though it had to withdraw its claims that there was widespread sexual harassment at the company. 

This article originally appeared on Engadget at https://www.engadget.com/snap-will-pay-15-million-to-settle-california-lawsuit-alleging-sexual-discrimination-120019788.html?src=rss

X now treats the term cisgender as a slur

The increasingly discriminatory X (Twitter) now considers the term “cisgender” a slur. Owner Elon Musk posted last June, to the delight of his bigoted brigade of blue-check sycophants, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X made good on the regressive provocateur’s stance and reportedly began posting an official warning that the LGBTQ-inclusive terms could result in a ban from the platform. Not that you’d miss much.

TechCrunch reported on Tuesday that trying to publish a post using the terms “cisgender” or “cis” in the X mobile app will pop up a full-screen warning reading, “This post contains language that may be considered a slur by X and could be used in a harmful manner in violation of our rules.” It then gives you the choice of continuing to publish the post or conforming to the backward views of the worst of us and deleting it.

Of course, neither form of the term cisgender is a slur.

As the historically marginalized transgender community finally began finding at least a sliver of widespread and long overdue social acceptance in the 21st century, the term became more commonly used in the mainstream lexicon to describe people whose gender identity matches their sex at birth. Organizations including the American Psychological Association, World Health Organization, American Medical Association and American Psychiatric Association recognize the term.

But some people have a hard time accepting and respecting that some humans are different from others. Those fantasizing (against all evidence and scientific consensus) that the heteronormative ideals they grew up with are absolute gospel sometimes take great offense at being asked to adjust their vocabulary to communicate respect for a community that has spent centuries forced to live in the shadows or risk their safety due to the widespread pathologization of their identities. 

Musk seems to consider those the good ol’ days.

This isn’t the billionaire’s first ride on the Transphobe Train. After his backward tweet last June (on the first day of Pride Month, no less), the edgelord’s platform ran a timeline takeover ad from a right-wing nonprofit, plugging a transphobic propaganda film. In case you’re wondering if the group may have anything of value to say, TechCrunch notes that the same organization also doubts climate change and downplays the dehumanizing atrocities of slavery.

X also reversed course on a policy, implemented long before Musk’s takeover, that banned the deadnaming or misgendering of transgender people.

This article originally appeared on Engadget at https://www.engadget.com/x-now-treats-the-term-cisgender-as-a-slur-211117779.html?src=rss

Meta’s Oversight Board will rule on AI-generated sexual images

Meta’s Oversight Board is once again taking on the social network’s rules for AI-generated content. The board has accepted two cases that deal with AI-made explicit images of public figures.

While Meta’s rules already prohibit nudity on Facebook and Instagram, the board said in a statement that it wants to address whether “Meta’s policies and its enforcement practices are effective at addressing explicit AI-generated imagery.” Sometimes referred to as “deepfake porn,” AI-generated images of female celebrities, politicians and other public figures have become an increasingly prominent form of online harassment and have drawn a wave of proposed regulation. With the two cases, the Oversight Board could push Meta to adopt new rules to address such harassment on its platform.

The Oversight Board said it’s not naming the two public figures at the center of each case in an effort to avoid further harassment, though it described the circumstances around each post.

One case involves an Instagram post showing an AI-generated image of a nude Indian woman that was posted by an account that “only shares AI-generated images of Indian women.” The post was reported to Meta but the report was closed after 48 hours because it wasn’t reviewed. The same user appealed that decision but the appeal was also closed and never reviewed. Meta eventually removed the post after the user appealed to the Oversight Board and the board agreed to take the case.

The second case involved a Facebook post in a group dedicated to AI art. The post in question showed “an AI-generated image of a nude woman with a man groping her breast.” The woman was meant to resemble “an American public figure” whose name was also in the caption of the post. The post was taken down automatically because it had been previously reported and Meta’s internal systems were able to match it to the prior post. The user appealed the decision to take it down but the appeal was “automatically closed.” The user then appealed to the Oversight Board, which agreed to consider the case.

In a statement, Oversight Board co-chair Helle Thorning-Schmidt said that the board took up the two cases from different countries in order to assess potential disparities in how Meta’s policies are enforced. “We know that Meta is quicker and more effective at moderating content in some markets and languages than others,” Thorning-Schmidt said. “By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way.”

The Oversight Board is asking for public comment for the next two weeks and will publish its decision sometime in the next few weeks, along with policy recommendations for Meta. A similar process involving a misleadingly edited video of Joe Biden recently resulted in Meta agreeing to label more AI-generated content on its platform.

This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-will-rule-on-ai-generated-sexual-images-100047138.html?src=rss
