X is working on NSFW Communities for adult content

X is working on features that will allow admins of “Communities,” the platform’s tool for subreddit-like groups, to designate the spaces as containing “adult content.” The change was confirmed by an engineer at X amid reports that the Elon Musk-owned company was working on enabling NSFW groups.

In a post on X, engineer Dong Wook Chung noted that "soon" NSFW content would be automatically filtered in the app's Communities feature. "Admins can now set 'Adult content' in Settings to avoid auto-filtering of the content," Chung said.

As Bloomberg reported, researchers had previously spotted clues that X planned to enable settings for “adult-sensitive” content. X permits users to share nudity and other “graphic” content, but doesn’t allow it to appear in certain parts of the app, like profile photos and cover images for Communities.

X’s Communities feature predates Musk’s takeover of the company. Twitter began experimenting with the idea in 2021, saying it would provide “a more intimate space for conversations” on the platform. Though Twitter never publicly discussed enabling NSFW features for Communities, the app allowed adult content, unlike most of its social media peers. The company reportedly looked into creating an OnlyFans competitor with its creator subscription product in 2022. The plan was eventually scrapped, according to the Platformer newsletter, due to concerns it would “worsen” the company’s problems with illegal child exploitation content.

It's not clear if X's current leadership has addressed those concerns. In a separate post, Chung, the X engineer, stated that the new filtering setting "is about making Communities safer for everyone by automatically filtering out" adult content. "Only users who have specified their age will be able to search Communities with NSFW content."

X didn’t immediately respond to a request for comment.

This article originally appeared on Engadget at https://www.engadget.com/x-is-working-on-nsfw-communities-for-adult-content-184629839.html?src=rss

Snapchat’s latest paid perk is an AI Bitmoji of your pet

Snapchat has a new AI-powered perk for subscribers: Bitmoji versions of your pet. The feature, which is unfortunately not called “petmoji,” allows users to snap a photo of their four-legged friend to create a cartoon-like avatar to accompany their Bitmoji in the Snap Map.

Based on screenshots shared by the company, it seems users will be able to choose from a few different variations of the AI-generated images after sharing a photo of their pet. That's considerably less customization than what you can do with your own human-inspired Bitmoji, though it should allow users to create something that looks similar to their IRL pet. (No word on whether Snap could one day introduce branded pet accessories for animal avatars like it does for human Bitmoji.)

The addition is also the latest example of how Snap has embraced AI features in its subscription offering. Since debuting Snapchat+ in 2022, the company has used the premium service to experiment with generative AI features, including its MyAI assistant as well as camera-powered features like Dreams and AI-generated snaps. Snapchat+ has more than 7 million subscribers, the company announced in December.

Elsewhere, Snap added some updates for non-subscribers, too. The app is adding a new template feature to make it easier to edit clips, and new swipe-based gestures to send and edit snaps more quickly. Snapchat will also support longer video uploads for Stories and Spotlight. In-app captures can now be three minutes long, while the app will support uploads of up to five minutes.

This article originally appeared on Engadget at https://www.engadget.com/snapchats-latest-paid-perk-is-an-ai-bitmoji-of-your-pet-235027028.html?src=rss

More YouTube creators are now making money from Shorts, the company’s TikTok competitor

YouTube’s TikTok competitor, Shorts, is becoming a more significant part of the company’s monetization program. The company announced that more than a quarter of channels in its Partner Program are now earning money from the short-form videos.

The milestone comes a little more than a year after YouTube began sharing ad revenue with creators making Shorts. YouTube says it currently has more than 3 million creators around the world in the Partner Program, which would imply the number of Shorts creators making money from the platform is somewhere in the hundreds of thousands.

Because ads on Shorts appear between clips in a feed, revenue sharing for Shorts is structured differently than for longer-form content on YouTube. Ad revenue is pooled and divided among eligible creators based on factors like views and music licensing. The company has said this arrangement is far more lucrative for individuals than traditional creator funds.

So far though, it’s unclear just how much creators are making from Shorts compared with the platform’s other monetization programs. YouTube declined to share details but said the company has paid out $70 billion to creators over the last three years.

Shorts’ momentum could grow even more in the coming months. TikTok, which itself has been trying to compete more directly with YouTube by encouraging longer videos, is facing a nonzero chance that its app could be banned in the United States. Though that outcome is far from certain, YouTube would almost certainly attract former TikTok users and creators.

This article originally appeared on Engadget at https://www.engadget.com/more-youtube-creators-are-now-making-money-from-shorts-the-companys-tiktok-competitor-130017537.html?src=rss

Anti-trans hate is ‘widespread’ on Facebook, Instagram and Threads, report warns

Meta is failing to enforce its own rules against anti-trans hate speech on its platform, a new report from GLAAD warns. The LGBTQ advocacy group found that “extreme anti-trans hate content remains widespread across Instagram, Facebook, and Threads.”

The report documents dozens of examples of hate speech from Meta’s apps, which GLAAD says were reported to the company between June 2023 and March 2024. But though the posts appeared to be clear violations of the company’s policies, “Meta either replied that posts were not violative or simply did not take action on them,” GLAAD says.

The reported content included posts with anti-trans slurs, violent and dehumanizing language and promotions for conversion therapy, all of which are barred under Meta’s rules. GLAAD also notes that some of the posts it reported came from influential accounts with large audiences on Facebook and Instagram. GLAAD also shared two examples of posts from Threads, Meta’s newest app where the company has tried to tamp down “political” content and other “potentially sensitive” topics.

“The company’s ongoing failure to enforce their own policies against anti-LGBTQ, and especially anti-trans hate, is simply unacceptable,” GLAAD’s CEO and President Sarah Kate Ellis said in a statement.

Meta didn’t immediately respond to a request for comment. But GLAAD’s report isn’t the first time the company has faced criticism for its handling of content targeting the LGBTQ community. Last year the Oversight Board urged Meta to “improve the accuracy of its enforcement on hate speech towards the LGBTQIA+ community.”

This article originally appeared on Engadget at https://www.engadget.com/anti-trans-hate-is-widespread-on-facebook-instagram-and-threads-report-warns-215538151.html?src=rss

The FTC might sue TikTok over its handling of users’ privacy and security

TikTok, already fighting a proposed law that could lead to a ban of the app in the United States, may soon also find itself in the crosshairs of the Federal Trade Commission. The FTC is close to wrapping up a multiyear investigation into the company, which could result in a lawsuit or major fine, Politico reports.

The investigation is reportedly centered around the app’s privacy and security practices, including its handling of children’s user data. According to Politico, the FTC is looking into potential violations of the Children's Online Privacy Protection Act (COPPA), as well as “allegations that the company misled its users by stating falsely that individuals in China do not have access to U.S. user data.” TikTok could also be penalized for violating the terms of its 2019 settlement with regulators over data privacy.

While it's not clear if the FTC's investigation will result in a lawsuit or other action, it is yet another source of pressure for the company as it tries to secure its future in its largest market. After a quick passage in the House, the Senate is considering a bill that would force TikTok's parent company, ByteDance, to sell the app or face an outright ban in the US. The Biden Administration, which has also tried to pressure ByteDance to divest TikTok, is backing the measure, and US intelligence officials have briefed lawmakers on the alleged national security risks posed by the app.

TikTok didn’t immediately respond to a request for comment.

This article originally appeared on Engadget at https://www.engadget.com/the-ftc-might-sue-tiktok-over-its-handling-of-users-privacy-and-security-224911806.html?src=rss

The Oversight Board weighs in on Meta’s most-moderated word

The Oversight Board is urging Meta to change the way it moderates the word "shaheed," an Arabic term that has led to more takedowns than any other word or phrase on the company's platforms. Meta asked the group for help crafting new rules last year after internal attempts to revamp its policy stalled.

The Arabic word “shaheed” is often translated as “martyr,” though the board notes that this isn’t an exact definition and the word can have “multiple meanings.” But Meta’s current rules are based only on the “martyr” definition, which the company says implies praise. This has led to a “blanket ban” on the word when used in conjunction with people designated as “dangerous individuals” by the company.

However, this policy ignores the “linguistic complexity” of the word, which is “often used, even with reference to dangerous individuals, in reporting and neutral commentary, academic discussion, human rights debates and even more passive ways,” the Oversight Board says in its opinion. “There is strong reason to believe the multiple meanings of ‘shaheed’ result in the removal of a substantial amount of material not intended as praise of terrorists or their violent actions.”

In their recommendations to Meta, the Oversight Board says that the company should end its “blanket ban” on the word being used to reference “dangerous individuals,” and that posts should only be removed if there are other clear “signals of violence” or if the content breaks other policies. The board also wants Meta to better explain how it uses automated systems to enforce these rules.

If Meta adopts the Oversight Board’s recommendations, it could have a significant impact on the platform’s Arabic-speaking users. The board notes that the word, because it is so common, likely “accounts for more content removals under the Community Standards than any other single word or phrase,” across the company’s apps.

“Meta has been operating under the assumption that censorship can and will improve safety, but the evidence suggests that censorship can marginalize whole populations while not improving safety at all,” the board’s co-chair (and former Danish prime minister) Helle Thorning-Schmidt said in a statement. “The Board is especially concerned that Meta’s approach impacts journalism and civic discourse because media organizations and commentators might shy away from reporting on designated entities to avoid content removals.”

This is hardly the first time Meta has been criticized for moderation policies that disproportionately impact Arabic-speaking users. A 2022 report commissioned by the company found that Meta’s moderators were less accurate when assessing Palestinian Arabic, resulting in “false strikes” on users’ accounts. The company apologized last year after Instagram’s automated translations began inserting the word “terrorist” into the profiles of some Palestinian users.

The opinion is also yet another example of how long it can take for Meta's Oversight Board to influence the social network's policies. The company first asked the board to weigh in on the rules more than a year ago (the Oversight Board said it "paused" the publication of the policy opinion after the October 7 attacks in Israel to ensure its recommendations "held up" to the "extreme stress" of the conflict in Gaza). Meta will now have two months to respond to the recommendations, though actual changes to the company's policies and practices could take several more weeks or months to implement.

“We want people to be able to use our platforms to share their views, and have a set of policies to help them do so safely,” a Meta spokesperson said in a statement. “We aim to apply these policies fairly but doing so at scale brings global challenges, which is why in February 2023 we sought the Oversight Board's guidance on how we treat the word ‘shaheed’ when referring to designated individuals or organizations. We will review the Board’s feedback and respond within 60 days.”

This article originally appeared on Engadget at https://www.engadget.com/the-oversight-board-weighs-in-on-metas-most-moderated-word-100003625.html?src=rss

Judge dismisses X’s lawsuit against anti-hate group

A judge has dismissed a lawsuit from X against the Center for Countering Digital Hate (CCDH), a nonprofit that researches hate speech on the Elon Musk-owned platform. In the decision, the judge said that the lawsuit was an attempt to “punish” the organization for criticizing the company.

X sued the CCDH last summer, accusing the group of “scraping” its platform as part of a “scare campaign” to hurt its advertising business. The group had published research claiming X was failing to act on reports of hate speech, and was in some cases boosting such content.

In a ruling, federal judge Charles Breyer said that "this case is about punishing" CCDH for publishing unflattering research. "It is clear to the Court that if X Corp. was indeed motivated to spend money in response to CCDH's scraping in 2023, it was not because of the harm such scraping posed to the X platform, but because of the harm it posed to X Corp.'s image," Breyer wrote. "X Corp.'s motivation in bringing this case is evident. X Corp. has brought this case in order to punish CCDH for CCDH publications that criticized X Corp. — and perhaps in order to dissuade others."

X said it planned to appeal the decision.

In a statement, CCDH CEO Imran Ahmed said that the ruling "affirmed our fundamental right to research, to speak, to advocate, and to hold accountable social media companies for decisions they make behind closed doors." He added that "it is now abundantly clear that we need federal transparency laws" that would require online platforms to make data available to independent researchers.

This article originally appeared on Engadget at https://www.engadget.com/judge-dismisses-xs-lawsuit-against-anti-hate-group-173048754.html?src=rss