Supreme Court remands social media moderation cases over First Amendment issues

Two state laws that could upend the way social media companies handle content moderation are still in limbo after a Supreme Court ruling sent the challenges back to lower courts, vacating previous rulings. In a 9-0 decision in Moody v. NetChoice and NetChoice v. Paxton, the Supreme Court said that earlier rulings in lower courts had not properly evaluated the laws’ impact on the First Amendment.

The cases stem from two state laws, from Texas and Florida, which tried to impose restrictions on social media companies’ ability to moderate content. The Texas law, passed in 2021, allows users to sue large social media companies over alleged “censorship” of their political views. The Supreme Court suspended the law in 2022 following a legal challenge. Meanwhile, the Florida measure, also passed in 2021, attempted to impose fines on social media companies for banning politicians. That law has also been on hold pending legal challenges.

Both laws were challenged by NetChoice, an industry group that represents Meta, Google, X and other large tech companies. NetChoice argued that the laws were unconstitutional and would essentially prevent large platforms from performing any kind of content moderation. The Biden Administration also opposed both laws. In a statement, NetChoice called the decision “a victory for First Amendment rights online.”

In a decision authored by Justice Elena Kagan, the court said that lower court rulings in both cases “concentrated” on the issue of “whether a state law can regulate the content-moderation practices used in Facebook’s News Feed (or near equivalents).” But, she wrote, “they did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications.”

Essentially, the usually divided court agreed that the First Amendment implications of the laws could have broad impacts on parts of these sites unaffected by algorithmic sorting or content moderation (like direct messages, for instance) as well as on speech in general. Analysis of those externalities, Kagan wrote, simply never occurred in the lower court proceedings. The decision to remand means that analysis should take place, and the case may come back before SCOTUS in the future.

“In sum, there is much work to do below on both these cases … But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved,” Kagan wrote. 

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-remands-social-media-moderation-cases-over-first-amendment-issues-154001257.html?src=rss

Bluesky ‘starter packs’ help new users find their way

One of the most difficult parts of joining a new social platform is finding relevant accounts to follow. That has proved especially challenging for people who quit X to try out one of the many Twitter-like services that have cropped up in the last couple of years. Now, Bluesky has an interesting solution: “starter packs,” which aim to address that initial discovery problem by allowing existing users to build lists of accounts and custom feeds oriented around specific interests or themes.

In a blog post, the company described the feature as a way to “bring friends directly into your slice of Bluesky.” Users can curate up to 50 accounts and three custom feeds into a “starter pack.” That list can then be shared broadly on Bluesky or sent to new users via a QR code. Other users can then opt to follow an entire “pack” all at once, or scroll through to manually add the accounts and feeds they want to follow.

Bluesky starter pack (Image: Bluesky)

Though Bluesky seems to be positioning the feature as a tool for new users, it’s also useful for anyone who feels like their feed is getting a little stale or has been curious about one of the many subcultures that have emerged on the platform. I’ve been on Bluesky for well over a year and I’ve already found some interesting starter packs, including Bluesky for Journalists (for people interested in news content) and Starter Cats (for accounts that post cat photos).

Starter packs also highlight another one of Bluesky’s more interesting features: custom feeds. The open-source service allows users to create their own algorithmic feeds that others can subscribe to and follow, a bit like a list on X. Custom feeds were introduced last year and have also been an important discovery tool. But scrolling a massive list of custom feeds can be overwhelming. Pairing these feeds with curated lists of users, though, is a much easier way to find ones related to topics you're actually interested in.

This article originally appeared on Engadget at https://www.engadget.com/bluesky-starter-packs-help-new-users-find-their-way-234322177.html?src=rss

Meta’s Oversight Board made just 53 decisions in 2023

The Oversight Board has published its latest annual report looking at its influence on Meta and ability to shift the policies that govern Facebook and Instagram. The board says that in 2023 it received 398,597 appeals, the vast majority of which came from Facebook users. But it took on only a tiny fraction of those cases, issuing a total of 53 decisions.

The board suggests, however, that the cases it selects can have an outsize impact on Meta’s users. For example, it credits its work for influencing improvements to Meta’s strike system and the “account status” feature that helps users check if their posts have violated any of the company’s rules.

Sussing out the board’s overall influence, though, is more complicated. The group says that between January of 2021 and May of 2024, it has sent a total of 266 recommendations to Meta. Of those, the company has fully or partially implemented 75, and reported “progress” on 81. The rest have been declined, “omitted or reframed,” or else Meta has claimed some level of implementation but hasn’t offered proof to the board. (There are five recommendations currently awaiting a response.) Those numbers raise some questions about how much Meta is willing to change in response to the board it created.

The Oversight Board's tally of how Meta has responded to its recommendations (Image: Oversight Board)

Notably, the report has no criticism for Meta and offers no analysis of Meta’s efforts (or lack thereof) to comply with its recommendations. The report calls out a case in which it recommended that Meta suspend the former prime minister of Cambodia for six months, noting that it overturned the company’s decision to leave up a video that could have incited violence. But the report makes no mention of the fact that Meta declined to suspend the former prime minister’s account and declined to further clarify its rules for public figures.

The report also hints at thorny topics the board may take on in the coming months. It mentions that it wants to look at content “demotion,” or what some Facebook and Instagram users may call “shadowbans” (the term is a loaded one for Meta, which has repeatedly denied that its algorithms intentionally punish users for no reason). “One area we are interested in exploring is demoted content, where a platform limits a post’s visibility without telling the user,” the Oversight Board writes.

For now, it’s not clear exactly how the group could tackle the issue. The board’s purview currently allows it to weigh in on specific pieces of content that Meta has removed or left up after a user appeal. But it’s possible the board could find another way into the issue. A spokesperson for the Oversight Board notes that the group expressed concern about demoted content in its opinion on content related to the Israel-Hamas war. “This is something the board would like to further explore as Meta’s decisions around demotion are pretty opaque,” the spokesperson said.

This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-made-just-53-decisions-in-2023-100017750.html?src=rss

A Meta ‘error’ broke the political content filter on Threads and Instagram

Earlier this year, Meta made the controversial decision to automatically limit political content from users’ recommendations in Threads and Instagram by default. The company said that it didn’t want to “proactively amplify” political posts and that users could opt in via their Instagram settings if they did want to see such content.

But, it turns out, Meta continued to limit political content even for users who had opted in to seeing it. An unspecified “error” apparently caused the “political content” toggle — already buried several layers deep in Instagram’s settings menu — to revert to the “limit” setting each time the app closed. Political content, according to Meta, “is likely to mention governments, elections, or social topics that affect a group of people and/or society at large.”


The issue was flagged by Threads users, including Democratic strategist Keith Edwards, and confirmed by Engadget. It’s unclear how long the “error” was affecting users’ recommendations. “This was an error and should not have happened,” Meta spokesperson Andy Stone wrote on Threads. “We're working on getting it fixed.” Meta didn’t respond to questions about how long the setting had not been working properly.

The issue is likely to raise questions about Meta’s stance on political content. Though Threads is often compared to X, the company has taken an aggressive stance on content moderation, limiting the visibility of political content and outright blocking “potentially sensitive” topics, including anything related to COVID-19, from search results.

Stone later confirmed that the supposed bug had been fixed. "Earlier today, we identified an error in which people's selections in the Instagram political content settings tool mistakenly appeared to have reset even though no change had actually been made," he wrote on Threads. "The issue has now been fixed and we encourage people to check and make sure their settings reflect their preferences." 

Update June 26, 2024, 8:04PM ET: Added additional comments from Meta spokesperson Andy Stone.

This article originally appeared on Engadget at https://www.engadget.com/a-meta-error-broke-the-political-content-filter-on-threads-and-instagram-173020269.html?src=rss

Supreme Court ruling may allow officials to coordinate with social platforms again

The US Supreme Court has ruled on a controversial attempt by two states, Missouri and Louisiana, to bar Biden Administration officials and other government agencies from engaging with workers at social media companies about misinformation, election interference and other policies. Rather than set new guidelines on acceptable communication between these parties, the Court held that the plaintiffs lacked standing to bring the issue at all.

In Murthy v. Missouri, the states (as well as five individual social media users) alleged that, in the midst of the COVID pandemic and the 2020 election, officials at the CDC, FBI and other government agencies "pressured" Meta, Twitter and Google "to censor their speech in violation of the First Amendment."

The Court wrote, in an opinion authored by Justice Barrett, that "the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. Here, at the preliminary injunction stage, they must show that they are likely to succeed in carrying that burden." She went on to describe this as "a tall order." 

Though a Louisiana District Court order blocking contact between social media companies and Biden Administration officials has been on hold, the case has still had a significant impact on relationships between these parties. Last year, Meta revealed that its security researchers were no longer receiving their usual briefings from the FBI or CISA (Cybersecurity and Infrastructure Security Agency) regarding foreign election interference. FBI officials had also warned that there were instances in which they discovered election interference attempts but didn’t warn social media companies due to additional layers of legal scrutiny implemented following the lawsuit. With today's ruling, it seems possible such contact may now resume.

In part, it seems the Court was reluctant to rule on the case because of the potential for far-reaching First Amendment implications. Among the arguments made by the plaintiffs was a "right to listen" theory: that social media users have a constitutional right to engage with content. "This theory is startlingly broad," Barrett wrote, "as it would grant all social-media users the right to sue over someone else’s censorship." The opinion was joined by Chief Justice Roberts and Justices Sotomayor, Kagan, Kavanaugh and Jackson. Justice Alito dissented, joined by Justices Thomas and Gorsuch.

The case was one of a handful involving free speech and social media to come before the Supreme Court this term. The court is also set to rule on two linked cases involving state laws from Texas and Florida that could upend the way social media companies handle content moderation.

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-ruling-may-allow-officials-to-coordinate-with-social-platforms-again-144045052.html?src=rss

Threads can now show replies from Mastodon and other fediverse apps

Meta just made an important update for Threads users who are sharing posts to the fediverse. The company began allowing users to opt in to sharing their Threads posts to Mastodon and other ActivityPub-powered services back in March. But the integration has been fairly limited, with Threads users unable to view replies and most other interactions with their posts without switching over to a Mastodon client or another app.

That’s now changing. The Threads app will now be able to show replies and likes from Mastodon and other services, Meta announced. The change marks the first time Threads users who have opted into fediverse sharing will be able to see content that originated in the fediverse directly on Threads.

There are still some limitations, though. Meta says that, frustratingly, Threads users won’t be able to respond directly to replies from users in the fediverse. It also notes that “some replies may not be visible,” so Threads’ notifications still won’t be the most reliable place to track your engagement.

Meta also announced that it’s expanding the fediverse sharing options to more users, with the feature live in more than 100 countries. (Instagram chief Adam Mosseri said the company is hoping to turn the fediverse beta features on everywhere “soon.”)

The changes are an important step for anyone who cares about the future of decentralized social media. Though Meta has been somewhat slow to deliver on its promises to support ActivityPub in Threads, the app has the potential to bring tens of millions of people into the fediverse.

This article originally appeared on Engadget at https://www.engadget.com/threads-can-now-show-replies-from-mastodon-and-other-fediverse-apps-224127213.html?src=rss

Reddit puts AI scrapers on notice

Reddit has a warning for AI companies and other scrapers: play by our rules or get blocked. The company said in an update that it plans to change its Robots Exclusion Protocol (robots.txt) file, which tells automated crawlers whether and how they’re permitted to access its platform.

The company said it will also continue to block and rate-limit crawlers and other bots that don’t have a prior agreement with the company. The changes, it said, shouldn’t affect “good faith actors,” like the Internet Archive and researchers.

Reddit’s notice comes shortly after multiple reports that Perplexity and other AI companies regularly bypass websites’ robots.txt protocol, which is used by publishers to tell web crawlers they don’t want their content accessed. Perplexity’s CEO, in a recent interview with Fast Company, said that the protocol is “not a legal framework.”
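Robots.txt is a voluntary convention rather than an enforcement mechanism: a site publishes rules per crawler user agent, and well-behaved crawlers check those rules before fetching pages. Here's a minimal sketch, using Python's standard urllib.robotparser and entirely hypothetical crawler names and rules (not Reddit's actual file), of what a policy that permits an archiving bot while disallowing everyone else looks like in practice:

```python
from urllib import robotparser

# Hypothetical robots.txt: permit one archiving bot, disallow all other crawlers.
# An empty Disallow line means "nothing is off-limits" for that user agent.
ROBOTS_TXT = """\
User-agent: archive.org_bot
Disallow:

User-agent: *
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler calls can_fetch() before requesting a URL.
print(parser.can_fetch("archive.org_bot", "https://www.reddit.com/r/news/"))  # True
print(parser.can_fetch("SomeAICrawler", "https://www.reddit.com/r/news/"))    # False
```

Because nothing forces a crawler to honor the answer, a stricter robots.txt only deters "good faith actors" — which is why Reddit pairs the file with rate limiting and outright blocking for everyone else.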

In a statement, a Reddit spokesperson told Engadget that it wasn’t targeting a particular company. “This update isn’t meant to single any one entity out; it’s meant to protect Reddit while keeping the internet open,” the spokesperson said. “In the next few weeks, we’ll be updating our robots.txt instructions to be as clear as possible: if you are using an automated agent to access Reddit, regardless of what type of company you are, you need to abide by our terms and policies, and you need to talk to us. We believe in the open internet, but we do not believe in the misuse of public content.”

It’s not the first time the company has taken a hard line when it comes to data access. The company cited AI companies’ use of its platform when it began charging for its API last year. Since then, it has struck licensing deals with some AI companies, including Google and OpenAI. The agreements allow AI firms to train their models on Reddit’s archive and have been a significant source of revenue for the newly public Reddit. The “talk to us” part of that statement is likely a not-so-subtle reminder that the company is no longer in the business of handing out its content for free.

This article originally appeared on Engadget at https://www.engadget.com/reddit-puts-ai-scrapers-on-notice-205734539.html?src=rss

Snapchat is making it harder for strangers to contact teens — again

Snapchat is, once again, beefing up its safety features to make it harder for strangers to contact teens in the app. The company is adding new warnings about "suspicious" contacts and preemptively blocking friend requests from accounts that may be linked to scams.

It’s not the first time Snap has tried to dissuade teen users from connecting with strangers in the app. The company says the latest warnings go a step further in that the alerts rely on “new and advanced signals” that indicate an account may be tied to a scammer. Likewise, Snap says it will block friend requests from users who have no mutual friends with the recipient and “a history of accessing Snapchat in locations often associated with scamming activity.” The app’s block feature is also getting an upgrade so that users who block someone will also automatically block new accounts made on the same device.

These updates, according to the company, will help address sextortion scams that often target teens across social media platforms, as well as other safety and privacy concerns. Snap, like many of its social media peers, has come under fire from lawmakers over teen safety issues, including sextortion scams and the ease with which drug dealers have been able to contact teens in the app. The latest update also just happens to come shortly after Rolling Stone published an exhaustive investigation into how Snapchat “helped fuel a teen-overdose epidemic across the country.”

The article cited specific features like Snapchat’s Snap Map, which allows users to share their current location with friends, and “quick add” suggestions, which surfaced friend recommendations. (The company began limiting “quick add” suggestions between teen and adult accounts in 2022.) And while teens can still opt in to the Snap Map location sharing, the company says it’s simplifying these settings so they’re easier to change and surfacing more “frequent reminders” about how they are sharing their whereabouts in the app.

This article originally appeared on Engadget at https://www.engadget.com/snapchat-is-making-it-harder-for-strangers-to-contact-teens--again-163824048.html?src=rss

Patreon is giving creators more tools to attract free subscribers

Patreon is continuing its push to expand beyond its roots as a paid membership platform. The company, which added new chat features and free membership options last year, is giving creators more ways to interact with their fans even if they aren’t paying subscribers.

The company says its creators have already seen more than 30 million sign-ups for free memberships, which allow fans to get updates and follow the work of creators and artists they like without committing to a monthly subscription. Now, creators will also be able to add non-paying members to Patreon’s Discord-like chats. Additionally, creators will be able to offer a live chat and custom countdown timer to tease new work.

For fans who aren’t yet paying for a membership, Patreon will add the ability for creators to sell access to past posts and collections so people will have a way to access previously paywalled content without committing to a recurring subscription. (The company added one-time purchases for digital products like podcast episodes last year.) Creators will also have the ability to offer limited-time gift subscriptions to fans.

Patreon countdowns (Image: Patreon)

For Patreon, the changes are meant to help creators become less reliant on platforms like Instagram, YouTube and TikTok, where engagement and views are often dependent on another company’s algorithm. At a time when platforms’ payouts to creators are reportedly dwindling — The Wall Street Journal reported last week that making a living as a creator has gotten significantly harder over the last year as dedicated creator funds shrink — Patreon is spinning its platform as a place where creators can connect with their “real fans” and actually make money.

“Creators want a place where people can sign up to see their future work… and then actually see it,” the company explains in a blog post. “They don’t want to keep chasing likes or follower counts in a constantly changing system they have no control over.”

This article originally appeared on Engadget at https://www.engadget.com/patreon-is-giving-creators-more-tools-to-attract-free-subscribers-130049968.html?src=rss

X is making live streaming a premium feature

X will soon be moving the ability to live stream behind its premium paywall, the company announced. The change will make X the only major social platform to charge for the feature, which is currently free on Facebook, Instagram, YouTube, Twitch and TikTok.

“Starting soon, only Premium subscribers will be able to livestream (create live video streams) on X,” the company said. “This includes going live from an encoder with X integration,” an apparent reference to X’s game streaming capabilities.

X didn’t offer an explanation for the change. The company has used additional features, like post editing, longform writing and ad-free feeds, to lure users to its paid subscriptions, but hasn’t typically moved existing, widely available features behind its paywall. X Premium subscriptions start at $3/month for the "basic" tier, and rise to $8/month for Premium and $16/month for Premium+.

There are, however, other signs that the Elon Musk-owned platform wants to charge for simple features. The company introduced a $1 annual charge for new accounts to have posting privileges in New Zealand and the Philippines. Though the company still describes the scheme as a test, Musk has suggested he wants to expand the fees to all new users.

This article originally appeared on Engadget at https://www.engadget.com/x-is-making-live-streaming-a-premium-feature-185151147.html?src=rss