Meta needs updated rules for sexually explicit deepfakes, Oversight Board says

Meta’s Oversight Board is urging the company to update its rules around sexually explicit deepfakes. The board made the recommendations as part of its decision in two cases involving AI-generated images of public figures.

The cases stem from two user appeals over AI-generated images of public figures, though the board declined to name the individuals. One post, which originated on Instagram, depicted a nude Indian woman. The post was reported to Meta but the report was automatically closed after 48 hours, as was a subsequent user appeal. The company eventually removed the post after attention from the Oversight Board, which nonetheless overturned Meta’s original decision to leave the image up.

The second post, which was shared to a Facebook group dedicated to AI art, showed “an AI-generated image of a nude woman with a man groping her breast.” Meta automatically removed the post because it had been added to an internal system that can identify images that have been previously reported to the company. The Oversight Board found that Meta was correct to have taken the post down.

In both cases, the Oversight Board said the AI deepfakes violated the company’s rules barring “derogatory sexualized photoshop” images. But in its recommendations to Meta, the Oversight Board said the current language used in these rules is outdated and may make it more difficult for users to report AI-made explicit images.

Instead, the board says Meta should update its policies to make clear that the company prohibits non-consensual explicit images that are AI-made or manipulated. “Much of the non-consensual sexualized imagery spread online today is created with generative AI models that either automatically edit existing images or create entirely new ones,” the board writes. “Meta should ensure that its prohibition on derogatory sexualized content covers this broader array of editing techniques, in a way that is clear to both users and the company’s moderators.”

The board also called out Meta’s practice of automatically closing user appeals, which it said could have “significant human rights impacts” on users. However, the board said it didn’t have “sufficient information” about the practice to make a recommendation.

The spread of explicit AI images has become an increasingly prominent issue as “deepfake porn” has become a more widespread form of online harassment in recent years. The board’s decision comes one day after the US Senate unanimously passed a bill cracking down on explicit deepfakes. If passed into law, the measure would allow victims to sue the creators of such images for as much as $250,000.

The cases aren’t the first time the Oversight Board has pushed Meta to update its rules for AI-generated content. In another high-profile case, the board investigated a maliciously edited video of President Joe Biden. The case ultimately resulted in Meta revamping its policies around how AI-generated content is labeled.

This article originally appeared on Engadget at https://www.engadget.com/meta-needs-updated-rules-for-sexually-explicit-deepfakes-oversight-board-says-100005969.html?src=rss

Meta takes down 63,000 Instagram accounts linked to extortion scams

Meta has taken down tens of thousands of Instagram accounts from Nigeria as part of a massive crackdown on sextortion scams. The accounts primarily targeted adult men in the United States, but some also targeted minors, Meta said in an update.

The takedowns are part of a larger effort by Meta to combat sextortion scams on its platform in recent months. Earlier this year, the company added a safety feature in Instagram messages to automatically detect nudity and warn users about potential blackmail scams. The company also provides in-app resources and safety tips about such scams.

According to Meta, the recent takedowns included 2,500 accounts that were linked to a group of about 20 people who worked together to carry out sextortion scams. The company also took down thousands of accounts and groups on Facebook that provided tips and other advice, including scripts and fake images, for would-be sextortionists. Those accounts were linked to the Yahoo Boys, a group of “loosely organized cybercriminals operating largely out of Nigeria that specialize in different types of scams,” Meta said.

Meta has come under particular scrutiny for not doing enough to protect teens from sextortion on its apps. During a Senate hearing earlier this year, Senator Lindsey Graham pressed Mark Zuckerberg on whether the parents of a child who died by suicide after falling victim to such a scam should be able to sue the company.

Though the company said that the “majority” of the scammers it uncovered in its latest takedowns targeted adults, it confirmed that some of the accounts had targeted minors as well and that those accounts had also been reported to the National Center for Missing and Exploited Children (NCMEC).

This article originally appeared on Engadget at https://www.engadget.com/meta-takes-down-63000-instagram-accounts-linked-to-extortion-scams-175118067.html?src=rss

How false nostalgia inspired noplace, a Myspace-like app for Gen Z

Already fascinated with Y2K-era tech, some members of Gen Z have wondered what those early, simpler social networks were like. Now, they can get an idea thanks to a new app called noplace, which recreates some aspects of Myspace more than a decade after it fell from its perch as the most-visited site in the US.

The app officially launched earlier this month and briefly hit the No. 1 spot in Apple’s App Store. Dreamed up by Gen Z founder Tiffany Zhong, noplace bills itself as both a throwback and an alternative to mainstream social media algorithms and the creator culture that comes with them. “I missed how social media used to be back in the day … where it was actually social, people would post random updates about their life,” Zhong tells Engadget. “You kind of had a sense of where people were in terms of time and space.”

Though Zhong says she never got to experience Myspace firsthand — she was in elementary school during its early 2000s peak — noplace manages to nail many of the platform’s signature elements. Each user starts with a short profile where they can add personal details like their relationship status and age, as well as a free-form “about me” section. Users can also share their interests and detail what they’re currently watching, playing, reading and listening to. And, yes, they can embed song clips. There’s even a “top 10” for highlighting your best friends (unclear if Gen Z is aware of how much trauma that particular Myspace feature inflicted on my generation).

Myspace, of course, was at its height years before smartphone apps with a unified “design language” became the dominant medium for browsing social media. But the highly customizable noplace profiles still manage to capture the vibe of the bespoke HTML and clashing color schemes that distinguished so many Myspace pages and websites on the early 2000s internet.

There are other familiar features. All new users are automatically friends with Zhong, which she confirms is a nod to Tom Anderson, otherwise known as “Myspace Tom.” And the app encourages users to add their interests, called “stars,” and search for like-minded friends.

Despite the many similarities — the app was originally named “nospace” — Zhong says noplace is about more than just recreating the look and feel of Myspace. The app has a complicated gamification scheme, where users are rewarded with in-app badges for reaching different “levels” as they use the app more. This system isn’t really explained in the app — Zhong says it’s intentionally “vague” — but levels loosely correspond to different actions like writing on friends’ walls and interacting with other users’ posts. There’s also a massive Twitter-like central feed where users can blast out quick updates to everyone else on the app.

It can feel a bit chaotic, but early adopters are already using it in some unexpected ways, according to Zhong. “Around 20% in the past week of posts have been questions,” she says, comparing it to the trend of Gen Z using TikTok and YouTube as a search engine. “The vision for what we're building is actually becoming a social search engine. Everyone thinks it's like a social network, but because people are asking questions already … we're building features where you can ask questions and you can get crowdsourced responses.”

That may sound ambitious for a (so far) briefly-viral social app, but noplace has its share of influential backers. Reddit founder Alexis Ohanian is among the company’s investors. And Zhong herself once made headlines in her prior role as a teenage analyst at a prominent VC firm.

For now, though, noplace feels more to me like a Myspace-inspired novelty, though I’m admittedly not the target demographic. But, as someone who was a teenager on actual Myspace, I often think that I’m grateful my teen years came long before Instagram or TikTok. Not because Myspace was simpler than today’s social media, but because logging off was so much easier.

Zhong sees the distinction a little differently, not as a matter of dial-up connections enforcing a separation between on and offline, but a matter of prioritizing self-expression over clout. “You're just chasing follower count versus being your true self,” Zhong says. “It makes sense how social networks have evolved that way, but it's media platforms. It's not a social network anymore.”

This article originally appeared on Engadget at https://www.engadget.com/how-false-nostalgia-inspired-noplace-a-myspace-like-app-for-gen-z-163813099.html?src=rss

Apple blog TUAW returns as an AI content farm

The Unofficial Apple Weblog (TUAW) has come back online nearly a decade after shutting down. But the once-venerable source of Apple news appears to have been transformed by its new owners into an AI-generated content farm.

Over the past week, the site, which ceased operations in 2015, began publishing “new” articles, many of which appear to be nearly identical to content published by MacRumors and other publications. But those posts bear the bylines of writers who last worked for TUAW more than a decade ago. The site also has an author page featuring the names of former writers along with photos that appear to be AI-generated.

Christina Warren, who last wrote for TUAW in 2009, flagged the sketchy tactic in a post on Threads. “Someone bought the TUAW domain, populated it with AI-generated slop, and then reused my name from a job I had when I was 21 years old to try to pull some SEO scam that won’t even work in 2024 because Google changed its algo,” she wrote.

Originally started in 2004, TUAW was shut down by AOL in 2015. Much of the site’s original archive can still be found on Engadget. Yahoo, which owns Engadget, sold the TUAW domain in 2024 to an entity called “Web Orange Limited,” according to a statement on TUAW’s website.

The sale, notably, did not include the TUAW archive. But, it seems that Web Orange Limited found a convenient (if legally dubious) way around that. “With a commitment to revitalize its legacy, the new team at Web Orange Limited meticulously rewrote the content from archived versions available on archive.org, ensuring the preservation of TUAW’s rich history while updating it to meet modern standards and relevance,” the site’s about page states.

TUAW doesn’t say if AI was used in those “rewrites,” but a comparison between the original archive on Engadget and the “rewritten” content on TUAW suggests that Web Orange Limited put little effort into the task. “The article ‘rewrites’ aren’t even assigned to the correct names,” Warren tells Engadget. “It has stuff for me going back to 2004. I didn’t start writing for the site until 2007.”

TUAW didn’t immediately respond to emailed questions about its use of AI or why it was using the bylines of former writers with AI-generated profile photos. Yahoo didn't immediately respond to a request for comment. 

Update July 10, 2024, 11:05 AM ET: After this story was published, the TUAW website updated its author pages to remove many of the names of former staffers. Many have been swapped with generic-sounding names. "Christina Warren" has been changed to "Mary Brown." TUAW still hasn't responded to questions. 

This article originally appeared on Engadget at https://www.engadget.com/apple-blog-tuaw-returns-as-an-ai-content-farm-225326136.html?src=rss

What Meta should change about Threads, one year in

It’s been a year since Meta pushed out Threads in an attempt to take on the platform now known as X. At the time, Mark Zuckerberg said that he hoped it would turn into “a public conversations app with 1 billion+ people on it.”

Meta’s timing was good. Threads launched at a particularly chaotic moment for Twitter, when many people were seeking out alternatives. Threads saw 30 million sign-ups in its first day and the app has since grown to 175 million monthly users, according to Zuckerberg. (X has 600 million monthly users, according to Elon Musk.)

But the earliest iteration of Threads still felt a little bit broken. There was no web version, and many features were missing. The company promised interoperability with ActivityPub, the open-source standard that powers Mastodon and other apps in the fediverse, but integration remains minimal.

One year later, it’s still not really clear what Threads is actually for. Its leader has said that "the goal isn’t to replace Twitter” but to create a “public square” for Instagram users and a “less angry place for conversations.” But the service itself still has a number of issues that prevent it from realizing that vision. If Meta really wants to make that happen, here’s what it should change.

If you follow me on Threads, then you probably already know this is my top complaint. But Meta desperately needs to fix the algorithm that powers Threads’ default “For You” feed. The algorithmic feed, which is the default view in both the app and website, is painfully slow. It often surfaces days-old posts, even during major, newsworthy moments when many people are posting about the same topic.

It’s so bad it’s become a running meme to post something along the lines of “I can’t wait to read about this on my ‘For You’ feed tomorrow,” every time there’s a major news event or trending story.

The algorithmic feed is also downright bizarre. For a platform that was built off of Instagram, an app that has extremely fine-tuned recommendations and more than a decade of data about the topics I’m interested in, Threads appears to use none of it. Instead, it has a strange preference for intense personal stories from accounts I’m entirely unconnected to.

In the last year, I’ve seen countless multi-part Threads posts from complete strangers detailing childhood abuse, eating disorders, chronic illnesses, domestic violence, pet loss and other unimaginable horrors. These are not posts I’m seeking out by any means, yet Meta’s algorithm shoves them to the top of my feed.

I’ve aggressively used Threads' swipe gestures to try to rid my feed of excessive trauma dumping, and it’s helped to some extent. But it hasn’t cut down on the number of strange posts I see from completely random individuals. At this moment the top two posts in my feed are from an event planner offering to share wedding tips and a woman describing a phone call from her health insurance company. (Both posts are 12 hours old.) These types of posts have led to blogger Max Read dubbing Threads the “gas leak social network” because they make it feel as if everyone is “suffering some kind of minor brain damage.”

Look, I get why Meta has been cautious when it comes to content moderation on Threads. The company doesn’t exactly have a great track record on issues like extremism, health misinformation or genocide-inciting hate speech. It’s not surprising they would want to avoid similar headlines about Threads.

But if Meta wants Threads to be a “public square,” it can’t preemptively block searches for topics like COVID-19 and vaccines just because they are “potentially sensitive.” (Instagram head Adam Mosseri claimed this measure was “temporary” last October.) If Meta wants Threads to be a “public square,” it shouldn’t automatically throttle political content from users’ recommendations; and Threads’ leaders shouldn’t assume that users don’t want to see news.

A year in, it’s painfully clear that a platform like Threads is hamstrung without a proper direct messaging feature. For some reason, Threads’ leaders, especially Mosseri, have been adamantly opposed to creating a separate inbox for the app.

Instead, users hoping to privately connect with someone on Threads are forced to switch over to Instagram and hope the person they are trying to reach accepts new message requests. There is an in-app way to send a Threads post to an Instagram friend but this depends on you already being connected on Instagram.

Exactly why Threads can’t have its own messaging feature isn’t clear. Mosseri has suggested that it doesn’t make sense to build a new inbox for the app, but that ignores the fact that many people use Instagram and Threads very differently. Which brings me to…

Meta has said that the reason why it was able to get Threads out the door so quickly was largely thanks to Instagram. Threads was created using a lot of Instagram’s code and infrastructure, which also helped the company get tens of millions of people to sign up for the app on day one.

But continuing to require an Instagram account to use Threads makes little sense a year on. For one, it shuts out a not-insignificant number of people who may be interested in Threads but don’t want to be on Instagram.

There’s also the fact that the apps, though they share some design elements, are completely different kinds of services. And many people, myself included, use Instagram and Threads very differently.

A “public square” platform like Threads works best for public-facing accounts where conversations can have maximum visibility. But most people I know use their Instagram accounts for personal updates, like family photos. And while you can have different visibility settings for each app, you shouldn’t be forced to link the two accounts. This also means that if you want to use Threads anonymously, you would need to create an entirely new Instagram account to serve as a login for the corresponding Threads account.

It seems that Meta is at least considering this. Mosseri said in an interview with Platformer that the company is “working on things like Threads-only accounts” and wants the app to become “more independent.”

These aren’t the only factors that will determine whether Threads will be, as Zuckerberg has speculated, Meta’s next 1 billion-user app. Meta will, eventually, need to make money from the service, which is currently advertising-free. But before Meta’s multibillion-dollar ad machine can be pointed at Threads, the company will need to better explain who its newest app is actually for.

This article originally appeared on Engadget at https://www.engadget.com/what-meta-should-change-about-threads-one-year-in-173036784.html?src=rss


Meta is changing its policy for the most-moderated word on its platforms

Meta is changing a long-running policy regarding the Arabic word “shaheed,” which has been described as the most-moderated word on the company’s apps. The company said in an update to the Oversight Board that use of the word alone would no longer result in a post’s removal.

The Oversight Board had criticized the company for a “blanket ban” on the word, which is often translated as “martyr,” though, as the board noted, it can have multiple meanings. Meta’s previous policy, however, didn’t take that “linguistic complexity” into account, which resulted in a disproportionate number of takedowns over a commonly used word. Shaheed, the board said earlier this year, “accounts for more content removals under the Community Standards than any other single word or phrase,” across the company’s apps.

In its latest update, Meta said that it had tested a new approach to moderating the word following a recommendation from the board. “Initial results from our assessment indicate that continuing to remove content when ‘Shaheed’ is paired with otherwise violating content – or when the three signals of violence outlined by the Board are present – captures the most potentially harmful content without disproportionately impacting voice,” the company wrote.

The change should have a significant impact on Meta’s Arabic-speaking users, who, according to the board, have been unfairly censored as a result of the policy. “The Oversight Board welcomes Meta’s announcement today that it will implement the Board’s recommendations and introduce significant changes to an unfair policy that led to the censoring of millions of people across its platforms,” the board said in a statement. “The policy changes on how to moderate the Arabic word ‘shaheed’ should have a swift impact on when content is removed, with a more nuanced approach ending a blanket ban on a term that Meta has acknowledged is one of the most over-enforced on its platforms.”

This article originally appeared on Engadget at https://www.engadget.com/meta-is-changing-its-policy-for-the-most-moderated-word-on-its-platforms-185016272.html?src=rss

Meta changes its labels for AI-generated images after complaints from photographers

Meta is updating its “Made with AI” labels after widespread complaints from photographers that the company was mistakenly flagging non-AI-generated content. In an update, the company said that it will change the wording to “AI info” because the current labels “weren’t always aligned with people’s expectations and didn’t always provide enough context.”

The company introduced the “Made with AI” labels earlier this year after criticism from the Oversight Board about its “manipulated media” policy. Meta said that, like many of its peers, it would rely on “industry standard” signals to determine when generative AI had been used to create an image. However, it wasn’t long before photographers began noticing that Facebook and Instagram were applying the badge on images that hadn’t actually been created with AI. According to tests conducted by PetaPixel, photos edited with Adobe’s generative fill tool in Photoshop would trigger the label even if the edit was only to a “tiny speck.”

While Meta didn’t name Photoshop, the company said in its update that “some content that included minor modifications using AI, such as retouching tools, included industry standard indicators” that triggered the “Made with AI” badge. “While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information.”

Somewhat confusingly, the new “AI info” labels won’t actually have any details about what AI-enabled tools may have been used for the image in question. A Meta spokesperson confirmed that the contextual menu that appears when users tap on the badge will remain the same. That menu has a generic description of generative AI and notes that Meta may add the notice “when people share content that has AI signals our systems can read.”

This article originally appeared on Engadget at https://www.engadget.com/meta-changes-its-labels-for-ai-generated-images-after-complaints-from-photographers-191533416.html?src=rss

Supreme Court remands social media moderation cases over First Amendment issues

Two state laws that could upend the way social media companies handle content moderation are still in limbo after a Supreme Court ruling sent the challenges back to lower courts, vacating previous rulings. In a 9-0 decision in Moody v. NetChoice and NetChoice v. Paxton, the Supreme Court said that earlier rulings in lower courts had not properly evaluated the laws’ impact on the First Amendment.

The cases stem from two state laws, from Texas and Florida, which tried to impose restrictions on social media companies’ ability to moderate content. The Texas law, passed in 2021, allows users to sue large social media companies over alleged “censorship” of their political views. The Supreme Court suspended the law in 2022 following a legal challenge. Meanwhile, the Florida measure, also passed in 2021, attempted to impose fines on social media companies for banning politicians. That law has also been on hold pending legal challenges.

Both laws were challenged by NetChoice, an industry group that represents Meta, Google, X and other large tech companies. NetChoice argued that the laws were unconstitutional and would essentially prevent large platforms from performing any kind of content moderation. The Biden Administration also opposed both laws. In a statement, NetChoice called the decision “a victory for First Amendment rights online.”

In a decision authored by Justice Elena Kagan, the court said that lower court rulings in both cases “concentrated” on the issue of “whether a state law can regulate the content-moderation practices used in Facebook’s News Feed (or near equivalents).” But, she writes, “they did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications.”

Essentially, the usually divided court agreed that the First Amendment implications of the laws could have broad impacts on parts of these sites unaffected by algorithmic sorting or content moderation (like direct messages, for instance) as well as on speech in general. Analysis of those externalities, Kagan wrote, simply never occurred in the lower court proceedings. The decision to remand means that analysis should take place, and the case may come back before SCOTUS in the future.

“In sum, there is much work to do below on both these cases … But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved,” Kagan wrote. 

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-remands-social-media-moderation-cases-over-first-amendment-issues-154001257.html?src=rss

Bluesky ‘starter packs’ help new users find their way

One of the most difficult parts of joining a new social platform is finding relevant accounts to follow. That has proved especially challenging for people who quit X to try out one of the many Twitter-like services that have cropped up in the last couple of years. Now, Bluesky has an interesting solution to this dilemma. The service introduced “starter packs,” which aim to address that initial discovery problem by allowing existing users to build lists of accounts and custom feeds oriented around specific interests or themes.

In a blog post, the company described the feature as a way to “bring friends directly into your slice of Bluesky.” Users can curate up to 50 accounts and three custom feeds into a “starter pack.” That list can then be shared broadly on Bluesky or sent to new users via a QR code. Other users can then opt to follow an entire “pack” all at once, or scroll through to manually add the accounts and feeds they want to follow.

Though Bluesky seems to be positioning the feature as a tool for new users, it’s also useful for anyone who feels like their feed is getting a little stale or is curious about one of the many subcultures that have emerged on the platform. I’ve been on Bluesky for well over a year and I’ve already found some interesting starter packs, including Bluesky for Journalists (for people interested in news content) and Starter Cats (for accounts that post cat photos).

Starter packs also highlight another one of Bluesky’s more interesting features: custom feeds. The open-source service allows users to create their own algorithmic feeds that others can subscribe to and follow, a bit like a list on X. Custom feeds were introduced last year and have also been an important discovery tool. But scrolling a massive list of custom feeds can be overwhelming. Pairing these feeds with curated lists of users, though, is a much easier way to find ones related to topics you're actually interested in.

This article originally appeared on Engadget at https://www.engadget.com/bluesky-starter-packs-help-new-users-find-their-way-234322177.html?src=rss