What Meta should change about Threads, one year in

It’s been a year since Meta pushed out Threads in an attempt to take on the platform now known as X. At the time, Mark Zuckerberg said that he hoped it would turn into “a public conversations app with 1 billion+ people on it.”

Meta’s timing was good. Threads launched at a particularly chaotic moment for Twitter, when many people were seeking out alternatives. Threads saw 30 million sign-ups in its first day and the app has since grown to 175 million monthly users, according to Zuckerberg. (X has 600 million monthly users, according to Elon Musk.)

But the earliest iteration of Threads still felt a little bit broken. There was no web version, and many features were missing. The company promised interoperability with ActivityPub, the open-source standard that powers Mastodon and other apps in the fediverse, but that integration remains minimal.

One year later, it’s still not really clear what Threads is actually for. Its leader has said that “the goal isn’t to replace Twitter” but to create a “public square” for Instagram users and a “less angry place for conversations.” But the service itself still has a number of issues that prevent it from realizing that vision. If Meta really wants to make that happen, here’s what it should change.

If you follow me on Threads, then you probably already know this is my top complaint. But Meta desperately needs to fix the algorithm that powers Threads’ “For You” feed. The algorithmic feed, which is the default view in both the app and on the web, is painfully slow. It often surfaces days-old posts, even during major, newsworthy moments when many people are posting about the same topic.

It’s so bad that it’s become a running meme to post something along the lines of “I can’t wait to read about this on my ‘For You’ feed tomorrow” every time there’s a major news event or trending story.

The algorithmic feed is also downright bizarre. For a platform that was built off of Instagram, an app that has extremely fine-tuned recommendations and more than a decade of data about the topics I’m interested in, Threads appears to use none of it. Instead, it has a strange preference for intense personal stories from accounts I’m entirely unconnected to.

In the last year, I’ve seen countless multi-part Threads posts from complete strangers detailing childhood abuse, eating disorders, chronic illnesses, domestic violence, pet loss and other unimaginable horrors. These are not posts I’m seeking out by any means, yet Meta’s algorithm shoves them to the top of my feed.

I’ve aggressively used Threads’ swipe gestures to try to rid my feed of excessive trauma dumping, and it’s helped to some extent. But it hasn’t reduced the number of strange posts I see from completely random individuals. At this moment, the top two posts in my feed are from an event planner offering to share wedding tips and a woman describing a phone call from her health insurance company. (Both posts are 12 hours old.) These types of posts led blogger Max Read to dub Threads the “gas leak social network,” because they make it feel as if everyone is “suffering some kind of minor brain damage.”

Look, I get why Meta has been cautious when it comes to content moderation on Threads. The company doesn’t exactly have a great track record on issues like extremism, health misinformation or genocide-inciting hate speech. It’s not surprising that it would want to avoid similar headlines about Threads.

But if Meta wants Threads to be a “public square,” it can’t preemptively block searches for topics like COVID-19 and vaccines just because they are “potentially sensitive.” (Instagram head Adam Mosseri claimed this measure was “temporary” last October.) If Meta wants Threads to be a “public square,” it shouldn’t automatically filter political content out of users’ recommendations, and Threads’ leaders shouldn’t assume that users don’t want to see news.

A year in, it’s painfully clear that a platform like Threads is hamstrung without a proper direct messaging feature. For some reason, Threads’ leaders, especially Mosseri, have been adamantly opposed to creating a separate inbox for the app.

Instead, users hoping to privately connect with someone on Threads are forced to switch over to Instagram and hope the person they are trying to reach accepts new message requests. There is an in-app way to send a Threads post to an Instagram friend, but that depends on the two of you already being connected on Instagram.

Why Threads can’t have its own messaging feature isn’t exactly clear. Mosseri has suggested that it doesn’t make sense to build a new inbox for the app, but that ignores the fact that many people use Instagram and Threads very differently. Which brings me to…

Meta has said that the reason why it was able to get Threads out the door so quickly was largely thanks to Instagram. Threads was created using a lot of Instagram’s code and infrastructure, which also helped the company get tens of millions of people to sign up for the app on day one.

But continuing to require an Instagram account to use Threads makes little sense a year on. For one, it shuts out a not-insignificant number of people who may be interested in Threads but don’t want to be on Instagram.

There’s also the fact that the apps, though they share some design elements, are completely different kinds of services. And many people, myself included, use Instagram and Threads very differently.

A “public square” platform like Threads works best for public-facing accounts where conversations can have maximum visibility. But most people I know use their Instagram accounts for personal updates, like family photos. And while you can have different visibility settings for each app, you shouldn’t be forced to link the two accounts. This also means that if you want to use Threads anonymously, you would need to create an entirely new Instagram account to serve as a login for the corresponding Threads account.

It seems that Meta is at least considering this. Mosseri said in an interview with Platformer that the company is “working on things like Threads-only accounts” and wants the app to become “more independent.”

These aren’t the only factors that will determine whether Threads will be, as Zuckerberg has speculated, Meta’s next 1 billion-user app. Meta will, eventually, need to make money from the service, which is currently advertising-free. But before Meta’s multibillion-dollar ad machine can be pointed at Threads, the company will need to better explain who its newest app is actually for.


Meta is changing its policy for the most-moderated word on its platforms

Meta is changing a long-running policy regarding the Arabic word “shaheed,” which has been described as the most-moderated word on the company’s apps. The company said in an update to the Oversight Board that use of the word alone would no longer result in a post’s removal.

The Oversight Board had criticized the company for a “blanket ban” on the word, which is often translated as “martyr,” though, as the board noted, it can have multiple meanings. Meta’s previous policy, however, didn’t take that “linguistic complexity” into account, which resulted in a disproportionate number of takedowns over a commonly used word. Shaheed, the board said earlier this year, “accounts for more content removals under the Community Standards than any other single word or phrase,” across the company’s apps.

In its latest update, Meta said that it had tested a new approach to moderating the word following a recommendation from the board. “Initial results from our assessment indicate that continuing to remove content when ‘Shaheed’ is paired with otherwise violating content – or when the three signals of violence outlined by the Board are present – captures the most potentially harmful content without disproportionately impacting voice,” the company wrote.

The change should have a significant impact on Meta’s Arabic-speaking users, who, according to the board, have been unfairly censored as a result of the policy. “The Oversight Board welcomes Meta’s announcement today that it will implement the Board’s recommendations and introduce significant changes to an unfair policy that led to the censoring of millions of people across its platforms,” the board said in a statement. “The policy changes on how to moderate the Arabic word ‘shaheed’ should have a swift impact on when content is removed, with a more nuanced approach ending a blanket ban on a term that Meta has acknowledged is one of the most over-enforced on its platforms.”


Meta changes its labels for AI-generated images after complaints from photographers

Meta is updating its “Made with AI” labels after widespread complaints from photographers that the company was mistakenly flagging non-AI-generated content. In an update, the company said that it will change the wording to “AI info” because the current labels “weren’t always aligned with people’s expectations and didn’t always provide enough context.”

The company introduced the “Made with AI” labels earlier this year after criticism from the Oversight Board about its “manipulated media” policy. Meta said that, like many of its peers, it would rely on “industry standard” signals to determine when generative AI had been used to create an image. However, it wasn’t long before photographers began noticing that Facebook and Instagram were applying the badge on images that hadn’t actually been created with AI. According to tests conducted by PetaPixel, photos edited with Adobe’s generative fill tool in Photoshop would trigger the label even if the edit was only to a “tiny speck.”

While Meta didn’t name Photoshop, the company said in its update that “some content that included minor modifications using AI, such as retouching tools, included industry standard indicators” that triggered the “Made with AI” badge. “While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information,” Meta wrote.
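Meta hasn’t published the exact signals it checks, but the “industry standard indicators” at issue are generally understood to be provenance metadata, such as the IPTC digital source type values that tools like Photoshop embed in a file’s XMP block. As a rough illustration (not Meta’s actual detection pipeline), here’s a minimal Python sketch that scans a file for those values; the marker strings come from the IPTC vocabulary, and the assumption that a plain byte search suffices relies on XMP being stored as readable text inside the image.

```python
# Minimal sketch: scan an image file for IPTC digital source type values
# associated with generative AI. XMP metadata is embedded as plain text,
# so a simple byte search is enough for a rough check. This illustrates
# the general idea of metadata-based AI signals; it is not Meta's actual
# detection pipeline.
AI_MARKERS = [
    b"compositeWithTrainedAlgorithmicMedia",  # AI-assisted edits (e.g., generative fill)
    b"trainedAlgorithmicMedia",               # fully AI-generated media
]

def find_ai_signals(path: str) -> list[str]:
    """Return any AI-related metadata markers found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [marker.decode() for marker in AI_MARKERS if marker in data]

if __name__ == "__main__":
    hits = find_ai_signals("photo.jpg")
    print("AI signals found:", hits if hits else "none")
```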

Somewhat confusingly, the new “AI info” labels won’t actually have any details about what AI-enabled tools may have been used for the image in question. A Meta spokesperson confirmed that the contextual menu that appears when users tap on the badge will remain the same. That menu has a generic description of generative AI and notes that Meta may add the notice “when people share content that has AI signals our systems can read.”


Supreme Court remands social media moderation cases over First Amendment issues

Two state laws that could upend the way social media companies handle content moderation are still in limbo after a Supreme Court ruling sent the challenges back to lower courts, vacating previous rulings. In a 9-0 decision in Moody v. NetChoice and NetChoice v. Paxton, the Supreme Court said that earlier rulings in lower courts had not properly evaluated the laws’ impact on the First Amendment.

The cases stem from two state laws, from Texas and Florida, which tried to impose restrictions on social media companies’ ability to moderate content. The Texas law, passed in 2021, allows users to sue large social media companies over alleged “censorship” of their political views. The Supreme Court suspended the law in 2022 following a legal challenge. Meanwhile, the Florida measure, also passed in 2021, attempted to impose fines on social media companies for banning politicians. That law has also been on hold pending legal challenges.

Both laws were challenged by NetChoice, an industry group that represents Meta, Google, X and other large tech companies. NetChoice argued that the laws were unconstitutional and would essentially prevent large platforms from performing any kind of content moderation. The Biden Administration also opposed both laws. In a statement, NetChoice called the decision “a victory for First Amendment rights online.”

In a decision authored by Justice Elena Kagan, the court said that lower court rulings in both cases “concentrated” on the issue of “whether a state law can regulate the content-moderation practices used in Facebook’s News Feed (or near equivalents).” But, she writes, “they did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications.”

Essentially, the usually divided court agreed that the First Amendment implications of the laws could have broad impacts on parts of these sites unaffected by algorithmic sorting or content moderation (like direct messages, for instance) as well as on speech in general. Analysis of those externalities, Kagan wrote, simply never occurred in the lower court proceedings. The decision to remand means that analysis should take place, and the case may come back before SCOTUS in the future.

“In sum, there is much work to do below on both these cases … But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved,” Kagan wrote. 


Bluesky ‘starter packs’ help new users find their way

One of the most difficult parts of joining a new social platform is finding relevant accounts to follow. That has proved especially challenging for people who quit X to try out one of the many Twitter-like services that have cropped up in the last couple of years. Now, Bluesky has an interesting solution to this dilemma. The service introduced “starter packs,” which aim to address that initial discovery problem by allowing existing users to build lists of accounts and custom feeds oriented around specific interests or themes.

In a blog post, the company described the feature as a way to “bring friends directly into your slice of Bluesky.” Users can curate up to 50 accounts and three custom feeds into a “starter pack.” That list can then be shared broadly on Bluesky or sent to new users via a QR code. Other users can then opt to follow an entire “pack” all at once, or scroll through to manually add the accounts and feeds they want to follow.

[Image: A Bluesky starter pack (Bluesky)]
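Under the hood, a starter pack appears to be just another record in its creator’s AT Protocol repository, pointing at a curated list of accounts and up to three feed generators. The sketch below is an approximation for illustration only: the record type and field names are assumptions based on Bluesky’s lexicon naming conventions, and the URIs are hypothetical.

```json
{
  "$type": "app.bsky.graph.starterpack",
  "name": "Starter Cats",
  "description": "Accounts that post cat photos",
  "list": "at://did:plc:example/app.bsky.graph.list/3kabc",
  "feeds": [
    { "uri": "at://did:plc:example/app.bsky.feed.generator/cat-pics" }
  ],
  "createdAt": "2024-06-26T00:00:00Z"
}
```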

Though Bluesky seems to be positioning the feature as a tool for new users, it’s also useful for anyone who feels like their feed is getting a little stale or has been curious about one of the many subcultures that have emerged on the platform. I’ve been on Bluesky for well over a year and I’ve already found some interesting starter packs, including Bluesky for Journalists (for people interested in news content) and Starter Cats (for accounts that post cat photos).

Starter packs also highlight another one of Bluesky’s more interesting features: custom feeds. The open-source service allows users to create their own algorithmic feeds that others can subscribe to and follow, a bit like a list on X. Custom feeds were introduced last year and have also been an important discovery tool. But scrolling a massive list of custom feeds can be overwhelming. Pairing these feeds with curated lists of users, though, is a much easier way to find ones related to topics you're actually interested in.
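To make the mechanics concrete: a custom feed is backed by a small web service, called a feed generator, that returns an ordered “skeleton” of post URIs, which Bluesky’s servers then hydrate into full posts for anyone subscribed. Here’s a minimal sketch of that endpoint in Python, assuming Flask is installed; the post URIs are hypothetical, and a real generator would also need to publish a DID document and register a feed record.

```python
# A minimal sketch of a Bluesky custom feed generator endpoint.
# Bluesky's servers call this and hydrate the returned post URIs
# into full posts; everything here is illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical storage: post URIs your algorithm has selected, newest first.
CURATED_POSTS = [
    "at://did:plc:example/app.bsky.feed.post/3kabc123",
    "at://did:plc:example/app.bsky.feed.post/3kabc122",
]

@app.get("/xrpc/app.bsky.feed.getFeedSkeleton")
def get_feed_skeleton():
    # Page through the curated list using a simple integer cursor.
    limit = min(int(request.args.get("limit", 50)), 100)
    cursor = int(request.args.get("cursor", 0))
    page = CURATED_POSTS[cursor:cursor + limit]
    body = {"feed": [{"post": uri} for uri in page]}
    if cursor + limit < len(CURATED_POSTS):
        body["cursor"] = str(cursor + limit)
    return jsonify(body)

if __name__ == "__main__":
    app.run(port=8080)
```

Because the generator only returns URIs, the ranking logic behind it can be anything from a hand-curated list, as here, to a full recommendation model.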


Meta’s Oversight Board made just 53 decisions in 2023

The Oversight Board has published its latest annual report looking at its influence on Meta and ability to shift the policies that govern Facebook and Instagram. The board says that in 2023 it received 398,597 appeals, the vast majority of which came from Facebook users. But it took on only a tiny fraction of those cases, issuing a total of 53 decisions.

The board suggests, however, that the cases it selects can have an outsize impact on Meta’s users. For example, it credits its work for influencing improvements to Meta’s strike system and the “account status” feature that helps users check if their posts have violated any of the company’s rules.

Sussing out the board’s overall influence, though, is more complicated. The group says that between January of 2021 and May of 2024, it sent a total of 266 recommendations to Meta. Of those, the company has fully or partially implemented 75 and reported “progress” on 81. The remaining 105 have been declined, “omitted or reframed,” or else Meta has claimed some level of implementation but hasn’t offered proof to the board. (Another five recommendations are currently awaiting a response.) Those numbers raise some questions about how much Meta is willing to change in response to the board it created.

[Image: The Oversight Board’s tally of how Meta has responded to its recommendations (Oversight Board)]

Notably, the report has no criticism for Meta and offers no analysis of Meta’s efforts (or lack thereof) to comply with its recommendations. The report calls out a case in which it recommended that Meta suspend the former prime minister of Cambodia for six months, noting that it overturned the company’s decision to leave up a video that could have incited violence. But the report makes no mention of the fact that Meta declined to suspend the former prime minister’s account and declined to further clarify its rules for public figures.

The report also hints at thorny topics the board may take on in the coming months. It mentions that it wants to look at content “demotion,” or what some Facebook and Instagram users may call “shadowbans” (the term is a loaded one for Meta, which has repeatedly denied that its algorithms intentionally punish users for no reason). “One area we are interested in exploring is demoted content, where a platform limits a post’s visibility without telling the user,” the Oversight Board writes.

For now, it’s not clear exactly how the group could tackle the issue. The board’s purview currently allows it to weigh in on specific pieces of content that Meta has removed or left up after a user appeal. But it’s possible the board could find another way into the issue. A spokesperson for the Oversight Board notes that the group expressed concern about demoted content in its opinion on content related to the Israel-Hamas war. “This is something the board would like to further explore as Meta’s decisions around demotion are pretty opaque,” the spokesperson said.


A Meta ‘error’ broke the political content filter on Threads and Instagram

Earlier this year, Meta made the controversial decision to automatically limit political content in users’ recommendations on Threads and Instagram by default. The company said that it didn’t want to “proactively amplify” political posts and that users could opt in via their Instagram settings if they did want to see such content.

But, it turns out, Meta continued to limit political content even for users who had opted in to seeing it. An unspecified “error” apparently caused the “political content” toggle — already buried several layers deep in Instagram’s settings menu — to revert to the “limit” setting each time the app closed. Political content, according to Meta, “is likely to mention governments, elections, or social topics that affect a group of people and/or society at large.”


The issue was flagged by Threads users, including Democratic strategist Keith Edwards, and confirmed by Engadget. It’s unclear how long the “error” was affecting users’ recommendations. “This was an error and should not have happened,” Meta spokesperson Andy Stone wrote on Threads. “We're working on getting it fixed.” Meta didn’t respond to questions about how long the setting had not been working properly.

The issue is likely to raise questions about Meta’s stance on political content. Though Threads is often compared to X, the company has taken an aggressive stance on content moderation, limiting the visibility of political content and outright blocking “potentially sensitive” topics, including anything related to COVID-19, from search results.

Stone later confirmed that the supposed bug had been fixed. "Earlier today, we identified an error in which people's selections in the Instagram political content settings tool mistakenly appeared to have reset even though no change had actually been made," he wrote on Threads. "The issue has now been fixed and we encourage people to check and make sure their settings reflect their preferences." 

Update June 26, 2024, 8:04PM ET: Added additional comments from Meta spokesperson Andy Stone.


Supreme Court ruling may allow officials to coordinate with social platforms again

The US Supreme Court has ruled on a controversial attempt by two states, Missouri and Louisiana, to stop Biden Administration officials and other government agencies from engaging with workers at social media companies about misinformation, election interference and other policies. Rather than set new guidelines on acceptable communication between these parties, the Court held that the plaintiffs lacked standing to bring the issue at all.

In Murthy v. Missouri, the states (as well as five individual social media users) alleged that, in the midst of the COVID pandemic and the 2020 election, officials at the CDC, FBI and other government agencies “pressured” Meta, Twitter and Google “to censor their speech in violation of the First Amendment.”

The Court wrote, in an opinion authored by Justice Barrett, that "the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. Here, at the preliminary injunction stage, they must show that they are likely to succeed in carrying that burden." She went on to describe this as "a tall order." 

Though a Louisiana District Court order blocking contact between social media companies and Biden Administration officials has been on hold, the case has still had a significant impact on relationships between these parties. Last year, Meta revealed that its security researchers were no longer receiving their usual briefings from the FBI or CISA (Cybersecurity and Infrastructure Security Agency) regarding foreign election interference. FBI officials had also warned that there were instances in which they discovered election interference attempts but didn’t warn social media companies due to additional layers of legal scrutiny implemented following the lawsuit. With today's ruling it seems possible such contact might now be allowed to continue. 

In part, it seems the Court was reluctant to rule on the case because of the potential for far-reaching First Amendment implications. Among the arguments made by the plaintiffs was a “right to listen” theory, which holds that social media users have a Constitutional right to engage with content. “This theory is startlingly broad,” Barrett wrote, “as it would grant all social-media users the right to sue over someone else’s censorship.” The opinion was joined by Justices Roberts, Sotomayor, Kagan, Kavanaugh and Jackson. Justice Alito dissented, and was joined by Justices Thomas and Gorsuch.

The case was one of a handful involving free speech and social media to come before the Supreme Court this term. The court is also set to rule on two linked cases involving state laws from Texas and Florida that could upend the way social media companies handle content moderation.


Threads can now show replies from Mastodon and other fediverse apps

Meta just made an important update for Threads users who are sharing posts to the fediverse. The company began allowing users to opt in to sharing their Threads posts to Mastodon and other ActivityPub-powered services back in March. But the integration has been fairly limited, with Threads users unable to view replies and most other interactions with their posts without switching over to a Mastodon client or another app.

That’s now changing. The Threads app will now be able to show replies and likes from Mastodon and other services, Meta announced. The change marks the first time Threads users who have opted into fediverse sharing will be able to see content that originated in the fediverse directly on Threads.
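This is possible because fediverse replies are ordinary ActivityPub traffic: when a Mastodon user responds to a federated Threads post, their server delivers a Create activity whose Note object points back at the original post via inReplyTo, which Threads can now render in-app. A simplified example of such a payload is below; the URLs are hypothetical, and real objects carry additional fields such as id, to and published.

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://mastodon.example/users/alice",
  "object": {
    "type": "Note",
    "attributedTo": "https://mastodon.example/users/alice",
    "content": "Great post!",
    "inReplyTo": "https://www.threads.net/@someuser/post/C8xyz"
  }
}
```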

There are still some limitations, though. Meta says that, frustratingly, Threads users won’t be able to respond directly to replies from users in the fediverse. It also notes that “some replies may not be visible,” so Threads’ notifications still won’t be the most reliable place to track your engagement.

Meta also announced that it’s expanding the fediverse sharing options to more users, with the feature live in more than 100 countries. (Instagram chief Adam Mosseri said the company is hoping to turn the fediverse beta features on everywhere “soon.”)

The changes are an important step for anyone who cares about the future of decentralized social media. Though Meta has been somewhat slow to deliver on its promises to support ActivityPub in Threads, the app has the potential to bring tens of millions of people into the fediverse.


Reddit puts AI scrapers on notice

Reddit has a warning for AI companies and other scrapers: play by our rules or get blocked. The company said in an update that it plans to change its robots.txt file, which implements the Robots Exclusion Protocol and tells automated crawlers which parts of the site they’re allowed to access.
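For context, robots.txt is a plain text file served from a site’s root (in this case, reddit.com/robots.txt) that lists, keyed by user agent, which paths crawlers may fetch. A hypothetical sketch of the kind of policy Reddit is describing, not its actual file:

```
# Hypothetical example, not Reddit's actual robots.txt.
# A crawler with a licensing agreement is allowed in:
User-agent: TrustedPartnerBot
Allow: /

# Everyone else is told to stay out:
User-agent: *
Disallow: /
```

The file is purely advisory, though; nothing technically prevents a crawler from ignoring it, which is why Reddit is pairing it with active enforcement.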

The company said it will also continue to block and rate-limit crawlers and other bots that don’t have a prior agreement with the company. The changes, it said, shouldn’t affect “good faith actors,” like the Internet Archive and researchers.

Reddit’s notice comes shortly after multiple reports that Perplexity and other AI companies regularly bypass websites’ robots.txt protocol, which is used by publishers to tell web crawlers they don’t want their content accessed. Perplexity’s CEO, in a recent interview with Fast Company, said that the protocol is “not a legal framework.”

In a statement, a Reddit spokesperson told Engadget that it wasn’t targeting a particular company. “This update isn’t meant to single any one entity out; it’s meant to protect Reddit while keeping the internet open,” the spokesperson said. “In the next few weeks, we’ll be updating our robots.txt instructions to be as clear as possible: if you are using an automated agent to access Reddit, regardless of what type of company you are, you need to abide by our terms and policies, and you need to talk to us. We believe in the open internet, but we do not believe in the misuse of public content.”

It’s not the first time the company has taken a hard line on data access. Reddit cited AI companies’ use of its platform when it began charging for its API last year. Since then, it has struck licensing deals with some AI companies, including Google and OpenAI. Those agreements allow AI firms to train their models on Reddit’s archive and have been a significant source of revenue for the newly public Reddit. The “talk to us” part of that statement is likely a not-so-subtle reminder that the company is no longer in the business of handing out its content for free.
