A Meta ‘error’ broke the political content filter on Threads and Instagram

Earlier this year, Meta made the controversial decision to automatically limit political content in users’ recommendations on Threads and Instagram by default. The company said that it didn’t want to “proactively amplify” political posts and that users could opt in via their Instagram settings if they did want to see such content.

But it turns out that Meta continued to limit political content even for users who had opted in to seeing it. An unspecified “error” apparently caused the “political content” toggle — already buried several layers deep in Instagram's settings menu — to revert to the “limit” setting each time the app closed. Political content, according to Meta, “is likely to mention governments, elections, or social topics that affect a group of people and/or society at large.”

The issue was flagged by Threads users, including Democratic strategist Keith Edwards, and confirmed by Engadget. It’s unclear how long the “error” was affecting users’ recommendations. “This was an error and should not have happened,” Meta spokesperson Andy Stone wrote on Threads. “We're working on getting it fixed.” Meta didn’t respond to questions about how long the setting had not been working properly.

The issue is likely to raise questions about Meta’s stance on political content. Though Threads is often compared to X, the company has taken an aggressive stance on content moderation, limiting the visibility of political content and outright blocking “potentially sensitive” topics, including anything related to COVID-19, from search results.

Stone later confirmed that the supposed bug had been fixed. "Earlier today, we identified an error in which people's selections in the Instagram political content settings tool mistakenly appeared to have reset even though no change had actually been made," he wrote on Threads. "The issue has now been fixed and we encourage people to check and make sure their settings reflect their preferences." 

Update June 26, 2024, 8:04 PM ET: Added additional comments from Meta spokesperson Andy Stone.

This article originally appeared on Engadget at https://www.engadget.com/a-meta-error-broke-the-political-content-filter-on-threads-and-instagram-173020269.html?src=rss

Supreme Court ruling may allow officials to coordinate with social platforms again

The US Supreme Court has ruled on a controversial attempt by two states, Missouri and Louisiana, to bar Biden Administration officials and other government agencies from engaging with workers at social media companies about misinformation, election interference and other policies. Rather than set new guidelines on acceptable communication between these parties, the Court held that the plaintiffs lacked standing to bring the issue at all.

In Murthy v. Missouri, the states (as well as five individual social media users) alleged that, in the midst of the COVID pandemic and the 2020 election, officials at the CDC, FBI and other government agencies "pressured" Meta, Twitter and Google "to censor their speech in violation of the First Amendment."

The Court wrote, in an opinion authored by Justice Barrett, that "the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. Here, at the preliminary injunction stage, they must show that they are likely to succeed in carrying that burden." She went on to describe this as "a tall order." 

Though a Louisiana District Court order blocking contact between social media companies and Biden Administration officials has been on hold, the case has still had a significant impact on relationships between these parties. Last year, Meta revealed that its security researchers were no longer receiving their usual briefings from the FBI or CISA (Cybersecurity and Infrastructure Security Agency) regarding foreign election interference. FBI officials had also warned that there were instances in which they discovered election interference attempts but didn’t warn social media companies due to additional layers of legal scrutiny implemented following the lawsuit. With today's ruling it seems possible such contact might now be allowed to continue. 

In part, it seems the Court was reluctant to rule on the case because of the potential for far-reaching First Amendment implications. Among the arguments made by the plaintiffs was an assertion of a "right to listen" theory, that social media users have a Constitutional right to engage with content. "This theory is startlingly broad," Barrett wrote, "as it would grant all social-media users the right to sue over someone else’s censorship." The opinion was joined by Justices Roberts, Sotomayor, Kagan, Kavanaugh and Jackson. Justice Alito dissented, and was joined by Justices Thomas and Gorsuch.

The case was one of a handful involving free speech and social media to come before the Supreme Court this term. The court is also set to rule on two linked cases involving state laws from Texas and Florida that could upend the way social media companies handle content moderation.

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-ruling-may-allow-officials-to-coordinate-with-social-platforms-again-144045052.html?src=rss

Threads can now show replies from Mastodon and other fediverse apps

Meta just made an important update for Threads users who are sharing posts to the fediverse. The company began allowing users to opt in to sharing their Threads posts to Mastodon and other ActivityPub-powered services back in March. But the integration has been fairly limited, with Threads users unable to view replies and most other interactions with their posts without switching over to a Mastodon client or other app.

That’s now changing. The Threads app will now be able to show replies and likes from Mastodon and other services, Meta announced. The change marks the first time Threads users who have opted into fediverse sharing will be able to see content that originated in the fediverse directly on Threads.

There are still some limitations, though. Meta says that, frustratingly, Threads users won’t be able to respond directly to replies from users in the fediverse. It also notes that “some replies may not be visible,” so Threads’ notifications still won’t be the most reliable place to track your engagement.
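For context, cross-server replies like these travel as ActivityPub objects: a Mastodon reply is an ActivityStreams "Note" whose `inReplyTo` property points at the post being answered, which is what lets Threads associate a remote reply with a local post. Below is a minimal sketch of such an object as a Python dict; every URL is made up, and this illustrates the ActivityStreams vocabulary rather than Threads' actual internal payloads:

```python
import json

# A hypothetical ActivityStreams "Note" representing a Mastodon reply to a
# Threads post. The @context, type and inReplyTo fields come from the
# ActivityPub/ActivityStreams spec; the accounts and URLs here are invented.
reply = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "id": "https://mastodon.example/users/alice/statuses/1",
    "attributedTo": "https://mastodon.example/users/alice",
    "inReplyTo": "https://www.threads.net/@bob/post/abc123",  # the post being answered
    "content": "Great post!",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}

print(json.dumps(reply, indent=2))
```

A receiving server resolves `inReplyTo` to decide which thread the Note belongs to, which is why replies can surface on Threads even though they originated elsewhere.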

Meta also announced that it’s expanding the fediverse sharing options to more users, with the feature live in more than 100 countries. (Instagram chief Adam Mosseri said the company is hoping to turn the fediverse beta features on everywhere “soon.”)

The changes are an important step for anyone who cares about the future of decentralized social media. Though Meta has been somewhat slow to deliver on its promises to support ActivityPub in Threads, the app has the potential to bring tens of millions of people into the fediverse.

This article originally appeared on Engadget at https://www.engadget.com/threads-can-now-show-replies-from-mastodon-and-other-fediverse-apps-224127213.html?src=rss

Reddit puts AI scrapers on notice

Reddit has a warning for AI companies and other scrapers: play by our rules or get blocked. The company said in a notice that it plans to update its Robots Exclusion Protocol (robots.txt) file, which tells automated crawlers which parts of its platform they may access.

The company said it will also continue to block and rate-limit crawlers and other bots that don’t have a prior agreement with the company. The changes, it said, shouldn’t affect “good faith actors,” like the Internet Archive and researchers.

Reddit’s notice comes shortly after multiple reports that Perplexity and other AI companies regularly bypass websites’ robots.txt protocol, which is used by publishers to tell web crawlers they don’t want their content accessed. Perplexity’s CEO, in a recent interview with Fast Company, said that the protocol is “not a legal framework.”
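The Robots Exclusion Protocol is just a plain-text file of `User-agent`, `Allow` and `Disallow` rules served at `/robots.txt`, and compliance is voluntary — which is exactly what makes the reported bypasses possible. Python's standard library can evaluate such rules; the file below is a hypothetical example for illustration, not Reddit's actual robots.txt:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: one named crawler is allowed everywhere,
# while all other user agents are disallowed entirely.
rules = """\
User-agent: trusted-bot
Allow: /

User-agent: *
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A compliant crawler checks before fetching; a non-compliant one simply doesn't.
print(parser.can_fetch("trusted-bot", "https://example.com/r/python"))     # True
print(parser.can_fetch("random-scraper", "https://example.com/r/python"))  # False
```

Nothing in the protocol enforces the `False` result, which is why Reddit pairs the robots.txt update with active blocking and rate-limiting.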

In a statement, a Reddit spokesperson told Engadget that it wasn’t targeting a particular company. “This update isn’t meant to single any one entity out; it’s meant to protect Reddit while keeping the internet open,” the spokesperson said. “In the next few weeks, we’ll be updating our robots.txt instructions to be as clear as possible: if you are using an automated agent to access Reddit, regardless of what type of company you are, you need to abide by our terms and policies, and you need to talk to us. We believe in the open internet, but we do not believe in the misuse of public content.”

It’s not the first time the company has taken a hard line on data access. Reddit cited AI companies’ use of its platform when it began charging for its API last year. Since then, it has struck licensing deals with some AI companies, including Google and OpenAI. The agreements allow AI firms to train their models on Reddit’s archive and have been a significant source of revenue for the newly public Reddit. The “talk to us” part of that statement is likely a not-so-subtle reminder that the company is no longer in the business of handing out its content for free.

This article originally appeared on Engadget at https://www.engadget.com/reddit-puts-ai-scrapers-on-notice-205734539.html?src=rss

Snapchat is making it harder for strangers to contact teens — again

Snapchat is, once again, beefing up its safety features to make it harder for strangers to contact teens in the app. The company is adding new warnings about "suspicious" contacts and preemptively blocking friend requests from accounts that may be linked to scams.

It’s not the first time Snap has tried to dissuade teen users from connecting with strangers in the app. The company says the latest warnings go a step further in that the alerts rely on “new and advanced signals” that indicate an account may be tied to a scammer. Likewise, Snap says it will block friend requests sent by accounts that have no mutual friends with the recipient and “a history of accessing Snapchat in locations often associated with scamming activity.” The app’s block feature is also getting an upgrade so that users who block someone will also automatically block new accounts made on the same device.

These updates, according to the company, will help address sextortion scams that often target teens across social media platforms, as well as other safety and privacy concerns. Snap, like many of its social media peers, has come under fire from lawmakers over teen safety issues, including sextortion scams and the ease with which drug dealers have been able to contact teens in the app. The latest update also just happens to come shortly after Rolling Stone published an exhaustive investigation into how Snapchat “helped fuel a teen-overdose epidemic across the country.”

The article cited specific features like Snapchat’s Snap Map, which allows users to share their current location with friends, and “quick add” suggestions, which surfaced friend recommendations. (The company began limiting “quick add” suggestions between teen and adult accounts in 2022.) And while teens can still opt in to Snap Map location sharing, the company says it’s simplifying these settings so they’re easier to change and surfacing more “frequent reminders” about how they are sharing their whereabouts in the app.

This article originally appeared on Engadget at https://www.engadget.com/snapchat-is-making-it-harder-for-strangers-to-contact-teens--again-163824048.html?src=rss

Patreon is giving creators more tools to attract free subscribers

Patreon is continuing its push to expand beyond its roots as a paid membership platform. The company, which added new chat features and free membership options last year, is giving creators more ways to interact with their fans even if they aren’t paying subscribers.

The company says its creators have already seen more than 30 million sign-ups for free memberships, which allow fans to get updates and follow the work of creators and artists they like without committing to a monthly subscription. Now, creators will also be able to add non-paying members to Patreon’s Discord-like chats. Additionally, creators will be able to offer a live chat and custom countdown timer to tease new work.

For fans who aren’t yet paying for a membership, Patreon will add the ability for creators to sell access to past posts and collections so people will have a way to access previously paywalled content without committing to a recurring subscription. (The company added one-time purchases for digital products like podcast episodes last year.) Creators will also have the ability to offer limited-time gift subscriptions to fans.

For Patreon, the changes are meant to help creators become less reliant on platforms like Instagram, YouTube and TikTok, where engagement and views are often dependent on another company’s algorithm. At a time when platforms’ payouts to creators are reportedly dwindling — The Wall Street Journal reported last week that making a living as a creator has gotten significantly harder over the last year as dedicated creator funds shrink — Patreon is spinning its platform as a place where creators can connect with their “real fans” and actually make money.

“Creators want a place where people can sign up to see their future work… and then actually see it,” the company explains in a blog post. “They don’t want to keep chasing likes or follower counts in a constantly changing system they have no control over.”

This article originally appeared on Engadget at https://www.engadget.com/patreon-is-giving-creators-more-tools-to-attract-free-subscribers-130049968.html?src=rss

X is making live streaming a premium feature

X will soon be moving the ability to live stream behind its premium paywall, the company announced. The change will make X the only major social platform to charge for the feature, which is currently free on Facebook, Instagram, YouTube, Twitch and TikTok.

“Starting soon, only Premium subscribers will be able to livestream (create live video streams) on X,” the company said. “This includes going live from an encoder with X integration,” an apparent reference to X’s game streaming capabilities.

X didn’t offer an explanation for the change. The company has used additional features, like post editing, longform writing and ad-free feeds, to lure users to its paid subscriptions, but hasn’t typically moved existing, widely available features behind its paywall. X Premium subscriptions start at $3/month for the "basic" tier and rise to $8/month for Premium and $16/month for Premium+.

There are, however, other signs that the Elon Musk-owned platform wants to charge for other simple features. The company introduced a $1 annual charge for new accounts to have posting privileges in New Zealand and the Philippines. Though the company still describes the scheme as a test, Musk has suggested he wants to expand the fees to all new users.

This article originally appeared on Engadget at https://www.engadget.com/x-is-making-live-streaming-a-premium-feature-185151147.html?src=rss

How small claims court became Meta’s customer service hotline

Last month, Ray Palena boarded a plane from New Jersey to California to appear in court. He found himself engaged in a legal dispute against one of the largest corporations in the world, and improbably, the venue for their David-versus-Goliath showdown would be San Mateo's small claims court.

Over the course of eight months and an estimated $700 (mostly in travel expenses), he was able to claw back what all other methods had failed to deliver: his personal Facebook account.

Those may be extraordinary lengths to regain a digital profile with no relation to its owner's livelihood, but Palena is one of a growing number of frustrated users of Meta's services who, unable to get help from an actual human through normal channels of recourse, are using the court system instead. And in many cases, it's working.

Engadget spoke with five individuals who have sued Meta in small claims court over the last two years in four different states. In three cases, the plaintiffs were able to restore access to at least one lost account. One person was also able to win financial damages and another reached a cash settlement. Two cases were dismissed. In every case, the plaintiffs were at least able to get the attention of Meta’s legal team, which appears to have something of a playbook for handling these claims.

At the heart of these cases is the fact that Meta lacks the necessary volume of human customer service workers to assist those who lose their accounts. The company’s official help pages steer users who have been hacked toward confusing automated tools that often lead users to dead-end links or emails that don’t work if your account information has been changed. (The company recently launched a $14.99-per-month program, Meta Verified, which grants access to human customer support. Its track record as a means of recovering hacked accounts after the fact has been spotty at best, according to anecdotal descriptions.)

Hundreds of thousands of people also turn to their state Attorney General’s office as some state AGs have made requests on users’ behalf — on Reddit, this is known as the “AG method.” But attorneys general across the country have been so inundated with these requests they formally asked Meta to fix their customer service, too. “We refuse to operate as the customer service representatives of your company,” a coalition of 41 state AGs wrote in a letter to the company earlier this year.

Facebook and Instagram users have long sought creative and sometimes extreme measures to get hacked accounts back due to Meta’s lack of customer support features. Some users have resorted to hiring their own hackers or buying an Oculus headset, since Meta has dedicated support staff for the device (users on Reddit report this “method” no longer works). The small claims approach has become a popular topic on Reddit forums where frustrated Meta users trade advice on various “methods” for getting an account back. People Clerk, a site that helps people write demand letters and other paperwork required for small claims court, published a help article, “How to Sue Facebook,” in March.

It’s difficult to estimate just how many small claims cases are being brought by Facebook and Instagram users, but they may be on the rise. Patrick Forrest, the chief legal officer for Justice Direct, the legal services startup that owns People Clerk, says the company has seen a “significant increase” in cases against Meta over the last couple years.

One of the advantages of small claims court is that it’s much more accessible to people without deep pockets and legal training. Filing fees are typically under $100 and many courthouses have resources to help people complete the necessary paperwork for a case. “There's no discovery, there are no depositions, there's no pre-trial,” says Bruce Zucker, a law professor at California State University, Northridge. “You get a court date and it's going to be about a five or 10 minute hearing, and you have a judge who's probably also tried to call customer service and gotten nowhere.”

“Facebook and Instagram and WhatsApp [have] become crucial marketplaces where people conduct their business, where people are earning a living," Forrest said. “And if you are locked out of that account, business or personal, it can lead to severe financial damages, and it can disrupt your ability to sustain your livelihood.”

One such person whose finances were enmeshed with Meta's products is Valerie Garza, the owner of a massage business. She successfully sued the company in a San Diego small claims court in 2022 after a hack which cost her access to personal Facebook and Instagram accounts, as well as those associated with her business. She was able to document thousands of dollars in resulting losses.

A Meta legal representative contacted Garza a few weeks before her small claims court hearing, requesting she drop the case. She declined, and when Meta didn’t show up to her hearing, she won by default. "When we went through all of the loss of revenues," Garza told Engadget, "[the judge] kind of had to give it to me.”

But that wasn’t the end of Garza’s legal dispute with Meta. After the first hearing, the company filed a motion asking the judge to set aside the verdict, citing its own failure to appear at the hearing. Meta also tried to argue that its terms of service set a maximum of $100 liability. Another hearing was scheduled and a lawyer again contacted Garza offering to help get her account back.

“He seemed to actually kind of just want to get things turned back on, and that was still my goal, at this point,” Garza said. It was then she discovered that her business’ Instagram was being used to advertise sex work.

She began collecting screenshots of the activity on the account, which violated Instagram’s terms of service, as well as fraudulent charges for Facebook ads bought by whoever hacked her account. Once again, Meta didn’t show up to the hearing and a judge ordered the company to pay her the $7,268.65 in damages she had requested.

“I thought they were going to show up this time because they sent their exhibits, they didn't ask for a postponement or anything,” she says. “My guess is they didn't want to go on record and have a transcript showing how completely grossly negligent they are in their business and how very little they care about the safety or financial security of their paying advertisers.”

In July of 2023, Garza indicated in court documents that Meta had paid in full. In all, the process took more than a year, three court appearances and countless hours of work. But Garza says it was worth it. “I just can't stand letting somebody take advantage and walking away,” she says.

Even for individuals whose work doesn't depend on Meta's platforms, a hacked account can result in real harm.

Palena, who flew cross-country to challenge Meta in court, had no financial stake in his Facebook account, which he claimed nearly 20 years ago when the social network was still limited to college students. But whoever hacked him had changed the associated email address and phone number, and began using his page to run scam listings on Facebook Marketplace.

“I was more concerned about the damage it could do to me and my name if something did happen, if someone actually was scammed,” he tells Engadget. In his court filing, he asked for $10,000 in damages, the maximum allowed in California small claims court. He wrote that Meta had violated its own terms of service by allowing a hacked account to stay up, damaging his reputation. “I didn't really care that much about financial compensation,” Palena says. “I really just wanted the account back because the person who hacked the account was still using it. They were using my profile with my name and my profile image."

A couple weeks later, a legal rep from Meta reached out to him and asked him for information about his account. They exchanged a few emails over several weeks, but his account was still inaccessible. The same day he boarded a plane to San Mateo, the Meta representative emailed him again and asked if he would be willing to drop the case since “the access team is close to getting your account secure and activated again.” He replied that he intended to be in court the next day as he was still unable to get into his account.

Less than half an hour before his hearing was scheduled to start, he received the email he had spent months waiting for: a password reset link to get back into his account. Palena still attended the hearing, though Meta did not. According to court records reviewed by Engadget, Palena told the judge the case had been “tentatively resolved,” though he hasn’t officially dropped the case yet.

While filing a small claims court case is comparatively simple, it can still be a minefield, even for something as seemingly straightforward as figuring out which court to file in. Forrest notes that Facebook’s terms of service stipulate that legal cases must be brought in San Mateo County, home of Meta’s headquarters. But, confusingly, the terms of service for Meta accounts state that cases other than small claims must be filed in San Mateo. In spite of the apparent contradiction, some people (like Garza) have had success suing Meta outside of San Mateo.

Each jurisdiction also has different rules for the maximum allowable compensation in small claims, what sorts of relief those courts are able to grant and even whether or not parties are allowed to have a lawyer present. The low barrier to entry means many plaintiffs are navigating the legal system for the first time without help, making rookie mistakes along the way.

Shaun Freeman had spent years building up two Instagram accounts, which he describes as similar to TMZ but with “a little more character.” The pages, which had hundreds of thousands of followers, had also been a significant source of income to Freeman, who has also worked in the entertainment industry and uses the stage name Young Platinum.

He says his pages had been suspended or disabled in the past, but he was able to get them back through Meta’s appeals process, and once through a complaint to the California Attorney General’s office. But in 2023 he again lost access to both accounts. He says one was disabled and one is inaccessible due to what seems like a technical glitch.

He tried to file appeals and even asked a friend of a friend who worked at Meta to look into what had happened, but was unsuccessful. Apparently out of other options, he filed a small claims case in Nevada in February. A hearing was scheduled for May, but Freeman had trouble figuring out the legal mechanics. “It took me months and months to figure out how to get them served,” Freeman says. He was eventually able to hire a process server and got the necessary signature 10 days before his hearing. But it may have been too late. Court records show the case was dismissed for failure to serve.

Even without operator error, Meta seems content to create hardship for would-be litigants over matters much smaller than the company's more headline-grabbing antitrust and child safety disputes. Based on correspondence reviewed by Engadget, the company maintains a separate "small claims docket" email address to contact would-be litigants.

Ron Gaul, who lives in North Dakota, filed a small claims suit after Meta disabled his account following a wave of what he describes as targeted harassment. The case was eventually dismissed after Meta’s lawyers had it moved to district court, which is permissible for a small claims case under North Dakota law.

Gaul says he couldn’t keep up with the motions filed by Meta’s lawyers, whom he had hoped to avoid by filing in small claims court. “I went to small claims because I couldn't have a lawyer,” he tells Engadget.

Ryan, an Arizona real estate agent who asked to be identified by his first name only, decided to sue Meta in small claims with his partner after their Facebook accounts were disabled in the fall of 2022. They were both admins of several large Facebook Groups and he says their accounts were disabled over a supposed copyright violation.

Before a scheduled hearing, the company reached out. “They started basically trying to bully us,” Ryan says. “They started saying that they have a terms of service [and] they can do whatever they want, they could delete people for any reason.” Much like Gaul, Ryan expected small claims would level the playing field. But according to emails and court records reviewed by Engadget, Meta often deploys its own legal resources as well as outside law firms to respond to these sorts of claims and engage with small claims litigants outside of court. "They put people that still have legal training against these people that are, you know, representing themselves,” he says.

In the end, Meta’s legal team was able to help Ryan get his account back and he agreed to drop himself from the small claims case. But two months later his partner had still not gotten back into hers. Meta eventually told her that her account had been permanently deleted and could not be restored. The company offered $3,500, the maximum amount for a small claims case in Arizona. He says they wanted more, but Meta refused, and they felt like they were out of options. Ryan claims they had already lost tens of thousands of dollars in potential sales that they normally sourced from Facebook. “We were prepared to go further, but no lawyer would really take it on without a $15,000 retainer and it wasn't worth it.”

While it may seem surprising that Meta would give these small claims cases so much attention, Zucker, the Cal State Northridge professor, says that big companies have their own reasons for wanting to avoid court. “I don’t think places like Google or Meta want to have a bunch of judgments against them … because then that becomes a public record and starts floating around,” he says. “So they do take these things seriously.”

Without responding to specific questions about the substance of this story, Meta instead sent Engadget the following statement:

"We know that losing and recovering access to your online accounts can be a frustrating experience. We invest heavily in designing account security systems to help prevent account compromise in the first place, and in educating our users, including by regularly sharing new security features and tips for how people can stay safe and vigilant against potential targeting by hackers. But we also know that bad actors, including scammers, target people across the internet and constantly adapt to evade detection by social media platforms like ours, email and telecom providers, banks and others. To detect malicious activity and help protect people who may have gotten compromised via email phishing, malware or other means, we also constantly improve our detection, enforcement and support systems, in addition to providing channels where people can report account access issues to us, working with law enforcement and taking legal action against malicious groups."

This article originally appeared on Engadget at https://www.engadget.com/how-small-claims-court-became-metas-customer-service-hotline-160224479.html?src=rss

The FTC has referred its child privacy case against TikTok to the Justice Department

The Federal Trade Commission has referred its complaint against TikTok to the Justice Department after a long-running investigation into the company’s privacy and security practices. “Our investigation found reason to believe that TikTok is violating or about to violate the FTC Act and the Children’s Online Privacy Protection Act (COPPA),” FTC Chair Lina Khan said in a post on X.

In a longer statement shared by the FTC, the regulator noted that it had been investigating TikTok's compliance with a 2019 privacy settlement related to Musical.ly, the app acquired by ByteDance that eventually became TikTok. The FTC “also investigated additional potential violations of COPPA and the FTC Act,” it said. It’s not clear exactly what the FTC turned up, though Politico reported earlier this year that the regulator was also looking into whether TikTok had misled users about whether their personal data was accessible to people in China.

The statement itself is a somewhat unusual move for the FTC, which acknowledged that it doesn't typically publicize its referral decisions. It said it believed doing so in this case “was in the public interest.” The referral is likely to ramp up pressure on TikTok, which is also fighting a legal battle against the US government to avoid a potential ban. Lawmakers and other officials have alleged the app poses a national security threat due to its ties to China.

A TikTok spokesperson told Engadget in a statement that the company was “disappointed” with the FTC’s decision. "We've been working with the FTC for more than a year to address its concerns,” the spokesperson said. “We're disappointed the agency is pursuing litigation instead of continuing to work with us on a reasonable solution. We strongly disagree with the FTC's allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed. We're proud of and remain deeply committed to the work we've done to protect children and we will continue to update and improve our product.”

This article originally appeared on Engadget at https://www.engadget.com/the-ftc-has-referred-its-child-privacy-case-against-tiktok-to-the-justice-department-211542778.html?src=rss

Meta makes the Threads API available to all developers

Meta is finally making the Threads API available to developers. The company began testing the developer tools with a handful of companies back in March, but is now throwing the door open to more creators and app makers.

For now, the Threads API functionality is somewhat limited. It enables third-party apps to publish posts to Threads and view and manage replies and interactions with their posts. So far, this has enabled Threads integrations with social media management software like Hootsuite and Sprout Social. The Threads API has also enabled tech news aggregator Techmeme to automatically post to the platform.
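Publishing through the Threads API follows Meta's Graph API conventions as a two-step flow: a client first creates a media container, then publishes it by its `creation_id`. The sketch below only assembles the request URLs for that flow rather than making network calls; the endpoint paths reflect the API as documented at launch and may change, and the user ID and access token are placeholders:

```python
from urllib.parse import urlencode

API_ROOT = "https://graph.threads.net/v1.0"

def build_create_request(user_id: str, text: str, token: str) -> str:
    """Step 1: URL for creating a media container for a text-only post."""
    params = urlencode({"media_type": "TEXT", "text": text, "access_token": token})
    return f"{API_ROOT}/{user_id}/threads?{params}"

def build_publish_request(user_id: str, creation_id: str, token: str) -> str:
    """Step 2: URL for publishing the container returned by step 1."""
    params = urlencode({"creation_id": creation_id, "access_token": token})
    return f"{API_ROOT}/{user_id}/threads_publish?{params}"

# Both URLs would be sent as POST requests by a real client.
print(build_create_request("1234567890", "Hello, Threads!", "ACCESS_TOKEN"))
```

The container/publish split mirrors how Instagram's content publishing API works, which is presumably why third-party scheduling tools like Hootsuite were able to integrate quickly.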

These kinds of tools are widely used by brands, marketers and power users who rely on more advanced analytics and other specialized capabilities. Interestingly, Meta suggests that creators could also be interested in using the new Threads API for their own “unique integrations” with the platform.

Meta hasn’t talked much about its future plans for the Threads API, or whether it would ever support third-party client apps the way Twitter did before Elon Musk’s takeover of the service. The API could also play an eventual role in Meta’s plans to interoperate with the fediverse, though Meta has said it’s still early days for making Threads interoperable with decentralized platforms.

This article originally appeared on Engadget at https://www.engadget.com/meta-makes-the-threads-api-available-to-all-developers-174946709.html?src=rss