Oversight Board says Meta’s handling of a satirical image of Harris and Walz raises ‘serious concerns’

Two weeks before the US presidential election, the Oversight Board says it has “serious concerns” about Meta’s content moderation systems in “electoral contexts,” and that the company risks the “excessive removal of political speech” when it over-enforces its rules. The admonishment came as the board weighed in on a case involving a satirical image of Vice President Kamala Harris and her running mate, Minnesota Governor Tim Walz.

Meta originally removed the post, which was shared on Facebook in August and showed an edited version of the poster for the movie Dumb and Dumber. The original 1994 poster shows the two main characters grabbing each other’s nipples through their shirts. In the altered version, the actors’ faces were replaced with those of Harris and Walz.

According to the Oversight Board, Meta cited its bullying and harassment rules, which include a provision barring “derogatory sexualized photoshop or drawings.” The social network later restored the post after it drew the Oversight Board’s attention, and the company acknowledged the satirical image didn’t break its rules because it didn’t depict sexual activity.

Despite Meta’s reversal, the board says the case suggests larger issues in how Meta handles posts dealing with election-related content. “This post is nothing more than a commonplace satirical image of prominent politicians and is instantly recognizable as such,” the board wrote. “Nonetheless, the company’s failure to recognize the nature of this post and treat it accordingly raises serious concerns about the systems and resources Meta has in place to effectively make content determinations in such electoral contexts.”

In response to the Oversight Board's take on the situation, a Meta spokesperson gave the following brief statement: "We mistakenly removed this post but restored it after the issue was brought to our attention."

It’s unusually direct criticism from the Oversight Board, which released its analysis of the case in a summary decision that omits the group’s typical laundry list of recommendations for the social media company. The board has previously pushed Meta to clarify its rules around satirical content. The latest case highlights another issue that many of the company’s users have also complained about: the over-enforcement of its rules.

“In this case, however, the Board highlights the overenforcement of Meta’s Bullying and Harassment policy with respect to satire and political speech in the form of a non-sexualized derogatory depiction of political figures,” the board wrote. “It also points to the dangers that overenforcing the Bullying and Harassment policy can have, especially in the context of an election, as it may lead to the excessive removal of political speech and undermine the ability to criticize government officials and political candidates, including in a sarcastic manner.”

Update, October 23, 2024, 1:00PM ET: This story has been updated to include a statement from Meta.


Meta is bringing back facial recognition with new safety features for Facebook and Instagram

Meta is bringing facial recognition tech back to its apps more than three years after it shut down Facebook’s “face recognition” system amid a broader backlash against the technology. Now, the social network will begin to deploy facial recognition tools on Facebook and Instagram to fight scams and help users who have lost access to their accounts, the company said in an update.

The first test will use facial recognition to detect scam ads that use the faces of celebrities and other public figures. “If our systems suspect that an ad may be a scam that contains the image of a public figure at risk for celeb-bait, we will try to use facial recognition technology to compare faces in the ad against the public figure’s Facebook and Instagram profile pictures,” Meta explained in a blog post. “If we confirm a match and that the ad is a scam, we’ll block it.”

The company said that it’s already begun to roll the feature out to a small group of celebs and public figures and that it will begin automatically enrolling more people into the feature “in the coming weeks,” though individuals have the ability to opt out of the protection. While Meta already has systems in place to review ads for potential scams, the company isn’t always able to catch “celeb-bait” ads as many legitimate companies use celebrities and public figures to market their products, Monika Bickert, VP of content policy at Meta, said in a briefing. “This is a real time process,” she said of the new facial recognition feature. “It's faster and it's more accurate than manual review.”

Separately, Meta is also testing facial recognition tools to address another long-running issue on Facebook and Instagram: account recovery. The company is experimenting with a new “video selfie” option that lets users who have been locked out of their accounts upload a clip of themselves, which Meta will then match against their profile photos. The company will also use it in cases of a suspected account compromise to prevent hackers from accessing accounts with stolen credentials.

The tool won’t be able to help everyone who loses access to a Facebook or Instagram account. Many business pages, for example, don’t include a profile photo of a person, so those users would need to rely on Meta’s existing account recovery options. But Bickert says the new process will make it much more difficult for bad actors to game the company’s support tools. “It will be a much higher level of difficulty for them in trying to bypass our systems,” Bickert said.

With both new features, Meta says it will “immediately delete” the facial data used for comparisons and that the scans won’t be used for any other purpose. The company is also making the features optional, though celebrities will need to opt out of the scam ad protection rather than opt in.

That could draw criticism from privacy advocates, particularly given Meta’s messy history with facial recognition. The company previously used the technology to power automatic photo-tagging, which allowed it to recognize the faces of users in photos and videos. The feature was discontinued in 2021, with Meta deleting the facial data of more than 1 billion people and citing “growing societal concerns.” The company has also faced lawsuits over its use of the tech, notably in Texas and Illinois. Meta paid $650 million to settle a lawsuit related to the Illinois law and $1.4 billion to resolve a similar suit in Texas.

It’s notable, then, that the new tools won’t be available in either Illinois or Texas to start. They also won’t roll out to users in the United Kingdom or European Union, as the company is “continuing to have conversations there with regulators,” according to Bickert. But the company is “hoping to scale this technology globally sometime in 2025,” according to a Meta spokesperson.


Foursquare is killing its city guide app to focus on the check-in app Swarm

It’s the end of an era for one of the App Store’s earliest success stories. Foursquare is shutting down its signature city guide app in order to “focus our efforts on building an even better experience in Swarm,” the company said in an update. The app will shut down December 15, while the web version will stay online until “early 2025.”

The shutdown is a notable reversal of a strategy the company announced a decade ago when it, controversially, opted to split its famed “check-in” service into a separate app. That app became known as Swarm while the Foursquare-branded app became a “city guide” full of user-generated reviews and local recommendations.

Now, Foursquare says its future is, once again, the check-in. “We’re also introducing exciting new features and capabilities into Swarm throughout the year (👀 some of which may look familiar to you) in order to unlock new use cases that may better support your needs,” the company said, adding that additional updates are expected “early next year.”

It’s not clear why the company is changing its strategy to elevate Swarm over its namesake app. The company laid off more than 100 employees earlier this year in an effort to “streamline” operations. Foursquare founder Dennis Crowley, who is currently co-chair of the company’s board of directors, said in a post on Threads that the company is “doing fine,” though he expressed disappointment with the news. “I would be lying if I didn't admit that I have been in a real funk these last few days over this news,” he wrote.


X updates its privacy policy to allow third parties to train AI models with its data

X is updating its privacy policy with new language that allows it to provide users’ data to third-party “collaborators” in order to train AI models. The new policy, which takes effect November 15, 2024, would seem to open the door to Reddit-like arrangements in which outside companies can pay to license data from X.

The updated policy shared by X includes a new section titled “third-party collaborators.”

Depending on your settings, or if you decide to share your data, we may share or disclose your information with third parties. If you do not opt out, in some instances the recipients of the information may use it for their own independent purposes in addition to those stated in X’s Privacy Policy, including, for example, to train their artificial intelligence models, whether generative or otherwise.

While the policy mentions the ability to opt out, it’s not clear how users would actually do so. As TechCrunch notes, the policy points to users’ settings menu, but there doesn’t appear to be a control for opting out of data sharing. The policy doesn’t go into effect until next month, though, so there’s still a chance that could change. X didn’t respond to a request for comment.

If X were to begin licensing its data to other companies, it could open up a significant new revenue stream for the social media company, which has seen waning interest from major advertisers.

In addition to the privacy policy, X is also updating its terms of service with stricter penalties for entities caught “scraping” large numbers of tweets. In a section titled “liquidated damages,” the company states that anyone viewing or accessing more than a million posts in a day will be liable for a penalty of $15,000 per million posts.

Protecting our users’ data and our system resources is important to us. You further agree that, to the extent permitted by applicable law, if you violate the Terms, or you induce or facilitate others to do so, in addition to all other legal remedies available to us, you will be jointly and severally liable to us for liquidated damages as follows for requesting, viewing, or accessing more than 1,000,000 posts (including reply posts, video posts, image posts, and any other posts) in any 24-hour period - $15,000 USD per 1,000,000 posts.

X owner Elon Musk has previously railed against “scraping.” Last year, the company temporarily blocked people from viewing tweets while logged out, in a move Musk attributed to fending off scrapers. He also moved X’s API behind a paywall, which has drastically hindered researchers’ ability to study what’s happening on the platform. He’s also used allegations of “scraping” to justify lawsuits against organizations that have attempted to study hate speech and other issues on the platform.


The FBI arrested an Alabama man for allegedly helping hack the SEC’s X account

A 25-year-old Alabama man has been arrested by the FBI for his alleged role in the takeover of the Securities and Exchange Commission’s X account earlier this year. The hack resulted in a rogue tweet that falsely claimed the regulator had approved bitcoin ETFs, temporarily juicing bitcoin prices.

Now, the FBI has identified Eric Council Jr. as one of the people allegedly behind the exploit. Council was charged with conspiracy to commit aggravated identity theft and access device fraud, according to the Justice Department. While the SEC had previously confirmed that its X account was compromised via a SIM swap attack, the indictment offers new details about how it was allegedly carried out.

According to the indictment, Council worked with co-conspirators who coordinated with him over SMS and encrypted messaging apps. These unnamed individuals allegedly sent him the personal information of someone, identified only as “C.L.,” who had access to the SEC X account. Council then printed a fake ID using the information and used it to buy a new SIM card in that person’s name, as well as a new iPhone, according to the DoJ. He then coordinated with the other individuals so they could access the SEC’s X account, change its settings and send the rogue tweet, the indictment says.

The tweet from @SECGov, which came one day ahead of the SEC’s actual approval of 11 spot bitcoin ETFs, caused bitcoin prices to temporarily spike by more than $1,000. It also raised questions about why the high-profile account wasn’t secured with multi-factor authentication at the time of the attack. “Today’s arrest demonstrates our commitment to holding bad actors accountable for undermining the integrity of the financial markets,” SEC Inspector General Deborah Jeffrey said in a statement.

The indictment further notes that Council allegedly performed some seemingly incriminating searches on his personal computer. Among his searches were: "SECGOV hack," "telegram sim swap," "how can I know for sure if I am being investigated by the FBI," "What are the signs that you are under investigation by law enforcement or the FBI even if you have not been contacted by them," "what are some signs that the FBI is after you," "Verizon store list," "federal identity theft statute," and "how long does it take to delete telegram account," the indictment says.


Instagram is adding new features to prevent teen sextortion scams

Meta is continuing its flurry of teen safety features for Instagram as the company faces mounting questions about its handling of younger users’ privacy and safety in its apps. The latest batch of updates are meant to tighten its protections against sextortion.

With the changes, Meta says it will make it harder for “potentially scammy” accounts to target teens on Instagram. The company will start to send follow requests from such accounts to users’ spam folders or block them entirely. The app will also start testing an alert that notifies teens when they receive a message from such an account, warning them that the message appears to be coming from a different country.

Additionally, when the company detects that a potential scammer is already following a teen, it will prevent that account from viewing the teen’s follower list and the accounts that have tagged them in photos. The company isn’t saying exactly how it determines which accounts are deemed “potentially scammy,” but a spokesperson said it uses signals such as the age of the account and whether it has mutual followers with the teen it’s attempting to interact with.

Meta is expanding its nudity protection feature. (Image: Meta)

Meta is also making changes to prevent the spread of intimate images. Instagram will no longer allow users to screenshot or screen record images shared over DMs via the app’s ephemeral messaging feature, and these images can no longer be opened from the web version of Instagram. The app will also expand the nudity protection feature it began testing earlier this year to all teens on the app. The tool automatically blurs images containing nudity that are shared over DMs, and provides warnings and resources when such an image is detected.

The changes are meant to address the realities of how sextortion scams, in which scammers coerce teens into sending intimate images that are then used to threaten and blackmail them, are often carried out over Instagram. A report from Thorn and the National Center for Missing & Exploited Children (NCMEC) earlier this year found that Instagram and Snapchat were the “most common” platforms used by scammers “as initial contact points.”

These scams are carried out by individuals and groups that sometimes organize on Meta’s own platforms. Alongside the updates, Meta said that it removed 800 groups on Facebook and 820 accounts, linked to a group known as the Yahoo Boys, that “were attempting to organize, recruit and train new sextortion scammers.”

Meta’s updates come as it faces increasing pressure to strengthen safety features for its youngest users. The company is currently facing a lawsuit from more than 30 states over the issue. (Earlier this week, a federal judge rejected Meta’s attempt to have the lawsuit dismissed.) New Mexico is also suing the company and has alleged that Meta didn’t do enough to stop adults from sexually harassing teens on its apps, particularly Instagram.


Threads can now show when people are online and using the app

Threads is sometimes criticized for not prioritizing real-time content in its recommendations. Now, Meta is adding status indicators that can show when a particular user is online, in an apparent effort to address that criticism.

The optional feature, called “activity status,” will display a green bubble alongside someone’s profile photo when they’re online. The indicator is meant to help users find “others to engage with in real-time,” according to an update from Instagram boss Adam Mosseri. “We hope that knowing when your people are online makes it easier to have conversations.”

It’s an interesting choice for a platform that still doesn’t have direct messaging capabilities. Such indicators are more common in chat apps like Discord (Instagram, which does have robust DM capabilities, also has a similar feature). But Meta has said repeatedly it doesn’t want to bring in-app messaging to Threads, with the app’s head of product recently telling Business Insider there are no plans to add DMs to the app.

The feature also doesn’t exactly address many users’ desire for a feed that’s more oriented to real-time information and conversations. Instead, Meta is offering the status indicators as a way to seek out users who are currently active on the service, the idea being that conversations with them are more likely to get timely replies. But without a clear way of finding people who have that green bubble alongside their profile photo, it’s unclear how easy this will actually be.


Creators getting paid to post on Threads don’t understand its algorithm either

An artist who was able to pay off credit card debt, a photographer making extra cash by replying to the most polarizing posts she can find, a food blogger trying to start interesting conversations. These are some of the creators Meta is paying to post on Threads.

Meta introduced the invitation-only program in April, but has only shared limited details about how it works. Engadget spoke with half a dozen creators who have joined the program over the last few months. They described their strategies for reaching the required engagement metrics, and the sometimes confusing nature of Threads’ recommendation algorithm.

Creators are sorted into different tiers of the program, which determine how large their bonuses can be and what kinds of metrics their posts need to hit. None of the creators who spoke with Engadget knew how or why they had been selected for the bonus program, though they all had an established following on Instagram. (One of the known requirements is a professional account on Instagram.)

Audrey Woulard is a photographer with more than 25,000 followers on Instagram and about 5,500 followers on Threads. She uses her Facebook and Instagram accounts to promote her portrait photography business. But when she was invited to the Threads bonus program, she saw an opportunity to experiment with different types of content.

Her strategy, she says, is all about replies. She focuses exclusively on replying to other users’ posts rather than creating her own. “I'm not necessarily generating content on my own,” she explains. “I'm kind of activating other people's content.” By focusing on replies, she says she’s able to reach the required 60 Threads posts with at least 750 views each to qualify for a $500 monthly bonus.

This has helped her become particularly attuned to the types of subjects that are likely to attract a lot of views. “Polarizing content, anything that keeps people talking,” she explains. Specifically, she looks for topics that people tend to have strong opinions about, like marriage, parenting, aging and politics, though she tries to avoid replying to obvious engagement bait.

Woulard’s experience isn’t unique. Threads defaults to a “for you” timeline that relies heavily on recommended posts rather than posts from accounts you already follow. Meta has also said it doesn’t want to “encourage” users to post about news and politics. Perhaps as a consequence, Threads’ “for you” feed often feels a lot slower and less focused on current events than its counterpart on X.

What the algorithm does prioritize, though, is posts that get a lot of replies, even if they are about a seemingly mundane topic. This has led to a bizarrely random quality to the feed, what blogger Max Read dubbed “the gas leak social network.” It’s not uncommon to see a recommended post from someone you’re totally unconnected to talking about a trivial inconvenience, or a medical condition or some other anodyne anecdote. What these posts do have in common, though, is lots of replies.

It’s also created an opportunity for people looking to game the app’s algorithm by posting spammy content, generic questions or polarizing takes meant to attract as many replies as possible. (Meta execs have said they’re trying to fix this issue after a surge in such posts, even as they acknowledge that posts with replies are most likely to be recommended.)

But for Woulard, Meta’s emphasis on “public conversations” has worked in her favor. She says that so far she’s been able to max out three months’ worth of bonuses simply by replying to posts on Threads. Woulard generates more income from her Facebook page, but enjoys the simplicity of the Threads bonus program. “It's so easy for me to make this money, I can literally sit in my room and reply to a bunch in 30 minutes.”

For Meta, offering bonuses to Instagram creators is part of its strategy of leaning on Instagram to grow Threads, the year-old service that has already drawn 200 million users. But there were also bound to be some growing pains, says social media consultant Matt Navarra.

“I think people find it harder to create for platforms like Threads,” Navarra tells Engadget. “Writing interesting, engaging posts for a text-based platform, like X, Twitter or Threads is a different set of skills. And I think it's slightly tricky for some sorts of creators.”

Josh Kirkham, an artist who specializes in Bob Ross-style painting videos, has experienced this firsthand. With nearly 800,000 followers on Instagram, he’s in the highest tier of the bonus program, which makes him eligible to earn up to $5,000 a month from his posts on Threads. He’s been able to max out his bonus by sharing painting videos clipped from his livestreams on Instagram and TikTok.

Despite the success, he hasn’t been able to detect any patterns about what types of videos are likely to take off. He has more than 150,000 followers on Threads but, like other creators in the bonus program, relies on the app’s recommendation algorithm for his posts to get noticed. “Initially, I was posting mountain videos, and those were doing the best compared to everything else,” he says, “And then a week later, every mountain video was just getting like, nothing. Some of the times the videos that I think are going to do well don't do well at all, and vice versa.”

Kirkham says that he almost never replies to Threads posts when he’s trying to hit a bonus because he worries it will dilute his chances of getting the 5,000 views per post necessary to earn the max payout. Still, he says he’s grateful for the program as a full-time artist and creator. “It’s enabled me to pay off my credit card debt and then raise my credit score immensely,” he says. “I’m hoping for at least a few more.”

Nearly all of the creators who spoke to Engadget also expressed some skepticism that Meta would continue the bonus program at its current level for very long. In the past, the company has offered creators generous bonuses when it was trying to boost a new format like Instagram Reels or Facebook Live, only for those payments to eventually dwindle as more people joined and Meta inevitably shifted its strategy (and its funds for creators) somewhere else.

Logan Reavis is a photographer with nearly 50,000 followers on Instagram and about 8,500 on Threads. Though she has a bigger following on Instagram, she says Threads’ algorithm feels more favorable to creators. “The [Threads] algorithm works entirely different, especially as a photographer,” she says. “I feel like it's been hard to share my photography on Instagram, but it's encouraged on Threads. I actually reach an entirely different audience.”

Even so, she says she’s had to grapple with the quirks of the Threads algorithm and its penchant for highlighting engagement bait. “Responding to threads that have a lot of comments or conversation is what brings in my bonus views more, which is frustrating too because there's a lot of clickbait,” she says. Reavis so far hasn’t been able to reach her maximum potential $500 monthly bonus on Threads.

While creators are part of Meta’s strategy to make Threads its next billion-person app, the company hasn’t always been able to explain what its newest app is actually for. So it shouldn’t be surprising that even the creators it’s paying to post there view it as something of an experiment.

“I still don't think it has its own unique place in the social media ecosystem,” says Navarra. “It doesn't really have much of its own identity or personality, and I think that's one of its many problems at the moment.”


Meta ‘found mistakes and made changes’ to address Threads moderation issues

Meta will fix “mistakes” in how Threads enforces its rules after days of complaints about the company’s handling of content moderation on the service. In an update, Threads head Adam Mosseri said the company had already made some changes to address issues that have cropped up.

Mosseri’s comments come as users have been increasingly vocal about Threads’ seemingly aggressive, and sometimes bizarre, moderation decisions. In one prominent example, a number of users reported that their accounts had been penalized for using the words “cracker” or “saltines.” Mosseri didn’t explain exactly why these types of mistakes occurred, but said that one of the company’s internal tools “broke,” which prevented human reviewers from seeing “sufficient context” about the posts they were moderating.

“For those of you who've shared concerns about enforcement issues: we're looking into it and have already found mistakes and made changes,” Mosseri wrote. “Most prominently, our reviewers (people) were making calls without being provided the context on how conversations played out, which was a miss. We’re fixing this so they can make the better calls and we can make fewer mistakes. We're trying to provide a safer experience, and we need to do better.”

Content moderation isn’t the only issue that’s rankled Threads users in recent days. Earlier this week, Mosseri also promised that Threads was working on a fix to bring engagement bait “under control” on the service, following widespread complaints.


Bluesky is having a moment… on Threads

Bluesky seems to have a bold new strategy to entice potential new users: posting on Threads. The rival social media service joined Threads amid a surge in complaints from users who are increasingly frustrated with Meta’s policies.

While complaints about Meta’s policies aren’t a new topic, they’ve gained new prominence over the last week amid frustration with the surge in engagement bait on the platform, as well as Threads’ sometimes inexplicable content moderation decisions. Meta exec Adam Mosseri, who runs the Threads app, has said the company is looking into both issues. But in the meantime, there’s been an increase in discussions about Bluesky, the decentralized service that has a very different philosophy when it comes to algorithms and moderation.

On Wednesday, Bluesky created an account on Threads, and promptly began pitching itself as an alternative platform for those frustrated with Meta. The strategy seems to be having an effect. “Bluesky” has been a trending topic on Threads for two days in a row and, at the time of this writing, “Bluesky vs Meta moderation” was trending on the platform.

“We're not like the other girls... we're not owned by a billionaire,” Bluesky wrote in a post Thursday. “Your social experience should be yours to customize, not bent to the whims of whoever the owner of the platform is.”

This isn’t the first time Bluesky has lightly trolled a rival (see its X post from earlier this week), but the company is seizing on genuine frustration among Threads users. Besides the complaints about blatant engagement bait in their feeds, users have been questioning Meta’s seemingly aggressive moderation tactics on Threads. The company already throttles political content on the app and, according to many users, has taken a heavy-handed approach to moderating the service. A number of people have reported having posts actioned by Meta for using the words “cracker” or “saltines,” as The Verge points out. Social media consultant Matt Navarra shared that he was penalized for sharing a BBC article about the viral “goodbye Meta AI” hoax on his Threads account.

Bluesky, on the other hand, has taken a much more flexible approach to content moderation. It puts most decisions in the hands of users, who are able to decide what kinds of content they do and don’t want to see, and allows users to run their own moderation services. “We're always doing baseline moderation, meaning that we are providing you with a default moderated experience when you come in [to Bluesky],” Bluesky CEO Jay Graber told Engadget earlier this year. “And then on top of that, you can customize things.”

Whether the new attention on Bluesky will result in a significant number of departures to the service is so far unclear. Bluesky currently has about 10.8 million users, according to a dashboard tracking its growth. And while it’s not clear how many new people have arrived in the last couple of days, the numbers suggest there’s been a bit of a surge over the past month: Bluesky had grown to about 8.8 million users immediately after X was banned in Brazil last month.
