Season 2 of ‘Squid Game’ arrives on Netflix December 26

Netflix has finally set a date for the next season of Squid Game, almost three years after the Korean drama became a massive hit in the US. Season 2 is set to hit Netflix December 26, with a final third season coming sometime in 2025, the streamer announced.

While the initial teaser for Season 2 doesn’t reveal much about what to expect in the next installment, Netflix shared a few more details about the plot in a letter from Hwang Dong-hyuk, the series’ director and writer.

Seong Gi-hun who vowed revenge at the end of Season 1 returns and joins the game again. Will he succeed in getting his revenge? Front Man doesn’t seem to be an easy opponent this time either. The fierce clash between their two worlds will continue into the series finale with Season 3, which will be brought to you next year.

I am thrilled to see the seed that was planted in creating a new Squid Game grow and bear fruit through the end of this story.

We’ll do our best to make sure we bring you yet another thrill ride. I hope you’re excited for what’s to come. Thank you, always, and see you soon, everyone.

Despite the long wait since the initial season, Netflix has done a lot to capitalize on the success of Squid Game. The series inspired a spinoff reality show, called Squid Game: The Challenge, which has also been greenlit for a second season. The company also treated fans to an IRL Squid Game pop-up in Los Angeles.

Additionally, Netflix announced plans for a Squid Game multiplayer game that will debut alongside Season 2 of the show. Details of the game are unclear, but the company has said that players will “compete with friends in games they’ll recognize from the series.”

This article originally appeared on Engadget at https://www.engadget.com/season-2-of-squid-game-arrives-on-netflix-december-26-000010045.html?src=rss

Google dismisses Elon Musk’s claim that autocomplete engaged in election interference

Google has responded to allegations that it “censored” searches about Donald Trump after Elon Musk baselessly claimed the company had imposed a “search ban” on the former president. The issues, Google explained, were due to bugs in its autocomplete feature. But Musk’s tweet, which was viewed more than 118 million times, nonetheless forced the search giant to publicly explain one of its most basic features.

“Over the past few days, some people on X have posted claims that Search is ‘censoring’ or ‘banning’ particular terms,” Google wrote in a series of posts on X. “That’s not happening.”

Though Google didn’t name Musk specifically, over the weekend the X owner said that “Google has a search ban on President Donald Trump.” The claim appeared to be based on a single screenshot of a search that showed Google suggested “president donald duck” and “president donald regan” when “president donald” was typed into the search box.

The same day, Donald Trump Jr. shared a similar image that showed no autocomplete results relating to Donald Trump for the search “assassination attempt on.” Both Trump Jr. and Musk accused the company of “election interference.”

In its posts Tuesday, Google explained that people are free to search for whatever they want regardless of what appears in its autocomplete suggestions. It added that “built-in protections related to political violence” had prevented autocomplete from suggesting Trump-related searches and that “those systems were out of date.”

Likewise, the company said that the strange suggestions for “president donald” were due to a “bug that spanned the political spectrum.” It also affected searches related to former President Barack Obama and other figures.

Finally, the company explained that articles about Kamala Harris appearing in search results for Donald Trump are not due to a shadowy conspiracy, but because the two — both of whom are actively campaigning for president — are often mentioned in the same news stories. That may sound like something that should be painfully obvious to anyone who has ever used the internet, but Musk’s post on X has fueled days of conspiracy theories about Google’s intentions.

Musk’s post, which questioned whether the search giant was interfering in the election, was particularly ironic considering that the X owner came under fire the same weekend for sharing a manipulated video of Kamala Harris without a label, a violation of his company’s own policies.

While Google’s statements didn’t cite Musk’s post directly, the company pointed out that X’s search feature has also experienced issues in the past. “Many platforms, including the one we’re posting on now, will show strange or incomplete predictions at various times,” the company said.

This article originally appeared on Engadget at https://www.engadget.com/google-dismisses-elon-musks-claim-that-autocomplete-engaged-in-election-interference-214834630.html?src=rss

The Senate just passed two landmark bills aimed at protecting minors online

The Senate has passed two major online safety bills amid years of debate over social media’s impact on teen mental health. The Kids Online Safety Act (KOSA) and the Children and Teens' Online Privacy Protection Act, also known as COPPA 2.0, passed the Senate in a vote of 91-3.

The bills will next head to the House, though it’s unclear if the measures will have enough support to pass. If passed into law, the bills would be the most significant pieces of legislation regulating tech companies in years.

KOSA requires social media companies like Meta to offer controls to disable algorithmic feeds and other “addictive” features for children under the age of 16. It also requires companies to provide parental supervision features and safeguard minors from content that promotes eating disorders, self harm, sexual exploitation and other harmful content.

One of the most controversial provisions in the bill creates what’s known as a “duty of care.” This means platforms are required to prevent or mitigate certain harmful effects of their products, like “addictive” features or algorithms that promote dangerous content. The Federal Trade Commission would be in charge of enforcing the standard.

The bill was originally introduced in 2022 but stalled amid pushback from digital rights and other advocacy groups who said the legislation would force platforms to spy on teens. A revised version, meant to address some of those concerns, was introduced last year, though the ACLU, EFF and other free speech groups still oppose the bill. In a statement last week, the ACLU said that KOSA would encourage social media companies “to censor protected speech” and “incentivize the removal of anonymous browsing on wide swaths of the internet.”

COPPA 2.0, on the other hand, has been less controversial among privacy advocates. An expansion of the 1998 Children's Online Privacy Protection Act, it aims to revise the decades-old law to better reflect the modern internet and social media landscape. If passed, the law would prohibit companies from targeting advertising to children and collecting personal data on teens between 13 and 16 without consent. It also requires companies to offer an “eraser button” to delete children and teens’ personal information from a platform when “technologically feasible.”

The vote underscores how online safety has become a rare source of bipartisan agreement in the Senate, which has hosted numerous hearings on teen safety issues in recent years. The CEOs of Meta, Snap, Discord, X and TikTok testified at one such hearing earlier this year, during which South Carolina Senator Lindsey Graham accused the executives of having “blood on their hands” for numerous safety lapses.

This article originally appeared on Engadget at https://www.engadget.com/the-senate-just-passed-two-landmark-bills-aimed-at-protecting-minors-online-170935128.html?src=rss

Mark Zuckerberg says ‘f*ck that’ to closed platforms

In his two decades running the company now known as Meta, Mark Zuckerberg has gone through many transformations. More recently, he’s been showing off a seemingly less filtered version of himself. But during a livestreamed conversation with NVIDIA CEO Jensen Huang, the Meta CEO seemed to veer a little more off script than he intended.

The conversation began normally enough, with the two billionaire executives congratulating each other on their AI dominance. Zuckerberg made sure to talk up the company’s recent AI Studio announcement before settling into his usual talking points, which recently have included pointed criticism of Apple.

Zuckerberg then launched into a lengthy rant about his frustrations with “closed” ecosystems like Apple’s App Store. None of that is particularly new, as the Meta founder has been feuding with Apple for years. But then Zuckerberg, who is usually quite controlled in his public appearances, revealed just how frustrated he is, telling Huang that his reaction to being told “no” is “fuck that.”

“I mean, this is sort of selfish, but, you know, after building this company for awhile, one of my things for the next 10 or 15 years is like, I just want to make sure that we can build the fundamental technology that we're going to be building social experiences on, because there just have been too many things that I've tried to build and then have just been told ‘nah you can't really build that by the platform provider,’ that at some level I'm just like, ‘nah, fuck that,’” Zuckerberg said.

“There goes our broadcast opportunity,” Huang said. “Sorry,” Zuckerberg said. “Get me talking about closed platforms, and I get angry.”

This article originally appeared on Engadget at https://www.engadget.com/mark-zuckerberg-says-fck-that-to-closed-platforms-235700788.html?src=rss

Instagram creators can now make AI doppelgangers to chat with their followers

The next time you DM a creator on Instagram, you might get a reply from their AI. Meta is starting to roll out its AI Studio, a set of tools that will allow Instagram creators to make an AI persona that can answer questions and chat with their followers and fans on their behalf.

The company first introduced AI Studio at its Connect event last fall, but it only recently began to test creator-made AIs with a handful of prominent Instagrammers. Now, Meta is making the tools available to more US-based creators and giving the rest of its users the chance to experiment with specialized AI “characters.”

According to Meta, the new creator AIs are meant to address a long-running issue for Instagram users with large followings: it can be nearly impossible for the service’s most popular users to keep up with the flood of messages they receive every day. Now, though, they’ll be able to make an AI that functions as “an extension of themselves,” says Connor Hayes, who is VP of Product for AI Studio at Meta.

“These creators can actually use the comments that they've made, the captions that they've made, the transcripts of the Reels that they've posted, as well as any custom instructions or links that they want to provide … so that the AI can answer on their behalf,” Hayes tells Engadget.

Mark Zuckerberg has suggested he has big ambitions for such chatbots. In a recent interview with Bloomberg he said he expects there will eventually be “hundreds of millions” of creator-made AIs on Meta’s apps. However, it’s unclear if Instagram users will be as interested in engaging with AI versions of their favorite creators. Meta previously experimented with AI chatbots that took on the personalities of celebrities like Snoop Dogg and Kendall Jenner, but those “characters” proved to be largely underwhelming. Those chatbots have now been phased out, The Information reported.

“One thing that ended up being somewhat confusing for people was, ‘am I talking to the celebrity that is embodying this AI, or am I talking to an AI and they're playing the character,’” Meta’s Hayes says about the celebrity-branded chatbots. “We think that going in this direction where the public figures can represent themselves, or an AI that's an extension of themselves, will be a lot clearer.”

Anyone can create an AI

AI Studio isn’t just for creators, though. Meta will also allow any user to create custom AI “characters” that can chat about specific topics, make memes or offer advice. Like the creator-focused characters, these chatbots will be powered by Meta’s new Llama 3.1 model. Users can share their chatbot creations and track how many people are using them, though they won’t be able to view other users’ interactions with them.

The new chatbots are the latest way Meta has pushed its users to spend more time with its AI as it crams Meta AI into more and more places in its apps. But Meta AI has also at times struggled to relay accurate information. In a blog post, Meta notes that it has “policies and protections in place to keep people safe and help ensure AIs are used responsibly.”

Screenshots provided by the company show that chats with the new AI characters will also have a familiar disclaimer: “Some messages generated by AI may be inaccurate or inappropriate.”

Update July 30, 2024, 4:35 PM PT: This story was updated with additional information about Meta's celebrity-branded chatbots.

This article originally appeared on Engadget at https://www.engadget.com/instagram-creators-can-now-make-ai-doppelgangers-to-chat-with-their-followers-220052768.html?src=rss

Meta needs updated rules for sexually explicit deepfakes, Oversight Board says

Meta’s Oversight Board is urging the company to update its rules around sexually explicit deepfakes. The board made the recommendations as part of its decision in two cases involving AI-generated images of public figures.

The cases stem from two user appeals over AI-generated images of public figures, though the board declined to name the individuals. One post, which originated on Instagram, depicted a nude Indian woman. The post was reported to Meta but the report was automatically closed after 48 hours, as was a subsequent user appeal. The company eventually removed the post after attention from the Oversight Board, which nonetheless overturned Meta’s original decision to leave the image up.

The second post, which was shared to a Facebook group dedicated to AI art, showed “an AI-generated image of a nude woman with a man groping her breast.” Meta automatically removed the post because it had been added to an internal system that can identify images that have been previously reported to the company. The Oversight Board found that Meta was correct to have taken the post down.

In both cases, the Oversight Board said the AI deepfakes violated the company’s rules barring “derogatory sexualized photoshop” images. But in its recommendations to Meta, the Oversight Board said the current language used in these rules is outdated and may make it more difficult for users to report AI-made explicit images.

Instead, the board says Meta should update its policies to make clear that it prohibits non-consensual explicit images that are AI-made or manipulated. “Much of the non-consensual sexualized imagery spread online today is created with generative AI models that either automatically edit existing images or create entirely new ones,” the board writes. “Meta should ensure that its prohibition on derogatory sexualized content covers this broader array of editing techniques, in a way that is clear to both users and the company’s moderators.”

The board also called out Meta’s practice of automatically closing user appeals, which it said could have “significant human rights impacts” on users. However, the board said it didn’t have “sufficient information” about the practice to make a recommendation.

The spread of explicit AI images has become an increasingly prominent issue as “deepfake porn” has become a more widespread form of online harassment in recent years. The board’s decision comes one day after the US Senate unanimously passed a bill cracking down on explicit deepfakes. If passed into law, the measure would allow victims to sue the creators of such images for as much as $250,000.

The cases aren’t the first time the Oversight Board has pushed Meta to update its rules for AI-generated content. In another high-profile case, the board investigated a maliciously edited video of President Joe Biden. The case ultimately resulted in Meta revamping its policies around how AI-generated content is labeled.

This article originally appeared on Engadget at https://www.engadget.com/meta-needs-updated-rules-for-sexually-explicit-deepfakes-oversight-board-says-100005969.html?src=rss

Meta takes down 63,000 Instagram accounts linked to extortion scams

Meta has taken down tens of thousands of Instagram accounts from Nigeria as part of a massive crackdown on sextortion scams. The accounts primarily targeted adult men in the United States, but some also targeted minors, Meta said in an update.

The takedowns are part of a larger effort by Meta to combat sextortion scams on its platform in recent months. Earlier this year, the company added a safety feature in Instagram messages to automatically detect nudity and warn users about potential blackmail scams. The company also provides in-app resources and safety tips about such scams.

According to Meta, the recent takedowns included 2,500 accounts that were linked to a group of about 20 people who worked together to carry out sextortion scams. The company also took down thousands of accounts and groups on Facebook that provided tips and other advice, including scripts and fake images, for would-be sextortionists. Those accounts were linked to the Yahoo Boys, a group of “loosely organized cybercriminals operating largely out of Nigeria that specialize in different types of scams,” Meta said.

Meta has come under particular scrutiny for not doing enough to protect teens from sextortion on its apps. During a Senate hearing earlier this year, Senator Lindsey Graham pressed Mark Zuckerberg on whether the parents of a child who died by suicide after falling victim to such a scam should be able to sue the company.

Though the company said that the “majority” of the scammers it uncovered in its latest takedowns targeted adults, it confirmed that some of the accounts had targeted minors as well and that those accounts had also been reported to the National Center for Missing and Exploited Children (NCMEC).

This article originally appeared on Engadget at https://www.engadget.com/meta-takes-down-63000-instagram-accounts-linked-to-extortion-scams-175118067.html?src=rss

How false nostalgia inspired noplace, a Myspace-like app for Gen Z

Already fascinated with Y2K-era tech, some members of Gen Z have wondered what those early, simpler social networks were like. Now, they can get an idea thanks to a new app called noplace, which recreates some aspects of Myspace more than a decade after its fall from its perch as the most-visited site in the US.

The app officially launched earlier this month and briefly claimed the No. 1 spot in Apple’s App Store. Dreamed up by Gen Z founder Tiffany Zhong, noplace bills itself as both a throwback and an alternative to mainstream social media algorithms and the creator culture that comes with them. “I missed how social media used to be back in the day … where it was actually social, people would post random updates about their life,” Zhong tells Engadget. “You kind of had a sense of where people were in terms of time and space.”

Though Zhong says she never got to experience Myspace firsthand — she was in elementary school during its early 2000s peak — noplace manages to nail many of the platform’s signature elements. Each user starts with a short profile where they can add personal details like their relationship status and age, as well as a free-form “about me” section. Users can also share their interests and detail what they’re currently watching, playing, reading and listening to. And, yes, they can embed song clips. There’s even a “top 10” for highlighting your best friends (unclear if Gen Z is aware of how much trauma that particular Myspace feature inflicted on my generation).

Myspace, of course, was at its height years before smartphone apps with a unified “design language” became the dominant medium for browsing social media. But the highly customizable noplace profiles still manage to capture the vibe of the bespoke HTML and clashing color schemes that distinguished so many Myspace pages and websites on the early 2000s internet.


There are other familiar features. All new users are automatically friends with Zhong, which she confirms is a nod to Tom Anderson, otherwise known as “Myspace Tom.” And the app encourages users to add their interests, called “stars,” and search for like-minded friends.

Despite the many similarities — the app was originally named “nospace” — Zhong says noplace is about more than just recreating the look and feel of Myspace. The app has a complicated gamification scheme, where users are rewarded with in-app badges for reaching different “levels” as they use the app more. This system isn’t really explained in the app — Zhong says it’s intentionally “vague” — but levels loosely correspond to different actions like writing on friends’ walls and interacting with other users’ posts. There’s also a massive Twitter-like central feed where users can blast out quick updates to everyone else on the app.

It can feel a bit chaotic, but early adopters are already using it in some unexpected ways, according to Zhong. “Around 20% in the past week of posts have been questions,” she says, comparing it to the trend of Gen Z using TikTok and YouTube as a search engine. “The vision for what we're building is actually becoming a social search engine. Everyone thinks it's like a social network, but because people are asking questions already … we're building features where you can ask questions and you can get crowdsourced responses.”

That may sound ambitious for a (so far) briefly-viral social app, but noplace has its share of influential backers. Reddit founder Alexis Ohanian is among the company’s investors. And Zhong herself once made headlines in her prior role as a teenage analyst at a prominent VC firm.

For now, though, noplace feels more to me like a Myspace-inspired novelty, though I’m admittedly not the target demographic. But, as someone who was a teenager on actual Myspace, I often think that I’m grateful my teen years came long before Instagram or TikTok. Not because Myspace was simpler than today’s social media, but because logging off was so much easier.

Zhong sees the distinction a little differently: not as a matter of dial-up connections enforcing a separation between online and offline life, but as a matter of prioritizing self-expression over clout. “You're just chasing follower count versus being your true self,” Zhong says. “It makes sense how social networks have evolved that way, but it's media platforms. It's not a social network anymore.”

This article originally appeared on Engadget at https://www.engadget.com/how-false-nostalgia-inspired-noplace-a-myspace-like-app-for-gen-z-163813099.html?src=rss

Apple blog TUAW returns as an AI content farm

The Unofficial Apple Weblog (TUAW) has come back online nearly a decade after shutting down. But the once venerable source of Apple news appears to have been transformed by its new owners into an AI-generated content farm.

The site, which ceased operations in 2015, began publishing “new” articles over the past week, many of which appear to be nearly identical to content published by MacRumors and other publications. But those posts bear the bylines of writers who last worked for TUAW more than a decade ago. The site also has an author page featuring the names of former writers along with photos that appear to be AI-generated.

Christina Warren, who last wrote for TUAW in 2009, flagged the sketchy tactic in a post on Threads. “Someone bought the TUAW domain, populated it with AI-generated slop, and then reused my name from a job I had when I was 21 years old to try to pull some SEO scam that won’t even work in 2024 because Google changed its algo,” she wrote.

Originally started in 2004, TUAW was shut down by AOL in 2015. Much of the site’s original archive can still be found on Engadget. Yahoo, which owns Engadget, sold the TUAW domain to an entity called “Web Orange Limited” in 2024, according to a statement on TUAW’s website.

The sale, notably, did not include the TUAW archive. But, it seems that Web Orange Limited found a convenient (if legally dubious) way around that. “With a commitment to revitalize its legacy, the new team at Web Orange Limited meticulously rewrote the content from archived versions available on archive.org, ensuring the preservation of TUAW’s rich history while updating it to meet modern standards and relevance,” the site’s about page states.

TUAW doesn’t say if AI was used in those “rewrites,” but a comparison between the original archive on Engadget and the “rewritten” content on TUAW suggests that Web Orange Limited put little effort into the task. “The article ‘rewrites’ aren’t even assigned to the correct names,” Warren tells Engadget. “It has stuff for me going back to 2004. I didn’t start writing for the site until 2007.”

TUAW didn’t immediately respond to emailed questions about its use of AI or why it was using the bylines of former writers with AI-generated profile photos. Yahoo didn't immediately respond to a request for comment. 

Update July 10, 2024, 11:05 AM ET: After this story was published, the TUAW website updated its author pages to remove many of the names of former staffers. Many have been swapped with generic-sounding names. "Christina Warren" has been changed to "Mary Brown." TUAW still hasn't responded to questions. 

This article originally appeared on Engadget at https://www.engadget.com/apple-blog-tuaw-returns-as-an-ai-content-farm-225326136.html?src=rss

What Meta should change about Threads, one year in

It’s been a year since Meta pushed out Threads in an attempt to take on the platform now known as X. At the time, Mark Zuckerberg said that he hoped it would turn into “a public conversations app with 1 billion+ people on it.”

Meta’s timing was good. Threads launched at a particularly chaotic moment for Twitter, when many people were seeking out alternatives. Threads saw 30 million sign-ups in its first day and the app has since grown to 175 million monthly users, according to Zuckerberg. (X has 600 million monthly users, according to Elon Musk.)

But the earliest iteration of Threads still felt a little bit broken. There was no web version, and many features were missing. The company promised interoperability with ActivityPub, the open-source standard that powers Mastodon and other apps in the fediverse, but integration remains minimal.

One year later, it’s still not really clear what Threads is actually for. Its leader has said that “the goal isn’t to replace Twitter” but to create a “public square” for Instagram users and a “less angry place for conversations.” But the service itself still has a number of issues that prevent it from realizing that vision. If Meta really wants to make that happen, here’s what it should change.

If you follow me on Threads, then you probably already know this is my top complaint. But Meta desperately needs to fix the algorithm that powers Threads’ default “For You” feed. The algorithmic feed, which is the default view in both the app and website, is painfully slow. It often surfaces days-old posts, even during major, newsworthy moments when many people are posting about the same topic.

It’s so bad it’s become a running meme to post something along the lines of “I can’t wait to read about this on my ‘For You’ feed tomorrow,” every time there’s a major news event or trending story.

The algorithmic feed is also downright bizarre. For a platform that was built off of Instagram, an app that has extremely fine-tuned recommendations and more than a decade of data about the topics I’m interested in, Threads appears to use none of it. Instead, it has a strange preference for intense personal stories from accounts I’m entirely unconnected to.

In the last year, I’ve seen countless multi-part Threads posts from complete strangers detailing childhood abuse, eating disorders, chronic illnesses, domestic violence, pet loss and other unimaginable horrors. These are not posts I’m seeking out by any means, yet Meta’s algorithm shoves them to the top of my feed.

I’ve aggressively used Threads' swipe gestures to try to rid my feed of excessive trauma dumping, and it’s helped to some extent. But it hasn’t improved the number of strange posts I see from completely random individuals. At this moment the top two posts in my feed are from an event planner offering to share wedding tips and a woman describing a phone call from her health insurance company. (Both posts are 12 hours old.) These types of posts have led to blogger Max Read dubbing Threads the “gas leak social network” because they make it feel as if everyone is “suffering some kind of minor brain damage.”

Look, I get why Meta has been cautious when it comes to content moderation on Threads. The company doesn’t exactly have a great track record on issues like extremism, health misinformation or genocide-inciting hate speech. It’s not surprising they would want to avoid similar headlines about Threads.

But if Meta wants Threads to be a “public square,” it can’t preemptively block searches for topics like COVID-19 and vaccines just because they are “potentially sensitive.” (Instagram head Adam Mosseri claimed this measure was “temporary” last October.) If Meta wants Threads to be a “public square,” it shouldn’t automatically throttle political content from users’ recommendations; and Threads’ leaders shouldn’t assume that users don’t want to see news.

A year in, it’s painfully clear that a platform like Threads is hamstrung without a proper direct messaging feature. For some reason, Threads’ leaders, especially Mosseri, have been adamantly opposed to creating a separate inbox for the app.

Instead, users hoping to privately connect with someone on Threads are forced to switch over to Instagram and hope the person they are trying to reach accepts new message requests. There is an in-app way to send a Threads post to an Instagram friend, but this depends on you already being connected on Instagram.

Why Threads can’t have its own messaging feature isn’t exactly clear. Mosseri has suggested that it doesn’t make sense to build a new inbox for the app, but that ignores the fact that many people use Instagram and Threads very differently. Which brings me to…

Meta has said that the reason why it was able to get Threads out the door so quickly was largely thanks to Instagram. Threads was created using a lot of Instagram’s code and infrastructure, which also helped the company get tens of millions of people to sign up for the app on day one.

But continuing to require an Instagram account to use Threads makes little sense a year on. For one, it shuts out a not-insignificant number of people who may be interested in Threads but don’t want to be on Instagram.

There’s also the fact that the apps, though they share some design elements, are completely different kinds of services. And many people, myself included, use Instagram and Threads very differently.

A “public square” platform like Threads works best for public-facing accounts where conversations can have maximum visibility. But most people I know use their Instagram accounts for personal updates, like family photos. And while you can have different visibility settings for each app, you shouldn’t be forced to link the two accounts. This also means that if you want to use Threads anonymously, you would need to create an entirely new Instagram account to serve as a login for the corresponding Threads account.

It seems that Meta is at least considering this. Mosseri said in an interview with Platformer that the company is “working on things like Threads-only accounts” and wants the app to become “more independent.”

These aren’t the only factors that will determine whether Threads will be, as Zuckerberg has speculated, Meta’s next 1 billion-user app. Meta will, eventually, need to make money from the service, which is currently advertising-free. But before Meta’s multibillion-dollar ad machine can be pointed at Threads, the company will need to better explain who its newest app is actually for.

This article originally appeared on Engadget at https://www.engadget.com/what-meta-should-change-about-threads-one-year-in-173036784.html?src=rss