How false nostalgia inspired noplace, a Myspace-like app for Gen Z

Already fascinated with Y2K-era tech, some members of Gen Z have wondered what those early, simpler social networks were like. Now, they can get an idea thanks to a new app called noplace, which recreates some aspects of Myspace more than a decade after its fall from its perch as the most-visited site in the US.

The app officially launched earlier this month and briefly made the No. 1 spot in Apple’s App Store. Dreamed up by Gen Z founder Tiffany Zhong, noplace bills itself as both a throwback and an alternative to mainstream social media algorithms and the creator culture that comes with them. “I missed how social media used to be back in the day … where it was actually social, people would post random updates about their life,” Zhong tells Engadget. “You kind of had a sense of where people were in terms of time and space.”

Though Zhong says she never got to experience Myspace firsthand — she was in elementary school during its early 2000s peak — noplace manages to nail many of the platform’s signature elements. Each user starts with a short profile where they can add personal details like their relationship status and age, as well as a free-form “about me” section. Users can also share their interests and detail what they’re currently watching, playing, reading and listening to. And, yes, they can embed song clips. There’s even a “top 10” for highlighting your best friends (unclear if Gen Z is aware of how much trauma that particular Myspace feature inflicted on my generation).

Myspace, of course, was at its height years before smartphone apps with a unified “design language” became the dominant medium for browsing social media. But the highly customizable noplace profiles still manage to capture the vibe of the bespoke HTML and clashing color schemes that distinguished so many Myspace pages and websites on the early 2000s internet.


There are other familiar features. All new users are automatically friends with Zhong, which she confirms is a nod to Tom Anderson, otherwise known as “Myspace Tom.” And the app encourages users to add their interests, called “stars,” and search for like-minded friends.

Despite the many similarities — the app was originally named “nospace” — Zhong says noplace is about more than just recreating the look and feel of Myspace. The app has a complicated gamification scheme, where users are rewarded with in-app badges for reaching different “levels” as they use the app more. This system isn’t really explained in the app — Zhong says it’s intentionally “vague” — but levels loosely correspond to different actions like writing on friends’ walls and interacting with other users’ posts. There’s also a massive Twitter-like central feed where users can blast out quick updates to everyone else on the app.

It can feel a bit chaotic, but early adopters are already using it in some unexpected ways, according to Zhong. “Around 20% in the past week of posts have been questions,” she says, comparing it to the trend of Gen Z using TikTok and YouTube as a search engine. “The vision for what we're building is actually becoming a social search engine. Everyone thinks it's like a social network, but because people are asking questions already … we're building features where you can ask questions and you can get crowdsourced responses.”

That may sound ambitious for a (so far) briefly-viral social app, but noplace has its share of influential backers. Reddit founder Alexis Ohanian is among the company’s investors. And Zhong herself once made headlines in her prior role as a teenage analyst at a prominent VC firm.

For now, noplace feels to me more like a Myspace-inspired novelty, though I’m admittedly not the target demographic. And as someone who was a teenager on actual Myspace, I’m grateful my teen years came long before Instagram or TikTok. Not because Myspace was simpler than today’s social media, but because logging off was so much easier.

Zhong sees the distinction a little differently, not as a matter of dial-up connections enforcing a separation between on and offline, but a matter of prioritizing self-expression over clout. “You're just chasing follower count versus being your true self,” Zhong says. “It makes sense how social networks have evolved that way, but it's media platforms. It's not a social network anymore.”

This article originally appeared on Engadget at https://www.engadget.com/how-false-nostalgia-inspired-noplace-a-myspace-like-app-for-gen-z-163813099.html?src=rss

Apple blog TUAW returns as an AI content farm

The Unofficial Apple Weblog (TUAW) has come back online nearly a decade after shutting down. But the once venerable source of Apple news appears to have been transformed by its new owners into an AI-generated content farm.

Over the past week, the site, which ceased operations in 2015, began publishing “new” articles, many of which appear nearly identical to content published by MacRumors and other publications. But those posts bear the bylines of writers who last worked for TUAW more than a decade ago. The site also has an author page featuring the names of former writers along with photos that appear to be AI-generated.

Christina Warren, who last wrote for TUAW in 2009, flagged the sketchy tactic in a post on Threads. “Someone bought the TUAW domain, populated it with AI-generated slop, and then reused my name from a job I had when I was 21 years old to try to pull some SEO scam that won’t even work in 2024 because Google changed its algo,” she wrote.

Originally started in 2004, TUAW was shut down by AOL in 2015. Much of the site’s original archive can still be found on Engadget. Yahoo, which owns Engadget, sold the TUAW domain to an entity called “Web Orange Limited” in 2024, according to a statement on TUAW’s website.

The sale, notably, did not include the TUAW archive. But, it seems that Web Orange Limited found a convenient (if legally dubious) way around that. “With a commitment to revitalize its legacy, the new team at Web Orange Limited meticulously rewrote the content from archived versions available on archive.org, ensuring the preservation of TUAW’s rich history while updating it to meet modern standards and relevance,” the site’s about page states.

TUAW doesn’t say if AI was used in those “rewrites,” but a comparison between the original archive on Engadget and the “rewritten” content on TUAW suggests that Web Orange Limited put little effort into the task. “The article ‘rewrites’ aren’t even assigned to the correct names,” Warren tells Engadget. “It has stuff for me going back to 2004. I didn’t start writing for the site until 2007.”

TUAW didn’t immediately respond to emailed questions about its use of AI or why it was using the bylines of former writers with AI-generated profile photos. Yahoo didn't immediately respond to a request for comment. 

Update July 10, 2024, 11:05 AM ET: After this story was published, the TUAW website updated its author pages to remove many of the names of former staffers. Many have been swapped with generic-sounding names. "Christina Warren" has been changed to "Mary Brown." TUAW still hasn't responded to questions. 

This article originally appeared on Engadget at https://www.engadget.com/apple-blog-tuaw-returns-as-an-ai-content-farm-225326136.html?src=rss

What Meta should change about Threads, one year in

It’s been a year since Meta pushed out Threads in an attempt to take on the platform now known as X. At the time, Mark Zuckerberg said that he hoped it would turn into “a public conversations app with 1 billion+ people on it.”

Meta’s timing was good. Threads launched at a particularly chaotic moment for Twitter, when many people were seeking out alternatives. Threads saw 30 million sign-ups in its first day and the app has since grown to 175 million monthly users, according to Zuckerberg. (X has 600 million monthly users, according to Elon Musk.)

But the earliest iteration of Threads still felt a little bit broken. There was no web version, and many features were missing. The company promised interoperability with ActivityPub, the open-source standard that powers Mastodon and other apps in the fediverse, but integration remains minimal.

One year later, it’s still not really clear what Threads is actually for. Its leader has said that "the goal isn’t to replace Twitter” but to create a “public square” for Instagram users and a “less angry place for conversations.” But the service itself still has a number of issues that prevent it from realizing that vision. If Meta really wants to make that happen, here’s what it should change.

If you follow me on Threads, then you probably already know this is my top complaint. But Meta desperately needs to fix the algorithm that powers Threads’ default “For You” feed. The algorithmic feed, which is the default view in both the app and website, is painfully slow. It often surfaces days-old posts, even during major, newsworthy moments when many people are posting about the same topic.

It’s so bad it’s become a running meme to post something along the lines of “I can’t wait to read about this on my ‘For You’ feed tomorrow,” every time there’s a major news event or trending story.

The algorithmic feed is also downright bizarre. For a platform that was built off of Instagram, an app that has extremely fine-tuned recommendations and more than a decade of data about the topics I’m interested in, Threads appears to use none of it. Instead, it has a strange preference for intense personal stories from accounts I’m entirely unconnected to.

In the last year, I’ve seen countless multi-part Threads posts from complete strangers detailing childhood abuse, eating disorders, chronic illnesses, domestic violence, pet loss and other unimaginable horrors. These are not posts I’m seeking out by any means, yet Meta’s algorithm shoves them to the top of my feed.

I’ve aggressively used Threads' swipe gestures to try to rid my feed of excessive trauma dumping, and it’s helped to some extent. But it hasn’t reduced the number of strange posts I see from completely random individuals. At this moment the top two posts in my feed are from an event planner offering to share wedding tips and a woman describing a phone call from her health insurance company. (Both posts are 12 hours old.) Posts like these led blogger Max Read to dub Threads the “gas leak social network” because they make it feel as if everyone is “suffering some kind of minor brain damage.”

Look, I get why Meta has been cautious when it comes to content moderation on Threads. The company doesn’t exactly have a great track record on issues like extremism, health misinformation or genocide-inciting hate speech. It’s not surprising they would want to avoid similar headlines about Threads.

But if Meta wants Threads to be a “public square,” it can’t preemptively block searches for topics like COVID-19 and vaccines just because they are “potentially sensitive.” (Instagram head Adam Mosseri claimed this measure was “temporary” last October.) If Meta wants Threads to be a “public square,” it shouldn’t automatically throttle political content from users’ recommendations; and Threads’ leaders shouldn’t assume that users don’t want to see news.

A year in, it’s painfully clear that a platform like Threads is hamstrung without a proper direct messaging feature. For some reason, Threads’ leaders, especially Mosseri, have been adamantly opposed to creating a separate inbox for the app.

Instead, users hoping to privately connect with someone on Threads are forced to switch over to Instagram and hope the person they are trying to reach accepts new message requests. There is an in-app way to send a Threads post to an Instagram friend but this depends on you already being connected on Instagram.

Why Threads can’t have its own messaging feature isn’t entirely clear. Mosseri has suggested that it doesn’t make sense to build a new inbox for the app, but that ignores the fact that many people use Instagram and Threads very differently. Which brings me to…

Meta has said that the reason why it was able to get Threads out the door so quickly was largely thanks to Instagram. Threads was created using a lot of Instagram’s code and infrastructure, which also helped the company get tens of millions of people to sign up for the app on day one.

But continuing to require an Instagram account to use Threads makes little sense a year on. For one, it shuts out a not-insignificant number of people who may be interested in Threads but don’t want to be on Instagram.

There’s also the fact that the apps, though they share some design elements, are completely different kinds of services. And many people, myself included, use Instagram and Threads very differently.

A “public square” platform like Threads works best for public-facing accounts where conversations can have maximum visibility. But most people I know use their Instagram accounts for personal updates, like family photos. And while you can have different visibility settings for each app, you shouldn’t be forced to link the two accounts. This also means that if you want to use Threads anonymously, you would need to create an entirely new Instagram account to serve as a login for the corresponding Threads account.

It seems that Meta is at least considering this. Mosseri said in an interview with Platformer that the company is “working on things like Threads-only accounts” and wants the app to become “more independent.”

These aren’t the only factors that will determine whether Threads will be, as Zuckerberg has speculated, Meta’s next 1 billion-user app. Meta will, eventually, need to make money from the service, which is currently advertising-free. But before Meta’s multibillion-dollar ad machine can be pointed at Threads, the company will need to better explain who its newest app is actually for.

This article originally appeared on Engadget at https://www.engadget.com/what-meta-should-change-about-threads-one-year-in-173036784.html?src=rss

Meta is changing its policy for the most-moderated word on its platforms

Meta is changing a long-running policy regarding the Arabic word “shaheed,” which has been described as the most-moderated word on the company’s apps. The company said in an update to the Oversight Board that use of the word alone would no longer result in a post’s removal.

The Oversight Board had criticized the company for a “blanket ban” on the word, which is often translated as “martyr,” though, as the board noted, it can have multiple meanings. Meta’s previous policy, however, didn’t take that “linguistic complexity” into account, which resulted in a disproportionate number of takedowns over a commonly used word. Shaheed, the board said earlier this year, “accounts for more content removals under the Community Standards than any other single word or phrase,” across the company’s apps.

In its latest update, Meta said that it had tested a new approach to moderating the word following a recommendation from the board. “Initial results from our assessment indicate that continuing to remove content when ‘Shaheed’ is paired with otherwise violating content – or when the three signals of violence outlined by the Board are present – captures the most potentially harmful content without disproportionality impacting voice,” the company wrote.

The change should have a significant impact on Meta’s Arabic-speaking users, who, according to the board, have been unfairly censored as a result of the policy. “The Oversight Board welcomes Meta’s announcement today that it will implement the Board’s recommendations and introduce significant changes to an unfair policy that led to the censoring of millions of people across its platforms,” the board said in a statement. “The policy changes on how to moderate the Arabic word ‘shaheed’ should have a swift impact on when content is removed, with a more nuanced approach ending a blanket ban on a term that Meta has acknowledged is one of the most over-enforced on its platforms.”

This article originally appeared on Engadget at https://www.engadget.com/meta-is-changing-its-policy-for-the-most-moderated-word-on-its-platforms-185016272.html?src=rss

Meta changes its labels for AI-generated images after complaints from photographers

Meta is updating its “Made with AI” labels after widespread complaints from photographers that the company was mistakenly flagging non-AI-generated content. In an update, the company said that it will change the wording to “AI info” because the current labels “weren’t always aligned with people’s expectations and didn’t always provide enough context.”

The company introduced the “Made with AI” labels earlier this year after criticism from the Oversight Board about its “manipulated media” policy. Meta said that, like many of its peers, it would rely on “industry standard” signals to determine when generative AI had been used to create an image. However, it wasn’t long before photographers began noticing that Facebook and Instagram were applying the badge on images that hadn’t actually been created with AI. According to tests conducted by PetaPixel, photos edited with Adobe’s generative fill tool in Photoshop would trigger the label even if the edit was only to a “tiny speck.”

While Meta didn’t name Photoshop, the company said in its update that “some content that included minor modifications using AI, such as retouching tools, included industry standard indicators” that triggered the “Made with AI” badge. “While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information.”

Somewhat confusingly, the new “AI info” labels won’t actually have any details about what AI-enabled tools may have been used for the image in question. A Meta spokesperson confirmed that the contextual menu that appears when users tap on the badge will remain the same. That menu has a generic description of generative AI and notes that Meta may add the notice “when people share content that has AI signals our systems can read.”

This article originally appeared on Engadget at https://www.engadget.com/meta-changes-its-labels-for-ai-generated-images-after-complaints-from-photographers-191533416.html?src=rss

Supreme Court remands social media moderation cases over First Amendment issues

Two state laws that could upend the way social media companies handle content moderation are still in limbo after a Supreme Court ruling sent the challenges back to lower courts, vacating previous rulings. In a 9-0 decision in Moody v. NetChoice and NetChoice v. Paxton, the Supreme Court said that earlier rulings in lower courts had not properly evaluated the laws’ impact on the First Amendment.

The cases stem from two state laws, from Texas and Florida, which tried to impose restrictions on social media companies’ ability to moderate content. The Texas law, passed in 2021, allows users to sue large social media companies over alleged “censorship” of their political views. The Supreme Court suspended the law in 2022 following a legal challenge. Meanwhile, the Florida measure, also passed in 2021, attempted to impose fines on social media companies for banning politicians. That law has also been on hold pending legal challenges.

Both laws were challenged by NetChoice, an industry group that represents Meta, Google, X and other large tech companies. NetChoice argued that the laws were unconstitutional and would essentially prevent large platforms from performing any kind of content moderation. The Biden Administration also opposed both laws. In a statement, NetChoice called the decision “a victory for First Amendment rights online.”

In a decision authored by Justice Elena Kagan, the court said that lower court rulings in both cases “concentrated” on the issue of “whether a state law can regulate the content-moderation practices used in Facebook’s News Feed (or near equivalents).” But, she writes, “they did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications.”

Essentially, the usually divided court agreed that the First Amendment implications of the laws could have broad impacts on parts of these sites unaffected by algorithmic sorting or content moderation (like direct messages, for instance) as well as on speech in general. Analysis of those externalities, Kagan wrote, simply never occurred in the lower court proceedings. The decision to remand means that analysis should take place, and the case may come back before SCOTUS in the future.

“In sum, there is much work to do below on both these cases … But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved,” Kagan wrote. 

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-remands-social-media-moderation-cases-over-first-amendment-issues-154001257.html?src=rss

Bluesky ‘starter packs’ help new users find their way

One of the most difficult parts of joining a new social platform is finding relevant accounts to follow. That has proved especially challenging for people who quit X to try out one of the many Twitter-like services that have cropped up in the last couple of years. Now, Bluesky has an interesting solution to this dilemma. The service introduced “starter packs,” which aim to address that initial discovery problem by allowing existing users to build lists of accounts and custom feeds oriented around specific interests or themes.

In a blog post, the company described the feature as a way to “bring friends directly into your slice of Bluesky.” Users can curate up to 50 accounts and three custom feeds into a “starter pack.” That list can then be shared broadly on Bluesky or sent to new users via a QR code. Other users can then opt to follow an entire “pack” all at once, or scroll through to manually add the accounts and feeds they want to follow.

Bluesky starter pack.
Bluesky

Though Bluesky seems to be positioning the feature as a tool for new users, it’s also useful for anyone who feels like their feed is getting a little stale or has been curious about one of the many subcultures that have emerged on the platform. I’ve been on Bluesky for well over a year and I’ve already found some interesting starter packs, including Bluesky for Journalists (for people interested in news content) and Starter Cats (for accounts that post cat photos).

Starter packs also highlight another one of Bluesky’s more interesting features: custom feeds. The open-source service allows users to create their own algorithmic feeds that others can subscribe to and follow, a bit like a list on X. Custom feeds were introduced last year and have also been an important discovery tool. But scrolling a massive list of custom feeds can be overwhelming. Pairing these feeds with curated lists of users, though, is a much easier way to find ones related to topics you're actually interested in.

This article originally appeared on Engadget at https://www.engadget.com/bluesky-starter-packs-help-new-users-find-their-way-234322177.html?src=rss

Meta’s Oversight Board made just 53 decisions in 2023

The Oversight Board has published its latest annual report looking at its influence on Meta and ability to shift the policies that govern Facebook and Instagram. The board says that in 2023 it received 398,597 appeals, the vast majority of which came from Facebook users. But it took on only a tiny fraction of those cases, issuing a total of 53 decisions.

The board suggests, however, that the cases it selects can have an outsize impact on Meta’s users. For example, it credits its work for influencing improvements to Meta’s strike system and the “account status” feature that helps users check if their posts have violated any of the company’s rules.

Sussing out the board’s overall influence, though, is more complicated. The group says that between January of 2021 and May of 2024, it has sent a total of 266 recommendations to Meta. Of those, the company has fully or partially implemented 75, and reported “progress” on 81. The rest have been declined, “omitted or reframed,” or else Meta has claimed some level of implementation but hasn’t offered proof to the board. (There are five recommendations currently awaiting a response.) Those numbers raise some questions about how much Meta is willing to change in response to the board it created.

The Oversight Board's tally of how Meta has responded to its recommendations.
Oversight Board

Notably, the report has no criticism for Meta and offers no analysis of Meta’s efforts (or lack thereof) to comply with its recommendations. The report calls out a case in which it recommended that Meta suspend the former prime minister of Cambodia for six months, noting that it overturned the company’s decision to leave up a video that could have incited violence. But the report makes no mention of the fact that Meta declined to suspend the former prime minister’s account and declined to further clarify its rules for public figures.

The report also hints at thorny topics the board may take on in the coming months. It mentions that it wants to look at content “demotion,” or what some Facebook and Instagram users may call “shadowbans” (the term is a loaded one for Meta, which has repeatedly denied that its algorithms intentionally punish users for no reason). “One area we are interested in exploring is demoted content, where a platform limits a post’s visibility without telling the user,” the Oversight Board writes.

For now, it’s not clear exactly how the group could tackle the issue. The board’s purview currently allows it to weigh in on specific pieces of content that Meta has removed or left up after a user appeal. But it’s possible the board could find another way into the issue. A spokesperson for the Oversight Board notes that the group expressed concern about demoted content in its opinion on content related to the Israel-Hamas war. “This is something the board would like to further explore as Meta’s decisions around demotion are pretty opaque,” the spokesperson said.

This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-made-just-53-decisions-in-2023-100017750.html?src=rss

A Meta ‘error’ broke the political content filter on Threads and Instagram

Earlier this year, Meta made the controversial decision to automatically limit political content from users’ recommendations in Threads and Instagram by default. The company said that it didn’t want to “proactively amplify” political posts and that users could opt-in via their Instagram settings if they did want to see such content.

But, it turns out, Meta continued to limit political content even for users who had opted in to seeing it. An unspecified “error” apparently caused the “political content” toggle — already buried several layers deep in Instagram's settings menu — to revert back to the “limit” setting each time the app closed. Political content, according to Meta, “is likely to mention governments, elections, or social topics that affect a group of people and/or society at large.”


The issue was flagged by Threads users, including Democratic strategist Keith Edwards, and confirmed by Engadget. It’s unclear how long the “error” was affecting users’ recommendations. “This was an error and should not have happened,” Meta spokesperson Andy Stone wrote on Threads. “We're working on getting it fixed.” Meta didn’t respond to questions about how long the setting had not been working properly.

The issue is likely to raise questions about Meta’s stance on political content. Though Threads is often compared to X, the company has taken an aggressive stance on content moderation, limiting the visibility of political content and outright blocking “potentially sensitive” topics, including anything related to COVID-19, from search results.

Stone later confirmed that the supposed bug had been fixed. "Earlier today, we identified an error in which people's selections in the Instagram political content settings tool mistakenly appeared to have reset even though no change had actually been made," he wrote on Threads. "The issue has now been fixed and we encourage people to check and make sure their settings reflect their preferences." 

Update June 26, 2024, 8:04 PM ET: Added additional comments from Meta spokesperson Andy Stone.

This article originally appeared on Engadget at https://www.engadget.com/a-meta-error-broke-the-political-content-filter-on-threads-and-instagram-173020269.html?src=rss