Meta limits ‘political’ content recommendations on Instagram and Threads

Meta's relationship with politics and political content on its platforms has been a source of enormous controversy, with the company routinely accused of highlighting material designed to rile up users in the name of engagement. The company has, in recent years, tried to distance itself from that reputation and is now allowing users to restrict algorithmically suggested political content on both Threads and Instagram. Meta defines political content as "likely to mention governments, elections, or social topics that affect a group of people and/or society at large," which covers, in reality, almost everything. The option to limit this far-from-narrow set of posts is now rolling out to users, with the limit enabled by default, the company confirmed to The Verge.

Meta first announced the feature in February, sharing that the company wants "Instagram and Threads to be a great experience for everyone." The statement continued, "If you decide to follow accounts that post political content, we don't want to get between you and their posts, but we also don't want to proactively recommend political content from accounts you don't follow." Basically, if you turn on this feature, it will limit political content visibility through Explore, Reels, in-feed recommendations, and suggested users. Political posts from accounts you follow should appear in your feed as usual.

You can check if the feature has reached your account or turn it off in Instagram's settings. Just go to suggested content, and you'll see a tab called political content. Tap that, and there will be two options: limit or don't limit political content from accounts you don't follow. However, choosing to restrict it doesn't necessarily mean a total embargo. A note under the option specifies, "You might see less political or social topics in your suggested content." Whichever you choose will apply to both Instagram and Threads.

This article originally appeared on Engadget at https://www.engadget.com/meta-limits-political-content-recommendations-on-instagram-and-threads-123033533.html?src=rss

Ron DeSantis signs bill requiring parental consent for kids to join social media platforms in Florida

Florida Governor Ron DeSantis just signed into law a bill named HB 3 that creates much stricter guidelines about how kids under 16 can use and access social media. Notably, the law completely bans children younger than 14 from using these platforms at all.

The bill requires parent or guardian consent for 14- and 15-year-olds to make an account or use a pre-existing account on a social media platform. Additionally, the companies behind these platforms must abide by requests to delete these accounts within five business days. Failing to do so could rack up major fines, as much as $10,000 for each violation. These penalties increase to $50,000 per instance if it is ruled that the company participated in a “knowing or reckless” violation of the law.

As previously mentioned, anyone under the age of 14 will no longer be able to create or use social media accounts in Florida. The platforms must delete pre-existing accounts and any associated personal information. The bill doesn't name any specific social media platforms, but suggests that any service that promotes "infinite scrolling" will have to follow these new rules, as will those that display reaction metrics, live-streaming and auto-play videos. Email platforms are exempt.

This isn’t just going to change the online habits of kids. There’s also a mandated age verification component, though that only kicks in if the website or app contains a “substantial portion of material” deemed harmful to users under 18. Under the language of this law, Floridians visiting a porn site, for instance, will have to verify their age either through a proprietary system on the site itself or via a third-party service. News agencies are exempt from this part of the bill, even if they meet the materials threshold. 

Obviously, that brings up some very real privacy concerns. Nobody wants to enter their private information to look at, ahem, adult content. There’s a provision that gives websites the option to route users to an “anonymous age verification” system, which is defined as a third party that isn’t allowed to retain identifying information. Once again, any platform that doesn’t abide by this restriction could be subject to a $50,000 civil penalty for each instance.

This follows DeSantis vetoing a similar bill earlier this month. That bill would have banned teens under 16 from using social media apps, with no option for parental consent.

NetChoice, a trade association that represents social media platforms, has come out against the law, calling it unconstitutional. The group says that HB 3 will essentially impose an “ID for the internet”, arguing that the age verification component will have to widen to adequately track whether or not children under 14 are signing up for social media apps. NetChoice says “this level of data collection will put Floridians’ privacy and security at risk.”

Paul Renner, the state’s Republican House Speaker, said at a press conference for the bill signing that a “child in their brain development doesn’t have the ability to know that they’re being sucked in to these addictive technologies, and to see the harm, and step away from it. And because of that, we have to step in for them.”

The new law goes into effect on January 1, but it could face some legal challenges. Renner said he expects social media companies to “sue the second after this is signed” and DeSantis acknowledged that the law will likely be challenged on First Amendment grounds, according to the Associated Press.

Florida isn’t the first state to try to separate kids from their screens. In Arkansas, a federal judge recently blocked enforcement of a law that required parental consent for minors to create new social media accounts. The same thing happened in California. A similar law passed in Utah, but was hit with a pair of lawsuits that forced state reps back to the drawing board. On the federal side of things, the Protecting Kids on Social Media Act would require parental consent for kids under 18 to use social media and, yeah, there’s that whole TikTok ban thing.

This article originally appeared on Engadget at https://www.engadget.com/ron-desantis-signs-bill-requiring-parental-consent-for-kids-to-join-social-media-platforms-in-florida-192116891.html?src=rss

Spotify launches educational video courses in the UK

There was once a time when you went to one place for music, another for education, and so on, but many companies are now attempting to become jacks of all trades to compete for survival. The latest example is Spotify, which has announced a test of video-based learning courses. The new feature joins the platform's music, podcasts and audiobooks lineup. 

Spotify has teamed up with a range of content partners: BBC Maestro, PLAYvirtuoso, Thinkific Labs Inc. and Skillshare. They offer content in four main categories: making music, getting creative, learning business and healthy living. "With this offer, we are exploring a potential opportunity to provide educational creators with a new audience who can access their video content, reaching a bigger potential swath of engaged Spotify users while expanding our catalog," Spotify stated in the announcement. The platform claims that around half of users have "engaged" in self-help or educational podcasts.

The test courses are available only to UK users, with free and premium subscribers receiving at least two free lessons per course. The series will range in price from £20 ($25) to £80 ($101), regardless of a person's subscription tier. Users can access them on mobile or desktop. Exact pricing and availability might change if the feature moves past the test phase. 

This foray into video-based courses follows shortly after Spotify introduced music videos in beta. They're available on select tracks and, like the classes, aren't available to US subscribers (the UK is among the 11 countries with access). 

This article originally appeared on Engadget at https://www.engadget.com/spotify-launches-educational-video-courses-in-the-uk-131559272.html?src=rss

SAG-AFTRA ratifies TV animation contracts that establish AI protections for voice actors

SAG-AFTRA has ratified new contracts for voice actors working in TV animation after members’ votes came in at over 95 percent in favor of the terms. The three-year agreements put into place new protections around the use of AI, including a requirement that producers obtain an actor’s consent before using their name as a prompt to create an AI-generated voice. SAG-AFTRA announced the contracts’ approval on Friday night. They’ll be effective through June 30, 2026.

Per the new contracts, “the term ‘voice actor’ only includes humans.” The contracts also outline voice actors’ rights around studios’ use of their digital replicas, and require producers to notify and bargain with the union any time they use AI-generated voices instead of voice actors. “This is the first SAG-AFTRA animation voiceover contract with protections against the misuse of artificial intelligence,” TV Animation Negotiating Committee Co-Chairs Bob Bergen and David Jolliffe said in a statement.

SAG-AFTRA’s Executive Director and Chief Negotiator Duncan Crabtree-Ireland said the agreement “represents a meaningful step forward in expanding our A.I. protections,” along with providing “important new terms in the areas of foreign residuals, high-budget SVOD [subscription video-on-demand] productions, late payments and much more.” The contracts establish a series of wage increases, starting with a 7 percent increase dated back to July 1, 2023, which actors will receive retroactive payments for. That will be followed by a 4 percent increase July 1 of this year, and a 3.5 percent increase the following year.

The union earlier this year announced that it had reached a deal with the AI voice generation company Replica Studios to give voice actors a way to “safely” license their digital voice replicas for video games. AI protections were also a crucial component of the strike-ending deal SAG-AFTRA reached with Hollywood studios late last year.

This article originally appeared on Engadget at https://www.engadget.com/sag-aftra-ratifies-tv-animation-contracts-that-establish-ai-protections-for-voice-actors-190911363.html?src=rss

The Morning After: Neuralink’s first human patient plays chess with his mind

Good morning. I hope you're having a good weekend so far. Unfortunately, our recording schedule meant I didn't get to shoehorn in the fact that the Department of Justice filed an antitrust lawsuit against Apple — it'll pop up again and again for the next six months — but we do have Apple striking a possible deal with Google to use its Gemini AI in future iPhones. Yes, I didn't see that coming, either. 

If you're one of our money-to-spend readers, prepare for Dyson's next-gen robot vacuum, which is finally debuting in the US. It's a mere $1,200. Sorry, $1,199.

This week's stories:

🧠➡️💻 The first human Neuralink patient controlling a computer with his thoughts

🤖🧹 Dyson enters the US robot vacuum market with the 360 Vis Nav

🍎🤖 Apple wants to bring Google's Gemini AI to iPhones

And read this:

Read as Engadget Editor (and Doctor Who critic) Daniel Cooper punches Disney+ in the solar plexus over its awful global release strategy for the next series featuring the Time Lord. The first two hour-long episodes land on May 11 and will then air on BBC One later that day in prime time. But that initial online launch is at midnight if you're in the UK. Dan lives in the UK. Daniel is not happy. 

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-neuralinks-first-human-patient-plays-chess-with-his-mind-150031220.html?src=rss

Instagram porn bots’ latest tactic is ridiculously low-effort, but it’s working

Porn bots are more or less ingrained in the social media experience, despite platforms’ best efforts to stamp them out. We’ve grown accustomed to seeing them flooding the comments sections of memes and celebrities’ posts, and, if you have a public account, you’ve probably noticed them watching and liking your stories. But their behavior keeps changing ever so slightly to stay ahead of automated filters, and now things are starting to get weird.

While porn bots at one time mostly tried to lure people in with suggestive or even overtly raunchy hook lines (like the ever-popular, “DON'T LOOK at my STORY, if you don't want to MASTURBATE!”), the approach these days is a little more abstract. It’s become common to see bot accounts posting a single, inoffensive, completely-irrelevant-to-the-subject word, sometimes accompanied by an emoji or two. On one post I stumbled across recently, five separate spam accounts all using the same profile picture — a closeup of a person in a red thong spreading their asscheeks — commented, “Pristine 🌿,” “Music 🎶,” “Sapphire 💙,” “Serenity 😌” and “Faith 🙏.”

Another bot — its profile picture a headless frontal shot of someone’s lingerie-clad body — commented on the same meme post, “Michigan 🌟.” Once you’ve noticed them, it’s hard not to start keeping a mental log of the most ridiculous instances. “🦄agriculture” one bot wrote. On another post: “terror 🌟” and “😍🙈insect.” The bizarre one-word comments are everywhere; the porn bots, it seems, have completely lost it.

Really, what we’re seeing is the emergence of another avoidance maneuver scammers use to help their bots slip by Meta’s detection technology. That, and they might be getting a little lazy.

“They just want to get into the conversation, so having to craft a coherent sentence probably doesn't make sense for them,” Satnam Narang, a research engineer for the cybersecurity company Tenable, told Engadget. Once scammers get their bots into the mix, they can have other bots pile likes onto those comments to further elevate them, explains Narang, who has been investigating social media scams since the MySpace days.

Using random words helps scammers fly under the radar of moderators who may be looking for particular keywords. In the past, they’ve tried methods like putting spaces or special characters between every letter of words that might be flagged by the system. “You can't necessarily ban an account or take an account down if they just comment the word ‘insect’ or ‘terror,’ because it's very benign,” Narang said. “But if they're like, ‘Check my story,’ or something… that might flag their systems. It’s an evasion technique and clearly it's working if you're seeing them on these big name accounts. It's just a part of that dance.”
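The obfuscation tactic Narang describes is easy to illustrate with a toy keyword filter. This is purely an illustrative sketch (the blocklist and logic are invented for this example; Meta's actual detection systems are far more sophisticated), but it shows why inserting punctuation between letters defeats naive substring matching, and why normalizing the text first catches it:

```python
import re

# Toy blocklist for illustration only -- not Meta's actual keyword list
BLOCKED_PHRASES = {"check my story", "masturbate"}

def naive_filter(comment: str) -> bool:
    """Flag a comment if it contains a blocked phrase verbatim."""
    text = comment.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

def normalized_filter(comment: str) -> bool:
    """Strip spacing/punctuation tricks before matching."""
    # Drop everything that isn't a letter, so "s-t-o-r-y" becomes "story"
    squashed = re.sub(r"[^a-z]", "", comment.lower())
    return any(phrase.replace(" ", "") in squashed for phrase in BLOCKED_PHRASES)

# Obfuscated spam slips past the naive check but not the normalized one...
assert naive_filter("C.h.e.c.k my s-t-o-r-y") is False
assert normalized_filter("C.h.e.c.k my s-t-o-r-y") is True
# ...while a benign one-word comment like "insect" flags neither,
# which is exactly the gap the bots are now exploiting.
assert naive_filter("insect") is False
assert normalized_filter("insect") is False
```

The last two assertions are the point of the article: once a bot's comment carries no flaggable keyword at all, no amount of text normalization helps, and moderation has to fall back on behavioral signals instead.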

That dance is one social media platforms and bots have been doing for years, seemingly to no end. Meta has said it stops millions of fake accounts from being created on a daily basis across its suite of apps, and catches “millions more, often within minutes after creation.” Yet spam accounts are still prevalent enough to show up in droves on high traffic posts and slip into the story views of even users with small followings.

The company’s most recent transparency report, which includes stats on fake accounts it’s removed, shows Facebook nixed over a billion fake accounts last year alone, but currently offers no data for Instagram. “Spammers use every platform available to them to deceive and manipulate people across the internet and constantly adapt their tactics to evade enforcement,” a Meta spokesperson said. “That is why we invest heavily in our enforcement and review teams, and have specialized detection tools to identify spam.”

Last December, Instagram rolled out a slew of tools aimed at giving users more visibility into how it’s handling spam bots and giving content creators more control over their interactions with these profiles. Account holders can now, for example, bulk-delete follow requests from profiles flagged as potential spam. Instagram users may also have noticed the more frequent appearance of the “hidden comments” section at the bottom of some posts, where comments flagged as offensive or spam can be relegated to minimize encounters with them.

“It's a game of whack-a-mole,” said Narang, and scammers are winning. “You think you've got it, but then it just pops up somewhere else.” Scammers, he says, are very adept at figuring out why they got banned and finding new ways to skirt detection accordingly.

One might assume social media users today would be too savvy to fall for obviously bot-written comments like “Michigan 🌟,” but according to Narang, scammers’ success doesn’t necessarily rely on tricking hapless victims into handing over their money. They’re often participating in affiliate programs, and all they need is to get people to visit a website — usually branded as an “adult dating service” or the like — and sign up for free. The bots’ “link in bio” typically directs to an intermediary site hosting a handful of URLs that may promise XXX chats or photos and lead to the service in question.

Scammers can get a small amount of money, say a dollar or so, for every real user who makes an account. In the off chance that someone signs up with a credit card, the kickback would be much higher. “Even if one percent of [the target demographic] signs up, you're making some money,” Narang said. “And if you're running multiple, different accounts and you have different profiles pushing these links out, you're probably making a decent chunk of change.” Instagram scammers are likely to have spam bots on TikTok, X and other sites too, Narang said. “It all adds up.”
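Narang's back-of-the-envelope economics can be sketched in a few lines. Every number below is an assumption chosen for illustration (only the roughly-a-dollar-per-free-signup figure comes from the article); the point is how small conversion rates still add up across many bot accounts:

```python
# Hypothetical affiliate-scam revenue estimate -- all inputs are assumptions
impressions = 100_000   # people who see a bot's comment on a viral post
click_rate = 0.02       # fraction who visit the link-in-bio
signup_rate = 0.01      # "even if one percent signs up" (free accounts)
payout_free = 1.00      # ~$1 kickback per free signup, per the article
card_rate = 0.001       # rare visitors who sign up with a credit card
payout_card = 40.00     # assumed higher kickback for card signups

visitors = impressions * click_rate
revenue = (visitors * signup_rate * payout_free
           + visitors * card_rate * payout_card)
print(f"{visitors:.0f} visitors -> ${revenue:.2f} from one account's traffic")
```

Run a handful of accounts like this across Instagram, TikTok and X and, as Narang puts it, "it all adds up."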

The harms from spam bots go beyond whatever headaches they may ultimately cause the few who have been duped into signing up for a sketchy service. Porn bots primarily use real people’s photos that they’ve stolen from public profiles, which can be embarrassing once the spam account starts friend requesting everyone the depicted person knows (speaking from personal experience here). The process of getting Meta to remove these cloned accounts can be a draining effort.

Their presence also adds to the challenges that real content creators in the sex and sex-related industries face on social media, which many rely on as an avenue to connect with wider audiences but must constantly fight with to keep from being deplatformed. Imposter Instagram accounts can rack up thousands of followers, funneling potential visitors away from the real accounts and casting doubt on their legitimacy. And real accounts sometimes get flagged as spam in Meta’s hunt for bots, putting those with racy content even more at risk of account suspension and bans.

Unfortunately, the bot problem isn’t one that has any easy solution. “They're just continuously finding new ways around [moderation], coming up with new schemes,” Narang said. Scammers will always follow the money and, to that end, the crowd. While porn bots on Instagram have evolved to the point of posting nonsense to avoid moderators, more sophisticated bots chasing a younger demographic on TikTok are posting somewhat believable commentary on Taylor Swift videos, Narang says.

The next big thing in social media will inevitably emerge sooner or later, and they’ll go there too. “As long as there's money to be made,” Narang said, “there's going to be incentives for these scammers.”

This article originally appeared on Engadget at https://www.engadget.com/instagram-porn-bots-latest-tactic-is-ridiculously-low-effort-but-its-working-181130528.html?src=rss

Researchers ask Meta to keep CrowdTangle online until after 2024 elections

The Mozilla Foundation and dozens of other research and advocacy groups are pushing back on Meta’s decision to shut down its research tool, CrowdTangle, later this year. In an open letter, the group calls on Meta to keep CrowdTangle online until after the 2024 elections, saying the shutdown will harm their ability to track election misinformation in a year when “approximately half the world’s population” is slated to vote.

The letter, published by the Mozilla Foundation and signed by 90 groups as well as the former CEO of CrowdTangle, comes one week after Meta confirmed it would shut down the tool in August 2024. “Meta’s decision will effectively prohibit the outside world, including election integrity experts, from seeing what’s happening on Facebook and Instagram — during the biggest election year on record,” the letter writers say.

“This means almost all outside efforts to identify and prevent political disinformation, incitements to violence, and online harassment of women and minorities will be silenced. It’s a direct threat to our ability to safeguard the integrity of elections.” The group asks Meta to keep CrowdTangle online until January 2025, and to “rapidly onboard” election researchers onto its latest tools.

CrowdTangle has long been a source of frustration for Meta. It allows researchers, journalists and other groups to track how content is spreading across Facebook and Instagram. It’s also often cited by journalists in unflattering stories about Facebook and Instagram. For example, Engadget relied on CrowdTangle in an investigation into why Facebook Gaming was overrun with spam and pirated content in 2022. CrowdTangle was also the source for “Facebook’s Top 10,” a (now defunct) Twitter bot that posted daily updates on the most-interacted-with Facebook posts containing links. The project, created by a New York Times reporter, regularly showed far-right and conservative pages over-performing, leading Facebook executives to argue the data wasn't an accurate representation of what was actually popular on the platform.

With CrowdTangle set to shut down, Meta is instead highlighting a new program called the Meta Content Library, which provides researchers with new tools to access publicly-accessible data in a streamlined way. The company has said it’s more powerful than what CrowdTangle enabled, but it’s also much more strictly controlled. Researchers from nonprofits and academic institutions must apply, and be approved, in order to access it. And since the vast majority of newsrooms are for-profit entities, most journalists will be automatically ineligible for access (it’s not clear if Meta would allow reporters at nonprofit newsrooms to use the Content Library.)

The other issue, according to Brandon Silverman, CrowdTangle’s former CEO, who left Meta in 2021, is that the Meta Content Library isn’t currently powerful enough to be a full CrowdTangle replacement. “There are some areas where the MCL has way more data than CrowdTangle ever had, including reach and comments in particular,” Silverman wrote in a post on Substack last week. “But there are also some huge gaps in the tool, both for academics and civil society, and simply arguing that it has more data isn’t a claim that regulators or the press should take seriously.”

In a statement on X, Meta spokesperson Andy Stone said that “academic and nonprofit institutions pursuing scientific or public interest research can apply for access” to the Meta Content Library, including nonprofit election experts. “The Meta Content Library is designed to contain more comprehensive data than CrowdTangle.”

This article originally appeared on Engadget at https://www.engadget.com/researchers-ask-meta-to-keep-crowdtangle-online-until-after-2024-elections-211527731.html?src=rss