Back when I was a kid (puts on old man glasses) we had the Casio SK-1. We’d spend all day making samples of burps and turning them into stupid little songs, but that’s about as far as it went. You couldn’t layer tracks or anything. Modern children, however, are about to get an actual full-featured groovebox, thanks to Playtime Engineering.
The Blipblox myTRACKS is a complete music production studio, according to Playtime. It features a built-in microphone for sampling (just like the Casio SK-1) but also 50 instrument sounds and 25 pads to play them on. These sounds can be arranged into five tracks, resembling many grooveboxes intended for adults, and there are two FX processors and a range of effects. Sure, it looks like a toy and probably feels like a toy, but it’s not really a toy. To that end, the announcement video shows an adult going to town on the thing once the kids are asleep.
You can transform sounds and add effects via two bright purple levers on the side, which work just like typical mod wheels. You’ll be able to buy sound packs online and upload them via USB-C. There's even a MIDI port. It's a groovebox, though not as high-tech as something like the Teenage Engineering OP-1 Field, or the Roland MC-707.
Downloading sound packs and modulating effects may be a bit too complicated for the younger kids in your life, but the myTRACKS also includes hundreds of built-in melodies and drum loops to play around with. There’s also a randomization feature that the company says “instantly creates new songs for unlimited fun and inspiration.” These songs are likely to annoy you as you go about household chores, but it's better than a child staring at a tablet all day, right?
Now the bad news. The kid-centric groovebox is just a Kickstarter project, for now, with shipments eventually going out in November. However, this isn’t Playtime Engineering’s first rodeo with this type of gadget. The company has released numerous child-friendly synthesizers and music-making devices in its Blipblox line. There’s the original Blipblox synth and the more recent Blipblox After Dark. We praised both of these instruments for being appropriate for children, but still enjoyable for adults. The myTRACKS Kickstarter goes live on April 9 and pricing will range from $250 to $300 for backers.
This article originally appeared on Engadget at https://www.engadget.com/the-blipblox-mytracks-groovebox-is-a-complete-music-production-studio-for-kids-130046515.html?src=rss
Florida Governor Ron DeSantis just signed into law a bill named HB 3 that creates much stricter guidelines about how kids under 16 can use and access social media. To that end, the law completely bans children younger than 14 from participating in these platforms.
The bill requires parent or guardian consent for 14- and 15-year-olds to make an account or use a pre-existing account on a social media platform. Additionally, the companies behind these platforms must abide by requests to delete these accounts within five business days. Failing to do so could rack up major fines, as much as $10,000 for each violation. These penalties increase to $50,000 per instance if it is ruled that the company participated in a “knowing or reckless” violation of the law.
As previously mentioned, anyone under the age of 14 will no longer be able to create or use social media accounts in Florida. The platforms must delete pre-existing accounts and any associated personal information. The bill doesn’t name any specific social media platforms, but suggests that any service that promotes “infinite scrolling” will have to follow these new rules, as will those that feature reaction metrics, live-streaming and auto-play videos. Email platforms are exempt.
This isn’t just going to change the online habits of kids. There’s also a mandated age verification component, though that only kicks in if the website or app contains a “substantial portion of material” deemed harmful to users under 18. Under the language of this law, Floridians visiting a porn site, for instance, will have to verify their age via a proprietary platform on the site itself or use a third party system. News agencies are exempt from this part of the bill, even if they meet the materials threshold.
Obviously, that brings up some very real privacy concerns. Nobody wants to enter their private information to look at, ahem, adult content. There’s a provision that gives websites the option to route users to an “anonymous age verification” system, which is defined as a third party that isn’t allowed to retain identifying information. Once again, any platform that doesn’t abide by this restriction could be subject to a $50,000 civil penalty for each instance.
This follows DeSantis vetoing a similar bill earlier this month. That bill would have banned teens under 16 from using social media apps, with no option for parental consent.
NetChoice, a trade association that represents social media platforms, has come out against the law, calling it unconstitutional. The group says that HB 3 will essentially impose an “ID for the internet”, arguing that the age verification component will have to widen to adequately track whether or not children under 14 are signing up for social media apps. NetChoice says “this level of data collection will put Floridians’ privacy and security at risk.”
Paul Renner, the state’s Republican House Speaker, said at a press conference for the bill signing that a “child in their brain development doesn’t have the ability to know that they’re being sucked in to these addictive technologies, and to see the harm, and step away from it. And because of that, we have to step in for them.”
The new law goes into effect on January 1, but it could face some legal challenges. Renner said he expects social media companies to “sue the second after this is signed” and DeSantis acknowledged that the law will likely be challenged on First Amendment grounds, according to the Associated Press.
Florida isn’t the first state to try to separate kids from their screens. In Arkansas, a federal judge recently blocked enforcement of a law that required parental consent for minors to create new social media accounts. The same thing happened in California. A similar law passed in Utah, but was hit with a pair of lawsuits that forced state reps back to the drawing board. On the federal side of things, the Protecting Kids on Social Media Act would require parental consent for kids under 18 to use social media and, yeah, there’s that whole TikTok ban thing.
This article originally appeared on Engadget at https://www.engadget.com/ron-desantis-signs-bill-requiring-parental-consent-for-kids-to-join-social-media-platforms-in-florida-192116891.html?src=rss
There was once a time when you went to one place for music, another for education, and so on, but many companies are now attempting to turn themselves into a jack of all trades to compete for survival. The latest example is Spotify, which has announced a test for video-based learning courses. The new feature joins the platform's music, podcasts and audiobooks lineup.
Spotify has teamed up with a range of content partners: BBC Maestro, PLAYvirtuoso, Thinkific Labs Inc. and Skillshare. They offer content in four main categories: making music, getting creative, learning business and healthy living. "With this offer, we are exploring a potential opportunity to provide educational creators with a new audience who can access their video content, reaching a bigger potential swath of engaged Spotify users while expanding our catalog," Spotify stated in the announcement. The platform claims that around half of users have "engaged" in self-help or educational podcasts.
The test courses are available only to UK users, with free and premium subscribers receiving at least two free lessons per course. The series will range in price from £20 ($25) to £80 ($101), regardless of a person's subscription tier. Users can access them on mobile or desktop. Exact pricing and availability might change if the feature moves past the test phase.
This foray into video-based courses follows shortly after Spotify introduced music videos in beta. They're available on select tracks and, like the classes, aren't available to US subscribers (the UK is among the 11 countries with access).
This article originally appeared on Engadget at https://www.engadget.com/spotify-launches-educational-video-courses-in-the-uk-131559272.html?src=rss
Porn bots are more or less ingrained in the social media experience, despite platforms’ best efforts to stamp them out. We’ve grown accustomed to seeing them flooding the comments sections of memes and celebrities’ posts, and, if you have a public account, you’ve probably noticed them watching and liking your stories. But their behavior keeps changing ever so slightly to stay ahead of automated filters, and now things are starting to get weird.
While porn bots at one time mostly tried to lure people in with suggestive or even overtly raunchy hook lines (like the ever-popular, “DON'T LOOK at my STORY, if you don't want to MASTURBATE!”), the approach these days is a little more abstract. It’s become common to see bot accounts posting a single, inoffensive, completely-irrelevant-to-the-subject word, sometimes accompanied by an emoji or two. On one post I stumbled across recently, five separate spam accounts all using the same profile picture — a closeup of a person in a red thong spreading their asscheeks — commented, “Pristine 🌿,” “Music 🎶,” “Sapphire 💙,” “Serenity 😌” and “Faith 🙏.”
Another bot — its profile picture a headless frontal shot of someone’s lingerie-clad body — commented on the same meme post, “Michigan 🌟.” Once you’ve noticed them, it’s hard not to start keeping a mental log of the most ridiculous instances. “🦄agriculture” one bot wrote. On another post: “terror 🌟” and “😍🙈insect.” The bizarre one-word comments are everywhere; the porn bots, it seems, have completely lost it.
Really, what we’re seeing is the emergence of another avoidance maneuver scammers use to help their bots slip by Meta’s detection technology. That, and they might be getting a little lazy.
“They just want to get into the conversation, so having to craft a coherent sentence probably doesn't make sense for them,” Satnam Narang, a research engineer for the cybersecurity company Tenable, told Engadget. Once scammers get their bots into the mix, they can have other bots pile likes onto those comments to further elevate them, explains Narang, who has been investigating social media scams since the MySpace days.
Using random words helps scammers fly under the radar of moderators who may be looking for particular keywords. In the past, they’ve tried methods like putting spaces or special characters between every letter of words that might be flagged by the system. “You can't necessarily ban an account or take an account down if they just comment the word ‘insect’ or ‘terror,’ because it's very benign,” Narang said. “But if they're like, ‘Check my story,’ or something… that might flag their systems. It’s an evasion technique and clearly it's working if you're seeing them on these big name accounts. It's just a part of that dance.”
That dance is one social media platforms and bots have been doing for years, seemingly to no end. Meta has said it stops millions of fake accounts from being created on a daily basis across its suite of apps, and catches “millions more, often within minutes after creation.” Yet spam accounts are still prevalent enough to show up in droves on high traffic posts and slip into the story views of even users with small followings.
The company’s most recent transparency report, which includes stats on fake accounts it’s removed, shows Facebook nixed over a billion fake accounts last year alone, but currently offers no data for Instagram. “Spammers use every platform available to them to deceive and manipulate people across the internet and constantly adapt their tactics to evade enforcement,” a Meta spokesperson said. “That is why we invest heavily in our enforcement and review teams, and have specialized detection tools to identify spam.”
Last December, Instagram rolled out a slew of tools aimed at giving users more visibility into how it’s handling spam bots and giving content creators more control over their interactions with these profiles. Account holders can now, for example, bulk-delete follow requests from profiles flagged as potential spam. Instagram users may also have noticed the more frequent appearance of the “hidden comments” section at the bottom of some posts, where comments flagged as offensive or spam can be relegated to minimize encounters with them.
“It's a game of whack-a-mole,” said Narang, and scammers are winning. “You think you've got it, but then it just pops up somewhere else.” Scammers, he says, are very adept at figuring out why they got banned and finding new ways to skirt detection accordingly.
One might assume social media users today would be too savvy to fall for obviously bot-written comments like “Michigan 🌟,” but according to Narang, scammers’ success doesn’t necessarily rely on tricking hapless victims into handing over their money. They’re often participating in affiliate programs, and all they need is to get people to visit a website — usually branded as an “adult dating service” or the like — and sign up for free. The bots’ “link in bio” typically directs to an intermediary site hosting a handful of URLs that may promise XXX chats or photos and lead to the service in question.
Scammers can get a small amount of money, say a dollar or so, for every real user who makes an account. On the off chance that someone signs up with a credit card, the kickback would be much higher. “Even if one percent of [the target demographic] signs up, you're making some money,” Narang said. “And if you're running multiple, different accounts and you have different profiles pushing these links out, you're probably making a decent chunk of change.” Instagram scammers are likely to have spam bots on TikTok, X and other sites too, Narang said. “It all adds up.”
The harms from spam bots go beyond whatever headaches they may ultimately cause the few who have been duped into signing up for a sketchy service. Porn bots primarily use real people’s photos that they’ve stolen from public profiles, which can be embarrassing once the spam account starts friend requesting everyone the depicted person knows (speaking from personal experience here). The process of getting Meta to remove these cloned accounts can be a draining effort.
Their presence also adds to the challenges that real content creators in the sex and sex-related industries face on social media, which many rely on as an avenue to connect with wider audiences but must constantly fight with to keep from being deplatformed. Imposter Instagram accounts can rack up thousands of followers, funneling potential visitors away from the real accounts and casting doubt on their legitimacy. And real accounts sometimes get flagged as spam in Meta’s hunt for bots, putting those with racy content even more at risk of account suspension and bans.
Unfortunately, the bot problem isn’t one that has any easy solution. “They're just continuously finding new ways around [moderation], coming up with new schemes,” Narang said. Scammers will always follow the money and, to that end, the crowd. While porn bots on Instagram have evolved to the point of posting nonsense to avoid moderators, more sophisticated bots chasing a younger demographic on TikTok are posting somewhat believable commentary on Taylor Swift videos, Narang says.
The next big thing in social media will inevitably emerge sooner or later, and they’ll go there too. “As long as there's money to be made,” Narang said, “there's going to be incentives for these scammers.”
This article originally appeared on Engadget at https://www.engadget.com/instagram-porn-bots-latest-tactic-is-ridiculously-low-effort-but-its-working-181130528.html?src=rss
Mitchell has made no comment about her music returning to Spotify. Back in 2022, Mitchell wrote in a statement that “irresponsible people are spreading lies that are costing people their lives. I stand in solidarity with Neil Young and the global scientific and medical communities on this issue” of the COVID vaccine, as published by Pitchfork.
Young returned to Spotify on the grounds that Joe Rogan’s podcast is no longer exclusive to the platform, as it now appears on YouTube, Apple Podcasts and Amazon Music. "My decision comes as music services Apple and Amazon have started serving the same disinformation podcast features I had opposed at Spotify," he wrote in a blog post that may have been deleted since being published. The singer also noted that fans would have nowhere to go if he pulled his music from each of the above platforms.
Beyond the obvious reasons, Young and Mitchell had a personal stake in combating medical misinformation. Both musicians were victims of polio, a disease that was wiped out in North America thanks to vaccines.
Joni Mitchell has been experiencing something of a career resurgence in the past few years. She started playing live again in 2022, after an aneurysm in 2015 left her unable to perform. The singer even performed at this year’s Grammys. As for Rogan, he recently signed a new $250 million deal with Spotify to continue his various podcast ventures.
This article originally appeared on Engadget at https://www.engadget.com/joni-mitchell-joins-neil-young-and-returns-to-spotify-170655527.html?src=rss
Fans of Dear Esther, Amnesia: A Machine for Pigs and Everybody's Gone to the Rapture, make sure to mark June 18 on your calendar. On that day, you'll be able to buy a copy of Still Wakes the Deep for the PC (via Steam and Epic Games Store), the Xbox Series X|S and the PS5, though you can also play it with Game Pass for Xbox and PC. It's the latest first-person narrative horror game from The Chinese Room, the developer behind the aforementioned titles in the same genre.
Just a warning if the title itself isn't clear enough: Still Wakes the Deep probably isn't for you if you have thalassophobia. It's set in 1975 and puts you in the shoes of an offshore oil rig worker stationed in North Sea waters. A "terrifying, unrelenting foe" has come onboard, and you'll have to fight for your life while helping what remains of your crew survive in the midst of storms and freezing temperatures. "All lines of communication have been severed. All exits are gone," the game's description says, because horror stories are no fun if you can easily call for help. You won't have access to weapons, either — you'll have to use your wits and what you find in your environment to face the "unknowable horror" and escape the rig altogether.
This article originally appeared on Engadget at https://www.engadget.com/still-wakes-the-deep-will-pit-you-against-unknown-nautical-horrors-starting-on-june-18-121529077.html?src=rss
The Mozilla Foundation and dozens of other research and advocacy groups are pushing back on Meta’s decision to shut down its research tool, CrowdTangle, later this year. In an open letter, the group calls on Meta to keep CrowdTangle online until after the 2024 elections, saying that the shutdown will harm their ability to track election misinformation in a year when “approximately half the world’s population” is slated to vote.
The letter, published by the Mozilla Foundation and signed by 90 groups as well as the former CEO of CrowdTangle, comes one week after Meta confirmed it would shut down the tool in August 2024. “Meta’s decision will effectively prohibit the outside world, including election integrity experts, from seeing what’s happening on Facebook and Instagram — during the biggest election year on record,” the letter writers say.
“This means almost all outside efforts to identify and prevent political disinformation, incitements to violence, and online harassment of women and minorities will be silenced. It’s a direct threat to our ability to safeguard the integrity of elections.” The group asks Meta to keep CrowdTangle online until January 2025, and to “rapidly onboard” election researchers onto its latest tools.
CrowdTangle has long been a source of frustration for Meta. It allows researchers, journalists and other groups to track how content is spreading across Facebook and Instagram. It’s also often cited by journalists in unflattering stories about Facebook and Instagram. For example, Engadget relied on CrowdTangle in an investigation into why Facebook Gaming was overrun with spam and pirated content in 2022. CrowdTangle was also the source for “Facebook’s Top 10,” a (now defunct) Twitter bot that posted daily updates on the most-interacted-with Facebook posts containing links. The project, created by a New York Times reporter, regularly showed far-right and conservative pages over-performing, leading Facebook executives to argue the data wasn't an accurate representation of what was actually popular on the platform.
With CrowdTangle set to shut down, Meta is instead highlighting a new program called the Meta Content Library, which provides researchers with new tools to access publicly-accessible data in a streamlined way. The company has said it’s more powerful than what CrowdTangle enabled, but it’s also much more strictly controlled. Researchers from nonprofits and academic institutions must apply, and be approved, in order to access it. And since the vast majority of newsrooms are for-profit entities, most journalists will be automatically ineligible for access (it’s not clear if Meta would allow reporters at nonprofit newsrooms to use the Content Library.)
The other issue, according to Brandon Silverman, CrowdTangle’s former CEO who left Meta in 2021, is that the Meta Content Library isn’t currently powerful enough to be a full CrowdTangle replacement. “There are some areas where the MCL has way more data than CrowdTangle ever had, including reach and comments in particular,” Silverman wrote in a post on Substack last week. “But there are also some huge gaps in the tool, both for academics and civil society, and simply arguing that it has more data isn’t a claim that regulators or the press should take seriously.”
In a statement on X, Meta spokesperson Andy Stone said that “academic and nonprofit institutions pursuing scientific or public interest research can apply for access” to the Meta Content Library, including nonprofit election experts. “The Meta Content Library is designed to contain more comprehensive data than CrowdTangle.”
This article originally appeared on Engadget at https://www.engadget.com/researchers-ask-meta-to-keep-crowdtangle-online-until-after-2024-elections-211527731.html?src=rss
Threads has begun testing swipe gestures to help users improve the algorithm that populates the For You feed. It’s reportedly called Algo Tune as, well, it helps people tune their algorithms. It’s pretty rare for any social media site, particularly one run by Meta, to let users adjust the parameters by which the great and powerful algorithm operates, so this feature is definitely worth keeping an eye on.
It works a lot like Tinder and other dating apps. If you don’t like something on your feed, you swipe left. If you like a post and want to see more like it, you swipe right. That’s pretty much it. The algorithm is allegedly tuned over time by these responses, adjusting your feed to provide more of the content you want and less of the stuff you don’t want. Meta CEO Mark Zuckerberg calls it an “easy way to let us know what you want to see more of on your feed.”
This is just an experiment, for now, so the feature’s only rolling out to a select number of Threads users. The company also hasn’t released any specific information as to how all of the swiping actually influences the algorithm, but that’s par for the course when it comes to these things. The algo must remain protected at all costs.
This article originally appeared on Engadget at https://www.engadget.com/threads-begins-testing-swipe-gestures-to-help-train-the-for-you-algorithm-175004586.html?src=rss
It's been almost three years since we found out that former Naughty Dog and Visceral Games writer and creative director Amy Hennig was working on a Marvel game with her team at Skydance New Media. During Epic Games' State of Unreal showcase at the Game Developers Conference, a new story trailer shed some more light on the game, which is called Marvel 1943: Rise of Hydra.
As the name suggests, it's set during World War II in Occupied Paris. You'll play as four characters in this story-driven action-adventure: a young Steve Rogers (better known as Captain America), T'Challa's grandfather Azzuri (the Black Panther of his era), US soldier and Howling Commandos member Gabriel Jones and Wakandan spy Nanali.
The trailer shows Captain America taking out some foes (presumably Nazis) with his shield as he looks for Black Panther, who we see scampering over rooftops. It ends with the pair clashing on a bridge, but what are the odds that they (along with Gabriel and Nanali) form a shaky alliance to battle a common enemy?
Skydance New Media is using Unreal Engine 5.4 to build the game. The trailer has some striking visuals, including highly detailed facial animations and environments, which are seemingly reflective of what the game actually looks like. "All the sequences you just saw in that trailer are pulled right out of our game, running real-time in Unreal Engine 5," Hennig said. "No smoke and mirrors." We'll have to wait a little longer — until Marvel 1943: Rise of Hydra arrives in 2025 — to see if Hennig's claims stand up.
This article originally appeared on Engadget at https://www.engadget.com/marvel-1943-rise-of-hydra-from-amy-hennigs-studio-arrives-in-2025-173514552.html?src=rss
Getty has flagged another photo taken by the Princess of Wales as digitally altered. The image, released back in 2022, features Queen Elizabeth II surrounded by her grandchildren and great-grandchildren. "Getty Images is undertaking a review of handout images and in accordance with its editorial policy is placing an editor's note on images where the source has suggested they could be digitally enhanced," a spokesperson told CNN. This comes on the heels of a recent controversy, where a photo of Kate Middleton was revealed to be doctored.
The publication found 19 alterations in the photo that most people likely wouldn't notice unless they zoom in very closely and examine every pattern. It found a few misalignments in the subjects' clothing, random floating artifacts, cloned hair strands and heads that looked like they were pasted in from another photo due to the difference in lighting. Kate, or whoever edited the picture, might have simply been looking to create the best version of it possible, but agencies like Getty only allow minimal editing for the photos in their library to avoid spreading misinformation.
Today would have been Her Late Majesty Queen Elizabeth’s 97th birthday.
This photograph - showing her with some of her grandchildren and great grandchildren - was taken at Balmoral last summer.
— The Prince and Princess of Wales (@KensingtonRoyal) April 21, 2023
The princess' absence from public events since Christmas last year has, as you might have expected, spawned all kinds of conspiracy theories. It even gave rise to a whole Wikipedia article entitled "Where is Kate?" because people around the world are apparently that invested in the British monarchy and can't quite believe that she'd undergone abdominal surgery.
In the midst of it all, William and Kate's social media accounts posted the aforementioned doctored photo of the Princess of Wales with her children on Mother's Day in the UK. But when the Associated Press and other news agencies pulled the photo because they found that it had been edited, those conspiracy theories became even more outlandish. The wildest claim we've heard so far is that the video of her out shopping with the Prince of Wales wasn't her at all but a body double. Or a clone, apparently, because that's the way it goes on the internet.
This article originally appeared on Engadget at https://www.engadget.com/getty-flags-another-british-royal-family-photo-for-being-digitally-altered-121856385.html?src=rss