Phony AI Biden robocalls reached up to 25,000 voters, says New Hampshire AG

Two companies based in Texas have been linked to a spate of robocalls that used artificial intelligence to mimic President Joe Biden. The audio deepfake was used to urge New Hampshire voters not to participate in the state's presidential primary. New Hampshire Attorney General John Formella said as many as 25,000 of the calls were made to residents of the state in January.

Formella said an investigation has linked the source of the robocalls to Texas-based companies Life Corporation and Lingo Telecom. No charges have yet been filed against either company or Life Corporation's owner, Walter Monk. The probe is ongoing, other entities are believed to be involved and federal law enforcement officials are said to be looking into the case as well.

“We have issued a cease-and-desist letter to Life Corporation that orders the company to immediately desist violating New Hampshire election laws," Formella said at a press conference, according to CNN. "We have also opened a criminal investigation, and we are taking next steps in that investigation, sending document preservation notices and subpoenas to Life Corporation, Lingo Telecom and any other individual or entity."

The Federal Communications Commission also sent a cease-and-desist letter to Lingo Telecom. The agency said (PDF) it has warned both companies about robocalls in the past.

The deepfake was created using tools from AI voice cloning company ElevenLabs, which banned the user responsible. The company says it is "dedicated to preventing the misuse of audio AI tools and [that it takes] any incidents of misuse extremely seriously."

Meanwhile, the FCC is seeking to ban robocalls that use AI-generated voices. Under the Telephone Consumer Protection Act, the agency is responsible for making rules regarding robocalls. Commissioners are to vote on the issue in the coming weeks.

This article originally appeared on Engadget at https://www.engadget.com/phony-ai-biden-robocalls-reached-up-to-25000-voters-says-new-hampshire-ag-205253966.html?src=rss

Akai adds a 37-key standalone workstation to its MPC lineup

Akai just officially announced the MPC Key 37, a standalone workstation and groovebox. This is the latest standalone MPC device, following last year’s larger Key 61. The Key 37 has everything you need to make a beat or song from scratch without having to use an actual computer and DAW, with some limitations. 

There are 37 full-size keys, complete with aftertouch. There aren’t that many standalone devices out there with a full keybed, so this should excite musicians who lack experience with Akai-style pads. This device does have 16 velocity-sensitive pads for laying down drum parts and triggering samples, so it’s a “best of both worlds” type situation.

The Key 37 ships with 32GB of on-board storage, though 10GB is used up by the OS and included sound packs. Thankfully, there’s a slot for an SD card to expand the storage — these standalone devices fill up fast.

You get the same color 7-inch multi-touch display and four assignable Q-Link knobs as the company’s Key 61 workstation. This is great for making system adjustments and for controlling effects plugins and the like. As a matter of fact, the entire layout recalls the Key 61, though this new release is slightly less powerful.

The Key 37 features 2GB of RAM, compared to 4GB in the Key 61, which will limit the number of tracks that can play simultaneously without any hiccups. It also lacks the Key 61's two microphone inputs and associated preamps. There are, however, stereo 1/4-inch inputs and outputs, USB MIDI, 5-pin MIDI in/out, four TRS CV/Gate output jacks and a USB host port. This keyboard also boasts Bluetooth and WiFi connectivity for wireless syncing with tools like Ableton Link.

Beyond the iconic 16-pad layout, the highlight of any MPC machine is the software. To that end, the Key 37 ships with Akai’s MPC2 desktop software and its standalone suite. You get eight instrument plugins out of the box and a voucher for a premium plugin from the company’s ever-growing collection. Akai’s cool stem separation software is part of the package too, though it isn’t available on the Key 37 just yet.

Akai’s latest and greatest may not be as full-featured as 2022’s Key 61, but it’s around half the price. The Key 37 costs $900 and is available to order right now via parent company inMusic and authorized retailers.

This article originally appeared on Engadget at https://www.engadget.com/akai-adds-a-37-key-standalone-workstation-to-its-mpc-lineup-191246047.html?src=rss

How security experts unravel ransomware

Hackers use ransomware to go after every industry, charging as much money as they can to return access to a victim's files. It’s a lucrative business to be in. In the first six months of 2023, ransomware gangs bilked $449 million from their targets, even though most governments advise against paying ransoms. Increasingly, security professionals are coming together with law enforcement to provide free decryption tools — freeing locked files and eliminating the temptation for victims to pony up.

There are a few main ways that ransomware decryptors come up with tools: reverse engineering the malware for mistakes, working with law enforcement and gathering publicly available encryption keys. The length of the process varies depending on how complex the code is, but it usually requires information on the encrypted files, unencrypted versions of the files and server information from the hacking group. “Just having the output encrypted file is usually useless. You need the sample itself, the executable file,” said Jakub Kroustek, malware research director at antivirus business Avast. It’s not easy, but it does pay dividends for the impacted victims when it works.

First, we have to understand how encryption works. For a very basic example, let's say a piece of data started out as a readable sentence but appears as "J qsfgfs dbut up epht" once it's been encrypted. If we know that one of the unencrypted words in "J qsfgfs dbut up epht" is supposed to be "cats," we can start to determine what pattern was applied to the original text to get the encrypted result. In this case, it's just the standard English alphabet with each letter moved forward one place: A becomes B, B becomes C, and "I prefer cats to dogs" becomes the string of nonsense above. It’s much more complex for the sorts of encryption used by ransomware gangs, but the principle remains the same. The pattern of encryption is also known as the "key," and by deducing the key, researchers can create a tool that can decrypt the files.
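
That known-plaintext idea can be shown in a few lines of Python. The sketch below recovers the shift for the toy cipher from the example above and decrypts the full sentence; it's purely illustrative, since real ransomware uses vastly stronger schemes than a one-letter alphabet shift.

```python
# Toy known-plaintext attack on a simple shift (Caesar) cipher.
# Real ransomware uses far stronger encryption; this only illustrates how a
# known plaintext/ciphertext pair can reveal the key.

def shift_decrypt(ciphertext: str, shift: int) -> str:
    """Shift every letter back by `shift` positions, leaving other characters alone."""
    out = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

def recover_shift(known_plain: str, known_cipher: str) -> int:
    """Deduce the shift from a single known plaintext/ciphertext word pair."""
    return (ord(known_cipher[0].lower()) - ord(known_plain[0].lower())) % 26

ciphertext = "J qsfgfs dbut up epht"
shift = recover_shift("cats", "dbut")    # we know "dbut" should read "cats"
print(shift)                             # 1
print(shift_decrypt(ciphertext, shift))  # I prefer cats to dogs
```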

Some forms of encryption, like the Advanced Encryption Standard (AES) with 128-, 192- or 256-bit keys, are virtually unbreakable. At its most advanced level, bits of unencrypted "plaintext" data, divided into chunks called "blocks," are put through 14 rounds of transformation and then output in their encrypted — or "ciphertext" — form. “We don’t have the quantum computing technology yet that can break encryption technology,” said Jon Clay, vice president of threat intelligence at security software company Trend Micro. But luckily for victims, hackers don’t always use strong methods like AES to encrypt files.
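
For a sense of what properly used AES looks like in practice, here's a minimal Python sketch built on the third-party cryptography package (my choice of library, not one named in the article). Without the 256-bit key, the ciphertext below is effectively unrecoverable by brute force.

```python
# Minimal AES-256-GCM example using the third-party `cryptography` package.
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key -> 14 internal AES rounds
nonce = os.urandom(12)                     # standard 96-bit GCM nonce
aesgcm = AESGCM(key)

plaintext = b"I prefer cats to dogs"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only someone holding `key` can reverse this; guessing a 256-bit key by
# brute force is computationally infeasible.
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'I prefer cats to dogs'
```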

While some cryptographic schemes are virtually uncrackable, cryptography is a difficult science to perfect, and inexperienced hackers will likely make mistakes. If the hackers don’t apply a standard scheme like AES and instead opt to build their own, researchers can dig around for errors. Why would they do this? Mostly ego. “They want to do something themselves because they like it or they think it's better for speed purposes,” Jornt van der Wiel, a cybersecurity researcher at Kaspersky, said.

For example, here’s how Kaspersky decrypted the Yanluowang ransomware strain. It was a targeted strain aimed at specific companies, with an unknown list of victims. Yanluowang used the Sosemanuk stream cipher to encrypt data: a free-to-use cipher that encrypts the plaintext file a small piece at a time. It then encrypted the Sosemanuk key with RSA, a public-key encryption standard. But there was a flaw in the pattern. The researchers were able to compare the plaintext to the encrypted version, as explained above, and reverse engineer a decryption tool that's now available for free. In fact, there are tons of strains that have already been cracked by the No More Ransom project.
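
The hybrid pattern described here, a fast symmetric cipher for the files with the file key itself locked up under the attacker's public key, is common across ransomware families. The sketch below illustrates only that structure: it uses ChaCha20 as a stand-in stream cipher because Sosemanuk isn't available in mainstream Python libraries, and it is not Yanluowang's actual implementation.

```python
# Illustration of the hybrid scheme: a stream cipher encrypts the file, then
# RSA encrypts (wraps) the stream-cipher key. ChaCha20 stands in for Sosemanuk
# purely for demonstration; this is not Yanluowang's real code.
# pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

# 1) Encrypt the file contents with a symmetric stream cipher.
file_key = os.urandom(32)
nonce = os.urandom(16)
encryptor = Cipher(algorithms.ChaCha20(file_key, nonce), mode=None).encryptor()
ciphertext = encryptor.update(b"quarterly-report.xlsx contents ...")

# 2) Wrap the stream-cipher key with the attacker's RSA public key.
rsa_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped_key = rsa_private.public_key().encrypt(
    file_key,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

# A victim is left with `ciphertext` and `wrapped_key`; without the attacker's
# RSA private key, or a flaw in the scheme, the file key can't be recovered.
```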

Ransomware decryptors will use their knowledge of software engineering and cryptography to get the ransomware key and, from there, create a decryption tool, according to Kroustek. More advanced cryptographic processes may require either brute forcing or making educated guesses based on the information available. Sometimes hackers use a pseudo-random number generator to create the key. A true RNG is genuinely random, which means it can't easily be predicted. A pseudo-RNG, as explained by van der Wiel, may rely on an existing pattern in order to appear random when it's actually not; the pattern might be based on the time it was created, for example. If researchers know a portion of that, they can try different time values until they deduce the key.
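
Here's a rough sketch of that last trick. The key-derivation routine below is invented for illustration, but it shows the idea: if the key came from a pseudo-RNG seeded with a timestamp, a researcher who can narrow down when the files were encrypted can simply replay candidate seeds until one works.

```python
# Brute-forcing a time-seeded pseudo-RNG key. The keygen scheme here is made
# up for demonstration; real ransomware key generation varies widely.
import random

def keygen(seed: int) -> bytes:
    """Hypothetical attacker key derivation: 32 bytes from a seeded PRNG."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(32))

# The attacker derived the key from the Unix timestamp at infection time.
attack_time = 1_700_000_000          # unknown to the researcher
real_key = keygen(attack_time)

# File metadata and logs put the encryption within an hour of a known moment,
# so the researcher replays every candidate second in that window.
window_start = attack_time - 3600
for candidate in range(window_start, window_start + 7200):
    if keygen(candidate) == real_key:   # in practice: try decrypting a known file
        print("recovered seed:", candidate)
        break
```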

But getting that key often relies on working with law enforcement to get more information about how the hacking groups work. If researchers are able to get the hacker’s IP address, they can request the local police to seize servers and get a memory dump of their contents. Or, if hackers have used a proxy server to obscure their location, police might use traffic analyzers like NetFlow to determine where the traffic goes and get the information from there, according to van der Wiel. The Budapest Convention on Cybercrime makes this possible across international borders because it lets police request an image of a server in another country urgently while they wait for the official request to go through.

The server provides information on the hacker’s activities, like who they might be targeting or their process for extorting a ransom. This can tell ransomware decryptors the process the hackers went through in order to encrypt the data, details about the encryption key or access to files that can help them reverse engineer the process. The researchers comb through the server logs for details in the same way you may help your friend dig up details on their Tinder date to make sure they’re legit, looking for clues or details about malicious patterns that can help suss out true intentions. Researchers may, for example, discover part of the plaintext file to compare to the encrypted file to begin the process of reverse engineering the key, or maybe they’ll find parts of the pseudo-RNG that can begin to explain the encryption pattern.

Working with law enforcement helped Cisco Talos create a decryption tool for the Babuk Tortilla ransomware. This version of ransomware targeted healthcare, manufacturing and national infrastructure, encrypting victims' devices and deleting valuable backups. Avast had already created a generic Babuk decryptor, but the Tortilla strain proved difficult to crack. The Dutch Police and Cisco Talos worked together to apprehend the person behind the strain, and gained access to the Tortilla decryptor in the process.

But often the easiest way to come up with these decryption tools stems from the ransomware gangs themselves. Maybe they’re retiring, or just feeling generous, but attackers will sometimes publicly release their encryption key. Security experts can then use the key to make a decryption tool and release that for victims to use going forward.

Generally, experts can’t share a lot about the process without giving ransomware gangs a leg up. If they divulge common mistakes, hackers can use that to easily improve their next ransomware attempts. If researchers tell us what encrypted files they’re working on now, gangs will know they’re on to them. But the best way to avoid paying is to be proactive. “If you’ve done a good job of backing up your data, you have a much higher opportunity to not have to pay,” said Clay.

This article originally appeared on Engadget at https://www.engadget.com/how-security-experts-unravel-ransomware-184531451.html?src=rss

Amazon Fire tablets are up to 35 percent off right now

This may be a good time to buy one of Amazon's latest tablets as many of them are on sale, with discounts of up to 35 percent. The sale brings the 2023 Fire HD 10 down to $95, which is only $15 more than its record low and 32 percent off the $140 list price. This model comes with 3GB of RAM, 32GB of storage and a speedier processor than the last time around. The 1080p HD screen is touch- and stylus-compatible and there's a 5 megapixel camera up front and another in back. Note that this model displays ads on the lockscreen. If you'd rather not see those promos, the ad-free version is also on sale and currently $15 more at $110. 

All Fire tablets are budget slates that let you browse the web, watch shows and play casual games; they probably aren't the best pick if you're looking for a workhorse productivity tablet, which tends to cost significantly more. You won't be able to run Apple apps, which seems obvious, but Fire tablets also don't natively support the Google Play store — even though Fire OS is a fork of Android. Readily available apps come from the Amazon app store, which includes most major streamers like Netflix, Max and Peacock, social apps like TikTok and Instagram, and plenty of casual games. If you're just looking for a way to entertain yourself after a day of being productive, Fire tablets offer one of the few ways to do so for under $100. And like all Amazon devices, Alexa is built in to answer questions and control your smart home lights, cameras and doorbells.

Elsewhere in the sale, the Fire HD 8 is down to $65, which is 35 percent off and around $10 more than its record low. This is an 8-inch version of Amazon's tablet, with 2GB of RAM, 32GB of storage and a 1280 x 800 screen at 189 ppi. There's a 2MP front camera and a claimed battery life of 13 hours. This is also a model with lockscreen promos; the ad-free version is $80.


This article originally appeared on Engadget at https://www.engadget.com/amazon-fire-tablets-are-up-to-35-percent-off-right-now-181542666.html?src=rss

GoPro rolls out a Mac editing app and a high-end Premium+ subscription tier

GoPro is going back to desktops with a new editing app for Mac. While the company has long offered GoPro Studio and Player + ReelSteady desktop apps, much of its attention has been on mobile since it bought Replay and Splice in 2016. It rebranded the former as Quik.

The latest desktop program is based on Quik and it ties into the GoPro mobile apps. You'll be able to start editing in the Quik mobile app and finish up on your Mac — or vice-versa. Features include a beat sync tool that matches your edit to the rhythm of the backing track. There's an auto-highlight editing function too. Although the Mac editing suite could certainly use more features, GoPro says all the key tools from the Quik mobile app will make their way to desktop by the time a Windows version arrives later this year.

GoPro charges those who don't use its devices $10 per year to use the Quik mobile app. Subscribers to its other tiers will get access to the desktop app at no extra cost. On that note, the company is rebranding its GoPro Subscription to GoPro Premium. It still costs $50 per year (though newcomers get a 50 percent discount for the first year) and it includes perks such as unlimited cloud backups, livestreaming, discounts on equipment and guaranteed camera replacements.

The company is adding a higher subscription tier as well, GoPro Premium+. It includes all of the perks of Premium, along with HyperSmooth Pro video stabilization and up to 500GB of cloud storage for footage captured with non-GoPro cameras (compared with 25GB for Premium). Premium+ costs $100 per year, and Premium users can upgrade for $50.

Update 2/6 1:07PM ET: Clarifying that GoPro bought Replay and rebranded it as Quik.

This article originally appeared on Engadget at https://www.engadget.com/gopro-rolls-out-a-mac-editing-app-and-a-high-end-premium-subscription-tier-173838600.html?src=rss

The latest Amazon Echo Show 8 returns to an all-time low of $90

If you're already onboard with Alexa and have decided you want a smart display, a new deal on Amazon's latest Echo Show 8 may be of interest. The 8-inch display is currently down to $90 at Amazon, Target, Best Buy and other retailers, which matches the lowest price we've seen since the device was unveiled last September. Amazon normally sells the smart display for $150, though we saw it drop to $105 for much of the holiday season. Amazon's offer also includes a Sengled color smart bulb for no extra cost. That bulb is compatible with the Matter smart home standard, and we recommend a similar model in our guide to the best smart lights.

We haven't formally reviewed the latest Echo Show 8, but it's largely similar to the second-gen model from 2021, which we previously called the best smart display for Alexa users. It still offers a 1,280 x 800 resolution panel and a 13-megapixel front-facing camera. The design is mostly the same, though the new model's rounded back is a little less pronounced, and the glass on its front stretches edge-to-edge. Its front camera is also located in the center of the top edge, not off to the right, so it's a bit more convenient for framing yourself during video calls. Internally, there's an upgraded octa-core processor that should make it faster to complete Alexa requests, and the new model can work with other smart home devices using the Zigbee and Thread protocols in addition to Matter. Amazon promises improved sound quality, too, though you still shouldn't expect deep sub-bass or ultra-spaciousness with a smallish speaker like this.

All of this should keep the Echo Show 8 as the sweet spot in Amazon's smart display lineup. It's not as affordable as the Echo Show 5, but it's faster and louder, with a superior camera and more spacious display for showing photos and making video calls. It's not as big as the Echo Show 10, but it's significantly less expensive and easier to fit in more rooms around the house. Either way, you can use it to check the weather, pull up recipes or stream music, among other typical Alexa tasks. And while no smart display like this will truly be comfortable for those protective of their privacy, the Echo Show 8 at least has a camera cover and mic mute button built in. Google's Nest Hub remains a better buy for those who heavily use services like Gmail, Google Calendar and YouTube — and there are still questions regarding Alexa's long-term outlook — but this should be a solid deal if you're looking to build a smart home through Amazon's assistant. 


This article originally appeared on Engadget at https://www.engadget.com/the-latest-amazon-echo-show-8-returns-to-an-all-time-low-of-90-165936914.html?src=rss

US Secretary of Transportation states the obvious: Don’t use the Apple Vision Pro while driving

Ever since the Vision Pro went on sale last week, Apple's pricey AR/VR headset has been spotted in all sorts of unusual places: from the gym to airplanes and everywhere in between. However, after one owner was seen wearing it while driving down the highway in a Tesla Cybertruck, US Secretary of Transportation Pete Buttigieg issued a warning reminding people to use some common sense. 

In a post on X alongside a snippet from the original video, Buttigieg reiterated that "ALL advanced driver assistance systems available today require the human driver to be in control and fully engaged in the driving task at all times." Similarly, Apple's headset ships with multiple warnings advising users not to use it while "operating a moving vehicle" or in "any other situations requiring attention to safety." 

Following Secretary Buttigieg's response, the creator of the video, Dante Lentini, told Gizmodo that the footage was a "skit" made with friends and that the headset was only worn for 30 to 40 seconds while driving. Additionally, Lentini says footage suggesting that he got arrested for his prank was staged. But what makes the video even more irresponsible is that while the Cybertruck comes with Tesla's Autopilot system as standard, that feature has yet to be activated for the first wave of Founder's Edition vehicles. That means Lentini was going down the highway while wearing a headset without the help of any advanced driver-assistance systems.

In some respects, it's a bit sad that Buttigieg's warning even needs to be said. However, given the massive amounts of hype and pre-orders nearing 200,000 units, it was probably only a matter of time until someone got caught driving while wearing Apple's pricey headset. 

This article originally appeared on Engadget at https://www.engadget.com/us-secretary-of-transportation-states-the-obvious-dont-use-the-apple-vision-pro-while-driving-163908086.html?src=rss

Add Taylor Swift to the list of famous people who don’t like their private jets being tracked

It looks like Elon Musk isn’t the only billionaire who doesn’t like having their private jet tracked. Pop star Taylor Swift has threatened legal action against a Florida student who set up multiple social media accounts that release real-time information as to the whereabouts of her personal aircraft, according to The Washington Post.

This is eerily reminiscent of the whole ElonJet scandal of late 2022, in which Twitter banned an account that was tracking Musk’s jet. As a matter of fact, the student facing legal action by Swift’s team is the same guy who ran that account. Jack Sweeney, 21, runs various social media pages that log the takeoffs and landings of aircraft owned by billionaires, politicians and, of course, pop stars.

Back in December, Swift’s attorneys wrote Sweeney a cease-and-desist letter that said the pop star would “have no choice but to pursue any and all legal remedies” if he did not stop publishing details as to her jet’s whereabouts, likening it to “stalking and harassing behavior.”

The letter went on to say that Sweeney’s actions had caused Swift and her family “direct and irreparable harm, as well as emotional and physical distress,” and had heightened her “constant state of fear for her personal safety.” It’s worth noting that Swift has had numerous stalkers and harassers throughout her career. Just last month, a man was arrested for stalking her at home on several occasions.

“While this may be a game to you, or an avenue that you hope will earn you wealth or fame, it is a life-or-death matter for our client,” the legal team wrote. The letter added that there is “no legitimate interest in or public need for this information, other than to stalk, harass, and exert dominion and control.”

Tree Paine, a spokesperson for Swift, drew a direct line from Sweeney’s social media accounts to Swift’s harassers, saying that the pop star’s team couldn’t “comment on any ongoing police investigation but can confirm the timing of stalkers suggests a connection.”

Sweeney told The Washington Post that this is just an attempt to scare him away from sharing public data, noting that all of his jet-tracking accounts draw location information from the Federal Aviation Administration and volunteer hobbyists. Aircraft regularly broadcast their locations via transponders so air traffic controllers can see what’s going on. Anyone on the ground can pick up these signals using a device called an ADS-B receiver, which is widely available online. “This information is already out there,” Sweeney said. “Her team thinks they can control the world.”
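
Decoding those broadcasts doesn't take much more than a cheap receiver and open-source software. The sketch below assumes the pyModeS library; the sample frame is one of the examples from that project's documentation, and the reference coordinates are just an assumed rough receiver location used to resolve the encoded position.

```python
# Decoding a raw ADS-B (Mode S extended squitter) frame with pyModeS.
# pip install pyModeS
# The sample frame comes from pyModeS' documentation examples; the reference
# coordinates below are an assumed rough receiver location.
import pyModeS as pms

msg = "8D40621D58C382D690C8AC2863A7"   # example airborne-position frame

print("ICAO address:", pms.adsb.icao(msg))      # identifies the airframe
print("Type code:", pms.adsb.typecode(msg))     # 9-18 means airborne position
print("Altitude (ft):", pms.adsb.altitude(msg))

# A single frame carries a CPR-encoded position; with a rough reference
# location it can be resolved to latitude and longitude.
print("Position:", pms.adsb.position_with_ref(msg, 49.0, 6.0))
```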

Swift’s team wrote that Sweeney is “notorious for disregarding the personal safety of others in exchange for public attention and/or requests for financial gain,” citing an incident in which he asked Elon Musk for $50,000 to take down the ElonJet account.

Facebook and Instagram banned Sweeney’s accounts that track Swift’s air travel late last year, but they’re still live on Bluesky, Mastodon, Telegram and other social media sites. His live-tracking accounts have been banned on X, but he’s allowed to post location data with a 24-hour delay. In addition to the world’s biggest pop star, Sweeney also tracks people like Donald Trump, Jeff Bezos, Kim Kardashian and Mark Zuckerberg.

It doesn’t look like Sweeney's planning to stop tracking the pop star's jet anytime soon. He's lawyered up to defend himself from legal action.

This article originally appeared on Engadget at https://www.engadget.com/add-taylor-swift-to-the-list-of-famous-people-who-dont-like-their-private-jets-being-tracked-163326648.html?src=rss

Meta plans to ramp up labeling of AI-generated images across its platforms

Meta plans to ramp up its labeling of AI-generated images across Facebook, Instagram and Threads to help make it clear that the visuals are artificial. It's part of a broader push to tamp down misinformation and disinformation, which is particularly significant as we wrangle with the ramifications of generative AI (GAI) in a major election year in the US and other countries.

According to Meta's president of global affairs, Nick Clegg, the company has been working with partners from across the industry to develop standards that include signifiers that an image, video or audio clip has been generated using AI. "Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads," Clegg wrote in a Meta Newsroom post. "We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app." Clegg added that, as it expands these capabilities over the next year, Meta expects to learn more about "how people are creating and sharing AI content, what sort of transparency people find most valuable and how these technologies evolve." These will help inform both industry best practices and Meta's own policies, he wrote.

Meta says the tools it's working on will be able to detect invisible signals — namely AI-generated metadata that aligns with the C2PA and IPTC technical standards — at scale. As such, it expects to be able to pinpoint and label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, all of which are incorporating GAI metadata into images that their products whip up.
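
To give a concrete sense of one such signal: IPTC's photo metadata standard defines a "digital source type" value that generators can embed to mark an image as algorithmically created. The crude sketch below simply scans a file for that marker string; it's a simplification for illustration, not Meta's detector, and a stripped or re-encoded image would evade it.

```python
# Crude check for the IPTC "digital source type" marker that some image
# generators embed in their output's XMP/IPTC metadata. This is a simplified
# illustration of the kind of signal described above, not Meta's detector.
import sys

AI_MARKER = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's embedded metadata carries the IPTC
    'trained algorithmic media' digital source type value."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        verdict = "AI-generation marker found" if looks_ai_generated(image_path) else "no marker found"
        print(f"{image_path}: {verdict}")
```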

As for GAI video and audio, Clegg points out that companies in the space haven't started incorporating invisible signals into those at the same scale that they have images. As such, Meta isn't yet able to detect video and audio that's generated by third-party AI tools. In the meantime, Meta expects users to label such content themselves.

"While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it," Clegg wrote. "We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context."

That said, putting the onus on users to add disclosures and labels to AI-generated video and audio seems like a non-starter. Many of those people will be trying to intentionally deceive others. On top of that, others likely just won't bother or won't be aware of the GAI policies.

In addition, Meta is looking to make it harder for people to alter or remove invisible markers from GAI content. The company's FAIR AI research lab has developed tech that "integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models so the watermarking can’t be disabled," Clegg wrote. Meta is also working on ways to automatically detect AI-generated material that doesn't have invisible markers.

Meta plans to continue collaborating with industry partners and "remain in a dialogue with governments and civil society" as GAI becomes more prevalent. It believes this is the right approach to handling content that's shared on Facebook, Instagram and Threads for the time being, though it will adjust things if necessary.

One key issue with Meta's approach — at least while it works on ways to automatically detect GAI content that doesn't use the industry-standard invisible markers — is that it requires buy-in from partners. For instance, C2PA has a ledger-style method of authentication. For that to work, both the tools used to create images and the platforms on which they're hosted need to buy into C2PA.

Meta shared the update on its approach to labeling AI-generated content just a few days after CEO Mark Zuckerberg shed some more light on his company's plans to build general artificial intelligence. He noted that training data is one major advantage Meta has. The company estimates that the photos and videos shared on Facebook and Instagram amount to a dataset that's greater than the Common Crawl. That's a dataset of some 250 billion web pages that has been used to train other AI models. Meta will be able to tap into both, and it doesn't have to share the data it has vacuumed up through Facebook and Instagram with anyone else.

The pledge to more broadly label AI-generated content also comes just one day after Meta's Oversight Board determined that a video that was misleadingly edited to suggest that President Joe Biden repeatedly touched the chest of his granddaughter could stay on the company's platforms. In fact, Biden simply placed an "I voted" sticker on her shirt after she voted in person for the first time. The board determined that the video was permissible under Meta's rules on manipulated media, but it urged the company to update those community guidelines.

This article originally appeared on Engadget at https://www.engadget.com/meta-plans-to-ramp-up-labeling-of-ai-generated-images-across-its-platforms-160234038.html?src=rss

Bluesky is ditching its waitlist and is now open to everyone

Bluesky, the open-source Twitter alternative, is getting rid of its waitlist and opening its decentralized platform to everyone. The service, which opened in beta last spring, currently has a little over 3 million users, though that number could rise quickly now that prospective users don’t need an invitation to join.

It’s a significant moment for Bluesky, which began as an internal project at Jack Dorsey’s Twitter. (Bluesky ended its association with the entity now known as X after Elon Musk’s takeover, though Dorsey is on Bluesky’s board.) The company is part of a growing movement for decentralized social media, which proponents say could address many of the shortcomings of centrally controlled platforms like Facebook, X and TikTok.

“We really believe that the future of social is, and should be, open and decentralized,” Bluesky CEO Jay Graber tells Engadget. “This is something that we think is good for the public conversation overall.”

For those who missed Bluesky’s first hype cycle last spring, the service is functionally similar to X and Threads. Its posts — lovingly referred to by some early users as “skeets” — default to a chronological timeline, though users can also follow numerous other algorithmic feeds created by fellow users. Soon, the company will take a similar approach to content moderation, allowing third-parties to create their own “labeling services” for Bluesky content.

The service is still much smaller than most of its counterparts and doesn’t yet have a direct messaging feature. But it has become a haven for a number of once high-profile Twitter users and others looking for more Weird Twitter vibes and less Elon Musk.

Much like how Mastodon and other services in the fediverse are built on the ActivityPub protocol, Bluesky runs on its own open-source standard called AT Protocol. Right now, the only Bluesky is the version of the service created by Bluesky, the company. But that will soon change, as the company plans to start experimenting with federation, which will allow other developers and groups to create their own instances of Bluesky.

“The protocol is like an API that's permanently open,” Graber says. “And that means that developer creativity can kind of go wild.”
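
To give a flavor of that openness, here's a small sketch that reads a public Bluesky profile over the AT Protocol's XRPC interface with the requests library. The public API host and the app.bsky.actor.getProfile endpoint reflect my understanding of Bluesky's published lexicon rather than anything stated in this article, so treat them as assumptions worth checking against the protocol docs.

```python
# Fetching a public Bluesky profile over the AT Protocol's XRPC interface.
# pip install requests
# The host and endpoint names are assumptions based on Bluesky's published
# lexicon (app.bsky.actor.getProfile); verify against current documentation.
import requests

def get_profile(handle: str) -> dict:
    resp = requests.get(
        "https://public.api.bsky.app/xrpc/app.bsky.actor.getProfile",
        params={"actor": handle},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

profile = get_profile("bsky.app")
print(profile["handle"], "-", profile.get("followersCount"), "followers")
```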

Of course, the world of Twitter alternatives looks considerably different since Bluesky first launched. Meta’s Threads app has grown to 130 million users since last summer. Meta has also started to make some Threads posts available on Mastodon, the first step toward making it compatible with the rest of the fediverse.

But while Threads may be showing some support for open-source protocols, that’s not the same as decentralization, Graber argues. “If they integrate with ActivityPub, you would still be on a Facebook-owned app with this little window into a more open world, and it wouldn't be as easy to leave. We hope that the AT Protocol universe lets people get in between different apps, different services a lot easier.”

This article originally appeared on Engadget at https://www.engadget.com/bluesky-is-ditching-its-waitlist-and-opening-to-everyone-140026198.html?src=rss