US officials announce the takedown of an AI-powered Russian bot farm

US officials and their allies have identified and taken down an artificial intelligence-powered Russian bot farm comprising almost 1,000 accounts, which spread disinformation and pro-Russian sentiments on X. The Justice Department revealed that the software behind the scheme was created by a digital media department within RT, a Russian state-controlled media outlet. Its development was apparently led by RT's deputy editor-in-chief back in 2022 and was approved and funded by an officer at Russia's Federal Security Service, the main successor of the KGB. 

A cybersecurity advisory issued jointly by the FBI, intelligence officers from the Netherlands and cybersecurity authorities from Canada specifically mentioned a tool called "Meliorator," which can create "authentic appearing social media personas en masse," generate text messages as well as images and mirror disinformation from other bot personas. Authorities have seized two domains that the operation used to create email addresses that were necessary to sign up for accounts on X, formerly known as Twitter, which served as home to the bots. 

The Justice Department, however, is still in the midst of finding all 968 accounts used by the Russian actors to disseminate false information. X has shared information with authorities on all the accounts identified so far and has already suspended them. As The Washington Post has noted, the bots slipped through X's safeguards because they could copy and paste one-time passcodes from their email accounts to log in. The operation's use of US-based domain names violates the International Emergency Economic Powers Act, the Justice Department said, while paying for them violates federal money laundering laws in the US.

Many of the profiles created by the tool impersonated Americans by using American-sounding names and setting their locations on X to various places in the US. The examples presented by the Justice Department used headshots against gray backgrounds as their profile photos, a pretty good indicator that they were created using AI. One account with the name Ricardo Abbott, which claimed to be from Minneapolis, posted a video of Russian President Vladimir Putin justifying Russia's actions in Ukraine. Another account with the name Sue Williamson posted a video of Putin saying that the war in Ukraine isn't about territorial conflict and is a matter of "principles on which the New World Order will be based." These posts were then liked and reposted by other bots in the network. 

It's worth noting that while this particular bot farm was confined to X, the people behind it had plans to expand to other platforms, based on the authorities' analysis of the Meliorator software. Foreign actors have been using social media to spread political disinformation for years, but now they've added AI to their arsenal. Back in May, OpenAI reported that it had dismantled five covert influence operations originating from Russia, China, Israel and Iran that were using its models to influence political outcomes.

"Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government," FBI Director Christopher Wray said in a statement. "The FBI is committed to working with our partners and deploying joint, sequenced operations to strategically disrupt our most dangerous adversaries and their use of cutting-edge technology for nefarious purposes."

As for RT, the media organization told Bloomberg: "Farming is a beloved pastime for millions of Russians." 

This article originally appeared on Engadget at https://www.engadget.com/us-officials-announce-the-takedown-of-an-ai-powered-russian-bot-farm-054034912.html?src=rss

Texas court blocks the FTC’s ban on noncompete agreements

The Federal Trade Commission's (FTC) ban on noncompete agreements was supposed to take effect on September 4, but a Texas court has postponed its implementation by siding with the plaintiffs in a lawsuit that seeks to block the rule. Back in April, the FTC banned noncompetes, which have been widely used in the tech industry for years, in a bid to drive innovation and protect workers' rights and wages. A lot of companies are unsurprisingly unhappy with the agency's rule — as NPR notes, Dallas tax services firm Ryan LLC sued the FTC hours after its announcement. The US Chamber of Commerce and other groups of American businesses eventually joined the lawsuit. 

"Noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism," FTC Chair Lina M. Khan said when the rule was announced. They prevent employees from moving to another company or from building businesses of their own in the same industry, so they may be stuck working in a job with lower pay or in an environment they don't like. But the Chamber of Commerce's chief counsel Daryl Joseffer called the ban an attempt by the government to micromanage business decisions in a statement sent to Bloomberg.

"The FTC’s blanket ban on noncompetes is an unlawful power grab that defies the agency’s constitutional and statutory authority and sets a dangerous precedent where the government knows better than the markets," Joseffer said. The FTC disagrees and told NPR that its "authority is supported by both statute and precedent."

US District Judge Ada Brown, an appointee of former President Donald Trump, wrote in her decision that "the text, structure, and history of the FTC Act reveal that the FTC lacks substantive rulemaking authority with respect to unfair methods of competition." Brown also said that the plaintiffs are "likely to succeed" in getting the rule struck down and that it's in the public's best interest to grant the plaintiffs' motion for preliminary injunction. The judge added that the court will make a decision "on the ultimate merits of this action on or before August 30."

This article originally appeared on Engadget at https://www.engadget.com/texas-court-blocks-the-ftcs-ban-on-noncompete-agreements-150020601.html?src=rss

Japan’s government says goodbye to floppy disks

Floppy disks may seem like a relic from an ancient era of computing, but there are still places, and even governments, that use them to run their most basic functions. Japan is no longer one of those countries.

Japan’s Digital Agency announced on Wednesday that it has eliminated the use of outdated floppy disks in its government computer systems. The only system still in place that requires floppy disks is an environmental system that monitors vehicle recycling, according to Reuters.

Digital Minister Taro Kono declared in a statement to the news agency, “We have won the war on floppy disks on June 28!” Presumably, the statement wasn’t printed on that annoying dot matrix printer paper with the edges that never tear straight.

Kono began his crusade against ‘90s-era computer technology in 2022, shortly after his appointment to the Digital Agency. Around 1,900 of Japan’s government procedures used floppy disks and other outdated technology such as fax machines, CDs and MiniDiscs. He famously declared “a war on floppy discs [sic]” to his 2.5 million followers on X.

Of course, Japan isn’t the only country that used to rely on floppy disks long after the rest of the world moved on to more efficient forms of data storage. The US military was still using 8-inch floppy disks to operate its Strategic Automated Command and Control System (SACCS), a 1970s computer system that received nuclear launch codes and sent emergency messages to military centers and field sources. The world learned the scary truth about SACCS thanks to CBS’s 60 Minutes and reporter Lesley Stahl. The Defense Department finally phased out the system in 2019. Let’s hope they also removed the shag carpeting and velvet upholstery.

This article originally appeared on Engadget at https://www.engadget.com/japans-government-says-goodbye-to-floppy-disks-214449682.html?src=rss

Texas age-verification law for pornography websites is going to the Supreme Court

Texas will be the main battleground for a case about porn websites that is now headed to the Supreme Court. The Free Speech Coalition, a nonprofit group that represents the adult industry, petitioned the top court in April to review a state law that requires websites with explicit material to collect proof of users' ages. SCOTUS today agreed to take on the case, which challenges a previous ruling by the US Court of Appeals for the 5th Circuit, as part of its next term beginning in October.

Texas was one of many states over the last year to pass this type of age-verification legislation aimed at porn websites. While supporters of these bills have said they are intended to protect minors from seeing inappropriate content, their critics have called the laws an overreach that could create new privacy risks. In response to the laws, Pornhub ended its operation in those states, a move that attracted public attention to the situation.

"While purportedly seeking to limit minors' access to online sexual content, the Act imposes significant burdens on adults' access to constitutionally protected expression," the FSC petition says. "Of central relevance here, it requires every user, including adults, to submit personally identifying information to access sensitive, intimate content over a medium — the internet — that poses unique security and privacy concerns."

This case is one of the latest First Amendment rights questions to go before the Supreme Court. Earlier this month, the court remanded a case about social media content moderation back to lower courts and passed judgment on how closely social media companies can engage with federal officials about misinformation.

This article originally appeared on Engadget at https://www.engadget.com/texas-age-verification-law-for-pornography-websites-is-going-to-the-supreme-court-233511418.html?src=rss

The Morning After: Supreme Court rejects rulings on social media moderation

Two state laws from Texas and Florida that could upend the way social media companies handle content moderation are still up in the air. The Supreme Court sent the challenges back to lower courts, vacating previous rulings. In a 9-0 decision in Moody v. NetChoice and NetChoice v. Paxton, the Supreme Court said that earlier rulings in lower courts had not properly evaluated the laws’ impact on the First Amendment. Never heard of NetChoice? It’s an industry group representing Meta, Google, X and other large tech companies. So it’s incredibly well-funded. NetChoice argued that the laws were unconstitutional.

The Texas law, passed in 2021, allows users to sue large social media companies over alleged “censorship” of their political views. The Supreme Court suspended the law in 2022 following a legal challenge. The Florida measure, also passed in 2021, attempted to impose fines on social media companies for banning politicians – that’s also on hold.

Justice Elena Kagan said that lower court rulings in both cases “concentrated” on the issue of “whether a state law can regulate the content-moderation practices used in Facebook’s News Feed (or near equivalents).” However, she writes, “they did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications.” It seems the lower courts need to do their homework.

— Mat Smith

The Kindle Scribe Essentials bundle is nearly $200 off at Amazon

Sega’s new Crazy Taxi reboot will be an open-world MMO

The best gaming handhelds

The Sims 4’s Lovestruck expansion lets you dive into a steamy polyamory sandbox

You can get these reports delivered daily direct to your inbox. Subscribe right here!


Midjourney, a popular AI-powered image generator, is creating images of Donald Trump and Joe Biden despite saying that it would block users from doing so ahead of the upcoming US presidential election. Engadget managed to get the tool to create images of Trump multiple times. The only time Midjourney refused to create an image of Trump or Biden was when it was asked to do so explicitly. “The Midjourney community voted to prevent using ‘Donald Trump’ and ‘Joe Biden’ during election season,” the service said in that instance. Midjourney did not respond to a request for comment from Engadget.

Continue reading.

Talking of AI-generated fakes, YouTube quietly added a new policy last month that lets you request the removal of AI-generated content that features your likeness. YouTube says several factors will determine whether it considers a removal, including whether the content is altered or synthetic (and whether it’s disclosed as such), easily identifiable as the person in question or realistic.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-supreme-court-rejects-rulings-on-social-media-moderation-111527524.html?src=rss

Midjourney is creating Donald Trump pictures when asked for images of ‘the president of the United States’

Midjourney, a popular AI-powered image generator, is creating images of Donald Trump and Joe Biden despite saying that it would block users from doing so ahead of the upcoming US presidential election.

When Engadget prompted the service to create an image of “the president of the United States,” Midjourney generated four images in various styles of former president Donald Trump.

Midjourney created an image of Trump despite saying it wouldn't.
Midjourney

When asked to create an image of “the next president of the United States,” the tool generated four images of Trump as well.

Midjourney generated Donald Trump images despite saying it wouldn't.
Midjourney

When Engadget prompted Midjourney to create an image of “the current president of the United States,” the service generated three images of Trump and one image of former president Barack Obama.

Midjourney also created an image of former President Obama
Midjourney

The only time Midjourney refused to create an image of Trump or Biden was when it was asked to do so explicitly. “The Midjourney community voted to prevent using ‘Donald Trump’ and ‘Joe Biden’ during election season,” the service said in that instance. Other users on X were able to get Midjourney to generate Trump’s images too.

The tests show that Midjourney’s guardrails to prevent users from generating images of Trump and Biden ahead of the upcoming US presidential election aren’t enough — in fact, it’s really easy for people to get around them. Other chatbots like OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Meta AI did not create images of Trump or Biden despite multiple prompts.

Midjourney did not respond to a request for comment from Engadget.

Midjourney was one of the first AI-powered image generators to explicitly ban users from generating images of Trump and Biden. “I know it’s fun to make Trump pictures — I make Trump pictures,” the company’s CEO, David Holz, told users in a chat session on Discord earlier this year. “However, probably better to just not — better to pull out a little bit during this election. We’ll see.” A month later, Holz reportedly told users that it was time to “put some foots down on election-related stuff for a bit” and admitted that “this moderation stuff is kind of hard.” The company’s existing content rules prohibit the creation of “misleading public figures” and “events portrayals” with the “potential to mislead.”

Last year, Midjourney was used to create a fake image of Pope Francis wearing a puffy white Balenciaga jacket that went viral. It was also used to create fake images of Trump being arrested ahead of his arraignment at the Manhattan Criminal Court last year for his involvement in a hush money payment made to adult film star Stormy Daniels. Shortly afterwards, the company halted free trials of the service and, instead, required people to pay at least $10 a month to use it.

Last month, the Center for Countering Digital Hate, a non-profit organization that aims to stop the spread of misinformation and hate speech online, found that Midjourney’s guardrails against generating misleading images of popular politicians including Trump and Biden failed 40% of its tests. The CCDH was able to use Midjourney to create an image of President Biden being arrested and Trump appearing next to a body double. The CCDH was also able to bypass Midjourney’s guardrails by using descriptions of each candidate’s physical appearance rather than their names to generate misleading images.

“Midjourney is far too easy to manipulate in practice – in some cases it’s completely evaded just by adding punctuation to slip through the net,” wrote CCDH CEO Imran Ahmed in a statement at the time. “Bad actors who want to subvert elections and sow division, confusion and chaos will have a field day, to the detriment of everyone who relies on healthy, functioning democracies.”

Earlier this year, a coalition of 20 tech companies including OpenAI, Google, Meta, Amazon, Adobe and X signed an agreement to help prevent deepfakes in elections taking place in 2024 around the world by preventing their services from generating images and other media that would influence voters. Midjourney was absent from that list.

This article originally appeared on Engadget at https://www.engadget.com/midjourney-is-creating-donald-trump-pictures-when-asked-for-images-of-the-president-of-the-united-states-212427937.html?src=rss

Supreme Court remands social media moderation cases over First Amendment issues

Two state laws that could upend the way social media companies handle content moderation are still in limbo after a Supreme Court ruling sent the challenges back to lower courts, vacating previous rulings. In a 9-0 decision in Moody v. NetChoice and NetChoice v. Paxton, the Supreme Court said that earlier rulings in lower courts had not properly evaluated the laws’ impact on the First Amendment.

The cases stem from two state laws, from Texas and Florida, which tried to impose restrictions on social media companies’ ability to moderate content. The Texas law, passed in 2021, allows users to sue large social media companies over alleged “censorship” of their political views. The Supreme Court suspended the law in 2022 following a legal challenge. Meanwhile, the Florida measure, also passed in 2021, attempted to impose fines on social media companies for banning politicians. That law has also been on hold pending legal challenges.

Both laws were challenged by NetChoice, an industry group that represents Meta, Google, X and other large tech companies. NetChoice argued that the laws were unconstitutional and would essentially prevent large platforms from performing any kind of content moderation. The Biden Administration also opposed both laws. In a statement, NetChoice called the decision “a victory for First Amendment rights online.”

In a decision authored by Justice Elena Kagan, the court said that lower court rulings in both cases “concentrated” on the issue of “whether a state law can regulate the content-moderation practices used in Facebook’s News Feed (or near equivalents).” But, she writes, “they did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications.”

Essentially, the usually divided court agreed that the First Amendment implications of the laws could have broad impacts on parts of these sites unaffected by algorithmic sorting or content moderation (like direct messages, for instance) as well as on speech in general. Analysis of those externalities, Kagan wrote, simply never occurred in the lower court proceedings. The decision to remand means that analysis should take place, and the case may come back before SCOTUS in the future.

“In sum, there is much work to do below on both these cases … But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved,” Kagan wrote. 

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-remands-social-media-moderation-cases-over-first-amendment-issues-154001257.html?src=rss

FCC chair asks telecoms companies to prove they’re actually trying to stop political AI robocalls

FCC Chairwoman Jessica Rosenworcel has drafted a series of letters to nine major telecom companies, including AT&T and Comcast, to ask if they’re actually doing anything about AI political robocalls. AI-generated voices are getting pretty good at mimicking humans and we’ve already seen this technology in action, when an audio deepfake urged voters to skip the New Hampshire Democratic primary.

“We know that AI technologies will make it cheap and easy to flood our networks with deepfakes used to mislead and betray trust. It is especially chilling to see AI voice cloning used to impersonate candidates during elections. As AI tools become more accessible to bad actors and scammers, we need to do everything we can to keep this junk off our networks,” wrote Rosenworcel.

It’s worth noting that all AI robocalls, political or not, were banned back in February, but the big telecom companies have yet to announce any enforcement plans. The mandate, however, does give state attorneys general the ability to prosecute those involved in the robocalls.

Rosenworcel has also been trying to force political campaigns to disclose whether or not they used AI in TV or radio ads, as reported by US News & World Report. The proposed plan, however, has faced opposition from the Republican chair of the Federal Election Commission. Chairman Sean Cooksey wrote in a letter to Rosenworcel that the plan would override the FEC’s authority to enforce federal campaign law, prompting a legal challenge.

This article originally appeared on Engadget at https://www.engadget.com/fcc-chair-asks-telecoms-companies-to-prove-theyre-actually-trying-to-stop-political-ai-robocalls-184227549.html?src=rss

Supreme Court ruling may allow officials to coordinate with social platforms again

The US Supreme Court has ruled on a controversial attempt by two states, Missouri and Louisiana, to limit Biden Administration officials and other government agencies from engaging with workers at social media companies about misinformation, election interference and other policies. Rather than set new guidelines on acceptable communication between these parties, the Court held that the plaintiffs lacked standing to bring the issue at all. 

In Murthy v. Missouri, the states (as well as five individual social media users) alleged that, in the midst of the COVID pandemic and the 2020 election, officials at the CDC, FBI and other government agencies "pressured" Meta, Twitter and Google "to censor their speech in violation of the First Amendment."

The Court wrote, in an opinion authored by Justice Barrett, that "the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. Here, at the preliminary injunction stage, they must show that they are likely to succeed in carrying that burden." She went on to describe this as "a tall order." 

Though a Louisiana District Court order blocking contact between social media companies and Biden Administration officials has been on hold, the case has still had a significant impact on relationships between these parties. Last year, Meta revealed that its security researchers were no longer receiving their usual briefings from the FBI or CISA (Cybersecurity and Infrastructure Security Agency) regarding foreign election interference. FBI officials had also warned that there were instances in which they discovered election interference attempts but didn’t warn social media companies due to additional layers of legal scrutiny implemented following the lawsuit. With today's ruling it seems possible such contact might now be allowed to continue. 

In part, it seems the Court was reluctant to rule on the case because of the potential for far-reaching First Amendment implications. Among the arguments made by the plaintiffs was an assertion of a "right to listen" theory: that social media users have a Constitutional right to engage with content. "This theory is startlingly broad," Barrett wrote, "as it would grant all social-media users the right to sue over someone else’s censorship." The opinion was joined by Justices Roberts, Sotomayor, Kagan, Kavanaugh and Jackson. Justice Alito dissented, and was joined by Justices Thomas and Gorsuch. 

The case was one of a handful involving free speech and social media to come before the Supreme Court this term. The court is also set to rule on two linked cases involving state laws from Texas and Florida that could upend the way social media companies handle content moderation.

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-ruling-may-allow-officials-to-coordinate-with-social-platforms-again-144045052.html?src=rss

Julian Assange has been released from prison in a plea deal with the US

WikiLeaks founder Julian Assange has been released from prison and has agreed to plead guilty to violating the Espionage Act. The WikiLeaks account on X, formerly Twitter, announced his release after he was granted bail by the High Court in London. It also tweeted a video that appears to show Assange boarding a plane at Stansted Airport. The WikiLeaks founder and former editor-in-chief is expected to appear in a courtroom in the US Northern Mariana Islands on June 26 in order to finalize his plea deal with the US government. 

According to a letter from the US Department of Justice obtained by The Washington Post, Assange is specifically pleading guilty to "conspiring to unlawfully obtain and disseminate classified information relating to the national defense of the United States." He will also be returning to Australia, his country of citizenship, right after the proceedings. CBS News reports that Justice Department prosecutors recommended a sentence of 62 months, and seeing as Assange already spent more than five years in a UK prison, he won't be spending any time behind bars in the US. 

Assange was the editor-in-chief of WikiLeaks when the website published classified US information, obtained by whistleblower and former Army intelligence analyst Chelsea Manning, about the wars in Afghanistan and Iraq. In 2010, Sweden issued an arrest warrant for Assange over sexual assault allegations made by two women. Swedish authorities dropped their investigation into the allegations in 2017. 

Assange sought asylum at the Ecuadorian Embassy in London after losing his appeal against the warrant, and he lived there for seven years until he was evicted. Lenín Moreno, the president of Ecuador at the time, explained that his asylum was "unsustainable and no longer viable" because he displayed "discourteous and aggressive behavior." London's Metropolitan Police Service removed Assange from the embassy and arrested him on behalf of the US under an extradition warrant.

In WikiLeaks' announcement of his release, it said Assange left Belmarsh maximum security prison "after having spent 1,901 days there." The organization said that the "global campaign" by "press freedom campaigners, legislators and leaders from across the political spectrum" enabled "a long period of negotiations with the US Department of Justice" that led to the plea deal. 

This article originally appeared on Engadget at https://www.engadget.com/julian-assange-has-been-released-from-prison-in-a-plea-deal-with-the-us-044226610.html?src=rss