Brazil bans X for refusing to comply with Supreme Court order

Brazilian Supreme Court Justice Alexandre de Moraes has ordered the nation’s internet service providers to block the social media platform X. The New York Times reports that the order stems from owner Elon Musk’s refusal to appoint a legal representative for his case and to comply with Moraes’ order to shut down X accounts the justice deemed harmful to the democratic process. The order has been published online by Brazilian news site Poder 360.

The justice gave telecom companies and tech giants deadlines to remove X from their platforms and app stores. Apple and Google have five days to take down the social media app from their app stores. Brazil’s telecommunications agency Anatel has confirmed it has received the order, and ISPs in the country have just 24 hours to comply.

Justice Moraes’ order doesn’t just block the country’s access to X. It also makes it a crime to use the app through a virtual private network (VPN). Anyone caught accessing X with a VPN could face a daily fine of 50,000 Brazilian reais (around $8,900).

Justice Moraes also froze the Brazilian bank accounts of SpaceX’s Starlink internet service provider on Thursday to further pressure Musk to comply with the court’s order. SpaceX, like X, is a private company majority owned by Musk, and X has $3 million in unpaid fines related to its case in the country. The day before, Justice Moraes had threatened to ban the X platform entirely across Brazil if the social media company did not appoint a legal representative in the country. The deadline passed without any change to the court’s docket, so the judge followed through on his threat.

Starlink expressed its disapproval of the order and vowed to fight the ruling. It even threatened to make its service free to customers to subvert the justice’s order.

The legal fight between Justice Moraes and Musk has been simmering for months. The Supreme Court justice has also served as Brazil’s electoral authority, monitoring candidates and issuing orders directing them to steer clear of spreading false information through internet and social media channels.

Brazil’s 2022 presidential election between infamous incumbent Jair Bolsonaro and challenger and former President Luiz Inácio Lula da Silva was reportedly rife with attempts to feed voters false information. Justice Moraes was, until recently, president of the nation's Superior Electoral Court, which gave him the power to order takedowns of content that violated previous court orders. The judge issued a similar block on the messaging app Telegram for failing to freeze offending accounts; it was lifted after the company complied.

Musk characterized Moraes’ directives to take down or freeze similar misinformation accounts from X as “censorship orders.” Earlier this month, Musk expressed his continued refusal to comply with the court by closing X’s Brazilian office in order “to protect the safety of our staff.” X’s Global Governments Affairs team also promised to publish all of “Judge de Moraes’ illegal demands and all related court filings.”

Starlink’s local bank accounts are frozen as X prepares to be shut down in Brazil

A judge in Brazil has blocked Starlink’s bank accounts in the country amid a deepening dispute with X. The move comes as the same Supreme Court judge has threatened to shut down X in the country, and is a direct response to the ongoing legal battle with the social media company, Reuters reported.

X owner Elon Musk has been feuding with Brazil Supreme Court judge Alexandre de Moraes for months over demands to block certain accounts in the country. The company closed down its operations in Brazil earlier this month as a result of the court orders, which X has characterized as “censorship orders.”

Now, Moraes is apparently attempting to use one of Musk’s other companies, SpaceX-owned Starlink, in an attempt to get X to comply with the court order. “This order is based on an unfounded determination that Starlink should be responsible for the fines levied—unconstitutionally—against X,” Starlink wrote in a statement on X. “It was issued in secret and without affording Starlink any of the due process of law guaranteed by the Constitution of Brazil. We intend to address the matter legally.”

Moraes has also threatened to shut down X in the country entirely. On Wednesday, the judge said X would be shut down in Brazil if it didn’t appoint a legal representative in the country. X said in an update Thursday, shortly after that deadline had passed, that it expects Moraes to order the shutdown “soon.”

“We are absolutely not insisting that other countries have the same free speech laws as the United States,” the company wrote in a statement published in English and Portuguese. “The fundamental issue at stake here is that Judge de Moraes demands we break Brazil’s own laws. We simply won’t do that.” The company said it planned to publish Moraes' "illegal demands and all related court filings" in the coming days. 

FCC fines telecoms operator $1 million for transmitting Biden deepfake

In January, calls using an AI-generated voice imitating President Biden instructed voters not to take part in the New Hampshire primary. Now, as the 2024 election nears, the Federal Communications Commission is sending a message by further cracking down on those responsible for the Biden deepfake. Lingo Telecom, which transmitted the fraudulent calls, will pay the FCC a $1 million civil penalty and must implement a compliance plan.

In response to the settlement, Enforcement Bureau Chief Loyaan A. Egal stated, "...the potential combination of the misuse of generative AI voice-cloning technology and caller ID spoofing over the U.S. communications network presents a significant threat. This settlement sends a strong message that communications service providers are the first line of defense against these threats and will be held accountable to ensure they do their part to protect the American public."

This step follows the FCC's proposed $6 million fine for Steven Kramer, the political consultant who directed the calls. The FCC alleges he also violated the Truth in Caller ID Act by spoofing a local politician's phone number. The enforcement action in Kramer's case is still pending. 

Texas judge blocks the FTC from enforcing its ban on noncompete agreements

The Federal Trade Commission's (FTC) efforts to ban noncompete agreements have been blocked by a federal judge in Texas. According to The Washington Post, US District Judge Ada Brown has determined that the agency doesn't have the authority to enforce the rule, which was supposed to take effect on September 4. She reportedly wrote in her decision that the FTC only looked at "inconsistent and flawed empirical evidence" and didn't consider evidence in support of noncompetes. "The role of an administrative agency is to do as told by Congress, not to do what the agency thinks it should do," she added.

FTC Chair Lina M. Khan explained that "noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism" when the agency voted 3-2 in favor of the ban. Noncompete agreements are widely used in the tech industry, and preventing companies from adding them to contracts would let workers move freely to a new job or start a business in the same field. The FTC's two Republican commissioners, Melissa Holyoak and Andrew Ferguson, voted against the ban and said that the agency "overstepped the boundaries of its power."

In July, Brown temporarily blocked the rule's enforcement to assess the lawsuit filed by Dallas tax services firm Ryan LLC mere hours after the FTC announced the ban. The US Chamber of Commerce and other groups of American businesses eventually joined the tax firm in challenging the new rule on noncompete clauses. 

"We are disappointed by Judge Brown's decision and will keep fighting to stop noncompetes that restrict the economic liberty of hardworking Americans, hamper economic growth, limit innovation, and depress wages," FTC spokesperson Victoria Graham told The Post. "We are seriously considering a potential appeal, and today's decision does not prevent the FTC from addressing noncompetes through case-by-case enforcement actions."

A federal judge in Florida also blocked the rule last week, though only for the lawsuit's plaintiffs. Meanwhile, another judge in Pennsylvania ruled last month that the agency has the authority to enforce the ban in a separate case filed by a tree-care company in the state. All three cases could still be appealed and could even make their way to the Supreme Court. 

X is closing its operations in Brazil immediately, but its service will remain live for users

X says it's ending business operations in Brazil effective immediately, but the service will remain available to users in the country. The company says Alexandre de Moraes, the president of the Superior Electoral Court and a justice of the Supreme Federal Court, threatened one of X's legal representatives with arrest if it did not "comply with his censorship orders." 

According to Reuters, de Moraes demanded that X remove certain content from its platform. Rather than comply, X has opted to end its local operations "to protect the safety of our staff."

According to X, de Moraes made the threat in a "secret order," which it shared publicly. X owner Elon Musk claimed that the demand "would require us to break (in secret) Brazilian, Argentinian, American and international law." He added that, "The decision to close the 𝕏 office in Brazil was difficult, but, if we had agreed to @alexandre’s (illegal) secret censorship and private information handover demands, there was no way we could explain our actions without being ashamed."

"Despite our numerous appeals to the Supreme Court not being heard, the Brazilian public not being informed about these orders and our Brazilian staff having no responsibility or control over whether content is blocked on our platform, Moraes has chosen to threaten our staff in Brazil rather than respect the law or due process," X said in a statement on its Global Government Affairs account. "[de Moraes'] actions are incompatible with democratic government. The people of Brazil have a choice to make — democracy, or Alexandre de Moraes."

Musk has been railing against de Moraes for months. In April, he said he would defy orders from the justice to block certain accounts in Brazil, claiming that they were unconstitutional. In response, de Moraes opened an obstruction of justice inquiry against Musk. X said later in April that it would comply with every order issued by Brazil's top courts.

That same month, the House Judiciary Committee released an interim staff report claiming that the Brazilian government was trying to force X (and other social media platforms) to censor more than 300 accounts. It said that the accounts included those belonging to former Brazil president Jair Bolsonaro, a member of the country's federal senate and a journalist.

X does not have a public relations team that can be reached for comment.

California state IDs can now be stored in Apple Wallet and Google Wallet

California is the latest state to make its driver's licenses mobile. Today, Governor Gavin Newsom's office announced that both Apple Wallet and Google Wallet will be adding support for California driver's licenses and state IDs. The release clarified that residents still need to carry a physical copy of their identification, but that the mobile option would make age verification faster during air travel and at participating businesses.

“We’re partnering with two iconic California companies – Apple and Google – to provide convenient, private and secure driver’s licenses and ID cards directly on people’s phones," Newsom said. "This is a big step in our efforts to better serve all Californians, meeting people where they’re at and with technology people use every day.”

The addition of licenses to these tech companies' wallet apps is part of a bigger program by California's Department of Motor Vehicles. The mobile driver's license (mDL) pilot introduced a proprietary wallet app from the state agency that lets California residents add their driver's licenses to their smartphones. More than 500,000 residents have done so to date through the mDL program.

Arizona was the first state to bring driver's licenses to Apple Wallet in 2022, although both Apple and Google had been exploring the technology for years before that. Maryland, Colorado, Georgia and Ohio have also adopted support for mobile identification. And any news about identification is a good reminder that Real ID requirements, which call for more documentation to board a plane or enter some government facilities, are slated to take effect in 2025.

US Homeland Security will reportedly collect face scans of migrant kids

Update, August 15, 5:50PM ET: The US Department of Homeland Security has issued a statement disputing some of MIT Technology Review's reporting. We've updated our post below with its statement and more details. 


The US Department of Homeland Security (DHS), which is looking to improve its facial recognition algorithms, is reportedly planning to use the facial data of migrant children entering the country for training. According to MIT Technology Review, the agency intends to collect and analyze facial captures of kids younger than 14. John Boyd, the assistant director of Homeland Security's Office of Biometric Identity Management who's involved in the development of biometric services for the government, told the publication that the collection will include children "down to the infant."

Programs that collect biometric information and even DNA samples from migrants entering the country typically only apply to people between 14 and 79 years old. Boyd said Homeland Security's plan was likely made possible by some of its sub-offices' decision to remove age restrictions for the collection of biometric data. Since the information is also supposed to be used for research purposes and not for the agency's actual operations, Homeland Security's restrictions on biometric collection also don't apply to the program.

Boyd told MIT Technology Review that the agency hasn't started collecting biometric information under the program yet, at least to the best of his knowledge, but that he can confirm that his office is funding it. He added that his office takes privacy seriously and that it doesn't share data with commercial industries. Data collected by the program could help improve facial recognition technologies' understanding of how faces change as humans age. The program could ultimately help authorities find missing children even after years have passed. 

However, critics and experts have raised concerns about collecting data from migrants, many of whom are entering the country in hopes of a better life and may feel they have no choice but to consent to having their facial and fingerprint information taken. It's even more concerning in this case, because children can't give their informed consent.

Homeland Security is disputing some of MIT Technology Review's reporting, though, and a spokesperson told Engadget that the publication got its information from a presentation meant to understand emerging technologies and their theoretical applications. "The DHS does not collect facial images from minors under 14, and has no current plans to do so for either operational or research purposes," the spokesperson said. 

FCC proposes new rules for AI-generated robocalls and robotexts

The Federal Communications Commission has proposed new rules governing the use of AI-generated phone calls and texts. Part of the proposal centers on creating a clear definition for AI-generated calls, while the rest focuses on consumer protection by making companies disclose when AI is being used in calls or texts.

"This provides consumers with an opportunity to identify and avoid those calls or texts that contain an enhanced risk of fraud and other scams," the FCC said. The agency is also looking ensure that legitimate uses of AI to assist people with disabilities to communicate remains protected.

Today's proposal is the latest action by the FCC to regulate how AI is used in robocalls and robotexts. The commission has already moved to place a ban on AI-generated voices in robocalls and has called on telecoms to crack down on the practice. Ahead of this year's November election, there has already been one notable use of AI robocalls attempting to spread misinformation to New Hampshire voters.

Senators introduce bill to protect individuals against AI-generated deepfakes

Today, a group of senators introduced the NO FAKES Act, a bill that would make it illegal to create digital recreations of a person's voice or likeness without that individual's consent. It's a bipartisan effort from Senators Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.) and Thom Tillis (R-N.C.), fully titled the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024.

If it passes, the NO FAKES Act would create an option for people to seek damages when their voice, face or body are recreated by AI. Both individuals and companies would be held liable for producing, hosting or sharing unauthorized digital replicas, including ones made by generative AI.

We've already seen many instances of celebrities finding AI imitations of themselves out in the world. "Taylor Swift" was used to scam people with a fake Le Creuset cookware giveaway. A voice that sounded a lot like Scarlett Johansson's showed up in a ChatGPT voice demo. AI can also be used to make political candidates appear to make false statements, with Kamala Harris the most recent example. And it's not only celebrities who can be victims of deepfakes.

"Everyone deserves the right to own and protect their voice and likeness, no matter if you’re Taylor Swift or anyone else," Senator Coons said. "Generative AI can be used as a tool to foster creativity, but that can’t come at the expense of the unauthorized exploitation of anyone’s voice or likeness."

The speed of new legislation notoriously lags behind the speed of new tech development, so it's encouraging to see lawmakers taking AI regulation seriously. Today's proposed act follows the Senate's recent passage of the DEFIANCE Act, which would allow victims of sexual deepfakes to sue for damages.

Several entertainment organizations have lent their support to the NO FAKES Act, including SAG-AFTRA, the RIAA, the Motion Picture Association, and the Recording Academy. Many of these groups have been pursuing their own actions to get protection against unauthorized AI recreations. SAG-AFTRA recently went on strike against several game publishers to try and secure a union agreement for likenesses in video games.

Even OpenAI is listed among the act's backers. "OpenAI is pleased to support the NO FAKES Act, which would protect creators and artists from unauthorized digital replicas of their voices and likenesses," said Anna Makanju, OpenAI's vice president of global affairs. "Creators and artists should be protected from improper impersonation, and thoughtful legislation at the federal level can make a difference."

The Senate just passed two landmark bills aimed at protecting minors online

The Senate has passed two major online safety bills amid years of debate over social media’s impact on teen mental health. The Kids Online Safety Act (KOSA) and the Children and Teens' Online Privacy Protection Act, also known as COPPA 2.0, passed the Senate in a vote of 91-3.

The bills will next head to the House, though it’s unclear if the measures will have enough support to pass. If passed into law, the bills would be the most significant pieces of legislation regulating tech companies in years.

KOSA requires social media companies like Meta to offer controls to disable algorithmic feeds and other “addictive” features for children under the age of 16. It also requires companies to provide parental supervision features and to safeguard minors from content that promotes eating disorders, self-harm, sexual exploitation and other harms.

One of the most controversial provisions in the bill creates what’s known as a “duty of care.” This means platforms are required to prevent or mitigate certain harmful effects of their products, like “addictive” features or algorithms that promote dangerous content. The Federal Trade Commission would be in charge of enforcing the standard.

The bill was originally introduced in 2022 but stalled amid pushback from digital rights and other advocacy groups who said the legislation would force platforms to spy on teens. A revised version, meant to address some of those concerns, was introduced last year, though the ACLU, EFF and other free speech groups still oppose the bill. In a statement last week, the ACLU said that KOSA would encourage social media companies “to censor protected speech” and “incentivize the removal of anonymous browsing on wide swaths of the internet.”

COPPA 2.0, on the other hand, has been less controversial among privacy advocates. An expansion of the 1998 Children's Online Privacy Protection Act, it aims to revise the decades-old law to better reflect the modern internet and social media landscape. If passed, the law would prohibit companies from targeting advertising to children and from collecting personal data on teens between 13 and 16 without consent. It also requires companies to offer an “eraser button” to delete children and teens’ personal information from a platform when “technologically feasible.”

The vote underscores how online safety has become a rare source of bipartisan agreement in the Senate, which has hosted numerous hearings on teen safety issues in recent years. The CEOs of Meta, Snap, Discord, X and TikTok testified at one such hearing earlier this year, during which South Carolina Senator Lindsey Graham accused the executives of having “blood on their hands” for numerous safety lapses.
