X is reportedly now complying with orders from Brazil’s Supreme Court

X is reportedly reversing course after weeks of refusing to comply with conditions set by the Brazilian Supreme Court that would allow it to operate in the country again. According to The New York Times, the company’s lawyers said in a Friday court filing that X has named a legal representative in Brazil as demanded by Justice Alexandre de Moraes, removed the accounts the judge had identified as a threat to democracy and paid the fines it owed. But the publication also reports that Brazil’s Supreme Court said X did not submit all the necessary paperwork, and now has five days to do so.

According to Reuters, the missing paperwork is the documentation that would prove X formally appointed a legal representative in Brazil, as required by Brazilian law. X named Rachel de Oliveira Conceicao as its new legal representative in the Friday filing. The company has been working to restore service to users in Brazil after it was blocked at the end of August, and it briefly came back online earlier this week using Cloudflare’s DNS. But the company said this was “inadvertent and temporary.” In a statement at the time, an X spokesperson said, “While we expect the platform to be inaccessible again in Brazil soon, we continue efforts to work with the Brazilian government to return very soon for the people of Brazil.”

Brazil has threatened X and Starlink with daily fines of nearly $1 million if they do not comply with the ban in the country. Justice Moraes also ruled that users in Brazil could be fined roughly $8,900 if caught using a VPN to access X. The company’s latest move is a step toward resolving the issue and potentially bringing X back to Brazil legally.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-is-reportedly-now-complying-with-orders-from-brazils-supreme-court-170651920.html?src=rss

Tesla Semi fire required 50,000 gallons of water to extinguish

California firefighters needed to spray 50,000 gallons of water to extinguish a roadside Tesla Semi fire, the US National Transportation Safety Board (NTSB) announced in a preliminary report. Crews also used an aircraft to drop fire retardant in the "immediate area as a precautionary measure," according to the agency.

The crash happened at 3:13 AM on August 19 on the I-80 freeway east of Sacramento. The tractor-trailer departed the roadway while navigating a curve, struck a traffic delineator and eventually hit a tree. The driver was uninjured but was taken to a hospital as a precaution.

The Tesla Semi’s large 900 kWh battery caught fire and reached a temperature of 1,000 degrees Fahrenheit while spewing toxic fumes. It continued to burn into the late afternoon as firefighters doused it with water to cool it down (Tesla sent a technical expert to assess high-voltage hazards and fire safety). It wasn’t until 7:20 PM (over 16 hours after the crash) that the freeway was reopened.

All of that caught the attention of the NTSB, which sent a team of investigators, mainly to examine the fire risks posed by large lithium-ion battery packs. The agency — which can only make safety recommendations and has no enforcement authority — said that "all aspects of the crash remain under investigation while the NTSB determines the probable cause." 

Given the long road shutdown time, dangerously hot fire and toxic fumes, the accident is likely to provoke a lot of discussion in and out of government. The NTSB concluded in 2021 that battery fires pose a risk to emergency responders and that manufacturers' guidelines around such fires were inadequate. 

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/tesla-semi-fire-required-50000-gallons-of-water-to-extinguish-120006477.html?src=rss

A new report raises concerns about the future of NASA

A new report from the National Academies of Sciences, Engineering and Medicine (NASEM) raises serious concerns about the future of America’s space exploration agency.

The NASEM report was written by a panel of aerospace experts and lays out what it sees as a possible "hollow future” for the National Aeronautics and Space Administration (NASA). It points to underfunding stemming from a “declining long-term national emphasis on aeronautics and civil space,” an assessment NASA itself is aware of and agrees with. The report also notes that NASA’s problems extend far beyond not having enough funding to carry out its missions and operations.

Some of the report’s “core findings” suggest areas of concern that could affect the space agency’s future. These include a focus on “short-term measures without adequate consideration for longer-term needs and implications,” reliance on “milestone-based purchase-of-service contracts” and inefficiency due to “slow and cumbersome business operations.” The report also raised concerns about the current generation of talent being siphoned off by private aerospace companies, and the next generation of engineers not receiving an adequate foundation of knowledge due to underfunded public school systems. Finally, the report states bluntly that NASA’s infrastructure “is already well beyond its design life.”

These and other issues could lead to even more serious problems. Norman Augustine, a former Lockheed Martin chief executive and the report’s lead author, told The Washington Post that reliance on the private sector could further erode NASA's workforce, reducing its role to one of oversight instead of problem-solving.

Congress could allocate more funds to NASA to address these concerns, but that’s unlikely while it constantly struggles to prevent government shutdowns. Instead, Augustine says NASA could prioritize its efforts around more strategic goals and initiatives.

This article originally appeared on Engadget at https://www.engadget.com/science/space/a-new-report-raises-concerns-about-the-future-of-nasa-184643260.html?src=rss

Utah judge blocks law preventing youth from accessing social media freely

On Tuesday, Chief US District Judge Robert Shelby granted a preliminary injunction to block Utah from limiting the social media usage of minors. Republican Governor Spencer Cox signed the Utah Minor Protection in Social Media Act in March. It was supposed to take effect on October 1, but the court’s decision to block the law is a victory for young social media users in Utah.

This isn’t the first time Utah’s governor has attempted to limit social media use among young people in the state. Last year, he signed two bills that required parents to grant permission for teens to create social media accounts, and those accounts came with limitations like curfews and age verification. He replaced the older laws in March after lawsuits challenged their legality.

Under the law, social media companies would have been forced to verify the age of all users. Minors who registered for an account would have been subject to various limitations: the content they shared would be visible only to connected accounts, and their accounts could not be searched for or messaged by anyone other than followers or friends, making them effectively invisible to strangers.

The preliminary injunction rests primarily on NetChoice’s claim that the law violates the First Amendment. NetChoice is a trade association formed by tech giants such as X (formerly Twitter), Snap, Meta and Google. The association has won court battles blocking similar laws, entirely or in part, in states including Arkansas, California and Texas.

This article originally appeared on Engadget at https://www.engadget.com/social-media/utah-judge-blocks-law-preventing-youth-from-accessing-social-media-freely-160008587.html?src=rss

US senators urge regulators to probe potential AI antitrust violations

The US government has noticed the potentially negative effects of generative AI on areas like journalism and content creation. Senator Amy Klobuchar, along with seven Democratic colleagues, urged the Federal Trade Commission (FTC) and Justice Department to probe generative AI products like ChatGPT for potential antitrust violations, the lawmakers wrote in a press release.

"Recently, multiple dominant online platforms have introduced new generative AI features that answer user queries by summarizing, or, in some cases, merely regurgitating online content from other sources or platforms," the letter states. "The introduction of these new generative AI features further threatens the ability of journalists and other content creators to earn compensation for their vital work." 

The lawmakers went on to note that traditional search results lead users to publishers' websites while AI-generated summaries keep the users on the search platform "where that platform alone can profit from the user's attention through advertising and data collection." 

These products also have significant competitive consequences that distort markets for content. When a generative AI feature answers a query directly, it often forces the content creator—whose content has been relegated to a lower position on the user interface—to compete with content generated from their own work.

The fact that AI may be scraping news sites and then not even directing users to the original source could be a form of "exclusionary conduct or an unfair method of competition in violation of antitrust laws," the lawmakers concluded. (That's on top of being a potential violation of copyright laws, but that's another legal battle altogether.)

Lawmakers have already proposed a couple of bills designed to protect artists, journalists and others from unauthorized generative AI use. In July, three senators introduced the COPIED Act to combat and monitor the rise of AI content and deepfakes. Later in the month, a group of senators introduced the NO FAKES Act, a bill that would make it illegal to create digital recreations of a person's voice or likeness without their consent.

AI poses a particularly large risk to journalism, both local and global, by removing the sources of revenue that allow for original and investigative reporting. The New York Times, for one, cited instances of ChatGPT providing users with "near-verbatim excerpts" from paywalled articles. OpenAI recently admitted that it's impossible to train generative AI without copyrighted materials. 

This article originally appeared on Engadget at https://www.engadget.com/ai/us-senators-urge-regulators-to-probe-potential-ai-antitrust-violations-110012387.html?src=rss

Majority of Attorneys General support a warning label for social media platforms

US Surgeon General Vivek Murthy published an op-ed in June calling for social media to come with a warning label about its negative health impacts, similar to the warnings placed on cigarettes and tobacco products. Now, 42 attorneys general have written an open letter to Congress to signal their support for Murthy's plan.

"This ubiquitous problem requires federal action—and a surgeon general’s warning on social media platforms, though not sufficient to address the full scope of the problem, would be one consequential step toward mitigating the risk of harm to youth," the group's letter reads. "A warning would not only highlight the inherent risks that social media platforms presently pose for young people, but also complement other efforts to spur attention, research, and investment into the oversight of social media platforms."

Nearly every state attorney general signed the letter; the only holdouts are Alaska, Arizona, Iowa, Kansas, Louisiana, Missouri, Montana, Nebraska, Ohio, Texas and West Virginia. Attorneys general from American Samoa, the District of Columbia and the US Virgin Islands also signed.

The attorneys general cited the Kids Online Safety Act and the Children and Teens Online Privacy Protection Act, which both recently passed in the Senate, as other important measures for protecting young people's mental health. The measures took multiple tries to get to a floor vote in the Senate, and it's unclear whether they have the support to pass in the House of Representatives.

This article originally appeared on Engadget at https://www.engadget.com/social-media/majority-of-attorneys-general-support-a-warning-label-for-social-media-platforms-184138728.html?src=rss

Even the NSA now has a podcast

Well, it's official: everyone has a podcast. Today, the NSA launched No Such Podcast, a nod to the agency's old nickname, No Such Agency, from the days when its mere existence was classified. The NSA bills the podcast as bringing "people to the table from across the agency to discuss our role as a combat support agency, our foreign signals intelligence and cybersecurity missions, and so much more. NSA is known as home to the world's greatest codemakers and codebreakers — their stories are now being decoded."

However, the podcast is far from Edward Snowden-level sharing. The NSA's chief of strategic communications, Sara Siegle, is quick to add that some of the agency's work is too sensitive to discuss. This podcast will be a platform to tell "more" stories while sharing expertise and highlighting government officials.

No Such Podcast is available on YouTube and wherever you regularly get your podcasts. The NSA published two episodes on launch day, with the first focusing on cybersecurity and the other detailing the agency's role in finding Osama Bin Laden. The NSA will release six more episodes weekly through mid-October.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/even-the-nsa-now-has-a-podcast-140028493.html?src=rss

Elon Musk’s Starlink will comply with the Brazil X ban after all

Update, September 3, 5:15PM ET: Starlink has reversed course on its decision to not comply with Brazil’s block of X. In a statement posted to X, the company said:

“To our customers in Brazil (who may not be able to read this as a result of X being blocked by @alexandre):

The Starlink team is doing everything possible to keep you connected.

Following last week’s order from @alexandre that froze Starlink’s finances and prevents Starlink from conducting financial transactions in Brazil, we immediately initiated legal proceedings in the Brazilian Supreme Court explaining the gross illegality of this order and asking the Court to unfreeze our assets. Regardless of the illegal treatment of Starlink in freezing of our assets, we are complying with the order to block access to X in Brazil.

We continue to pursue all legal avenues, as are others who agree that @alexandre’s recent orders violate the Brazilian constitution.”

The original story, "Starlink is refusing to comply with Brazil's X ban," as published on September 2, continues below unedited.


After Brazil’s Supreme Court ordered internet service providers to block access to X, the platform was largely unavailable in the country by Sunday night. The only ways to access X since then have been through VPNs (for those willing to risk huge fines) and Starlink, the satellite internet service that’s also run by X owner Elon Musk.

The president of Brazil’s telecom agency, Anatel, said that Starlink refused to comply with the court order until officials released its frozen assets, The New York Times reports. Alexandre de Moraes, the Supreme Court justice who has been on the warpath against X, also blocked the local bank accounts of Starlink, which is a SpaceX subsidiary. Moraes, who has accused X of disseminating hate speech and disinformation, is said to have done so with the aim of collecting $3 million in fines levied against X for ignoring his orders to block certain accounts.

Starlink petitioned the court to unblock its assets but the court dismissed the request. Musk called the Starlink account freeze "illegal," arguing that SpaceX and X are separate entities while claiming he owns 40 percent of the former.

There are around 250,000 Starlink customers in Brazil. The service has proven popular there in rural areas and among Indigenous tribes in the Amazon. Starlink pledged to provide free internet access to its Brazilian customers while its accounts in the country remain blocked.

If Starlink maintains its stance on X, Brazil could revoke the internet service’s license. If it continues to operate after that, officials could seize equipment from 23 ground stations. The gear helps Starlink improve the quality of its satellite connections.

Meanwhile, a majority of a Supreme Court panel upheld the X ban, which Moraes issued after Musk defied several of his orders, at a trial on Monday. X will have the right to appeal the decision. The panel also approved an order by Moraes imposing a daily fine of 50,000 Brazilian reais (around $8,900) on anyone caught using a VPN to access X in Brazil.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/elon-musks-starlink-will-comply-with-the-brazil-x-ban-after-all-181144471.html?src=rss