SpaceX is suing the California Coastal Commission for not letting it launch more rockets

Last week, the California Coastal Commission rejected a plan for SpaceX to launch up to 50 rockets this year at Vandenberg Space Force Base in Santa Barbara County. The company responded yesterday with a lawsuit, alleging that the state agency overreached its authority and discriminated against the company's CEO.

The Commission's goal is to protect California's coasts and beaches, as well as the animals that live there. The agency has authority over private companies' requests to use the state coastline, but it can't deny activities by federal departments. The denied launch request was actually made by the US Space Force on behalf of SpaceX, asking that the company be allowed to launch up to 50 of its Falcon 9 rockets, up from 36.

While the commissioners did raise concerns about SpaceX CEO Elon Musk's political speech and the spotty safety records at his companies during their review of the launch request, the assessment focused on the relationship between SpaceX and the Space Force. The Space Force's position is that "because it is a customer of — and reliant on — SpaceX’s launches and satellite network, SpaceX launches are a federal agency activity," the Commission's review stated. "However, this does not align with how federal agency activities are defined in the Coastal Zone Management Act’s regulations or the manner in which the Commission has historically implemented those regulations." The California Coastal Commission claimed that at least 80 percent of the SpaceX launches carry payloads for Starlink, SpaceX's own satellite internet business, rather than for government clients.

The SpaceX suit, filed in the US District Court for the Central District of California, seeks an order designating the launches as federal activity, which would remove the Commission's oversight from the company's future launch plans.

This article originally appeared on Engadget at https://www.engadget.com/science/space/spacex-is-suing-the-california-coastal-commission-for-not-letting-it-launch-more-rockets-204610537.html?src=rss

FTC ratifies ‘click-to-cancel’ rule, making it easier for consumers to end subscriptions

The Federal Trade Commission has made it easier for consumers to cancel subscriptions. In a decision that went down along party lines, the agency voted to ratify a “click-to-cancel” rule that will require providers to make it as easy to cancel a subscription as it is to sign up for one. First proposed last year, the rulemaking prohibits companies from misrepresenting their recurring services and memberships, and from failing to clearly disclose any material terms related to those offerings.

“Too often, businesses make people jump through endless hoops just to cancel a subscription,” said Chair Lina Khan. “The FTC’s rule will end these tricks and traps, saving Americans time and money. Nobody should be stuck paying for a service they no longer want.”

After considering more than 16,000 comments on the matter, the FTC decided not to write the final rulemaking exactly as originally proposed. Most notably, the agency scrapped a proposal that would have required companies to send consumers annual reminders of subscription renewals. It also dropped a rule that would have forced sellers to obtain the consent of customers trying to cancel before pitching them plan modifications or reasons to keep paying for the service.

A separate statement issued by Commissioner Rebecca Slaughter (PDF link) provides insight into the decision. Essentially, the agency felt the FTC Act doesn’t give it the authority to require a renewal notice. I’ll note here that the dissenting opinion (PDF link), written by Republican Commissioner Melissa Holyoak, contends that the entire rulemaking is overly broad, and accuses the Democratic majority of attempting to push through the change before next month's election.

“Americans understand the importance and value of such a requirement; many have discovered that they or their parents had been paying for years or even decades for a service wholly unused, such as a dial-up internet service from the 1990s,” Slaughter writes in her statement. “… Of course, we are always mindful that our authority under the FTC Act to issue rules under section 18 has limits; sometimes, as here, those limits prevent us from codifying in a rule practices that we might, as a matter of policy, prefer to require explicitly.”

Slaughter points out that state and federal lawmakers do have the authority to mandate renewal notices, and notes that some states, such as Virginia, have recently gone down that path. “The comment record compiled in this rulemaking proceeding strongly supports the wisdom of federal and state legislators’ carefully considering adopting such a law,” Slaughter writes.

Provided there’s no legal challenge to the FTC’s decision, today’s rulemaking will go into effect 180 days after it is published in the Federal Register. A challenge isn't out of the question: when the agency moved to ban noncompete clauses earlier this year, a federal judge in Texas issued a nationwide injunction, and that decision is still stuck in legal limbo.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/ftc-ratifies-click-to-cancel-rule-making-it-easier-for-consumers-to-end-subscriptions-160752238.html?src=rss

China calls allegations that it infiltrated US critical infrastructure a ‘political farce’

China has denied allegations by the US government and Microsoft that a state-sponsored hacking group called Volt Typhoon has infiltrated US critical infrastructure, according to Bloomberg. The country's National Computer Virus Emergency Response Center called the claims a "political farce" orchestrated by US officials in a new report. It also reportedly cited more than 50 cybersecurity experts who agreed with the agency that there's insufficient evidence linking Volt Typhoon to the Chinese government.

Moreover, the Chinese agency said that it's the US that uses "cyber warfare forces" to penetrate networks and conduct intelligence gathering. It even accused the US of using a tool called "Marble" that can insert code strings in the Chinese and Russian languages to frame China and Russia for its activities.

Microsoft and the National Security Agency (NSA) first reported on Volt Typhoon back in May 2023. They said that the group had installed surveillance malware in "critical" systems on the island of Guam and other parts of the US, and had maintained access to those systems for at least five years. In February this year, the Cybersecurity and Infrastructure Security Agency (CISA), the NSA and the FBI issued an advisory warning critical infrastructure organizations that state-sponsored cyber actors from China "are seeking to pre-position themselves on IT networks for disruptive or destructive cyberattacks."

The US agencies said Volt Typhoon had infiltrated the US Department of Energy and the US Environmental Protection Agency, as well as various government agencies in Australia, the UK, Canada and New Zealand. Volt Typhoon doesn't act the way other cyberattackers and espionage groups do: It hasn't used the malware it installed to attack any of its targets, at least not yet. Instead, the group is "pre-positioning" itself so that it can disrupt critical infrastructure functions whenever it chooses, which the US government believes means "in the event of potential geopolitical tensions and/or military conflicts" with the United States.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/china-calls-allegations-that-it-infiltrated-us-critical-infrastructure-a-political-farce-120023769.html?src=rss

Google strikes a deal with a nuclear startup to power its AI data centers

Google is turning to nuclear energy to help power its AI drive. On Monday, the company said it will partner with the startup Kairos Power to build seven small nuclear reactors in the US. The deal targets a total of 500 megawatts of power from the small modular reactors (SMRs): the first is expected to be up and running by 2030, with the remainder arriving through 2035.

It’s the first-ever corporate deal to buy nuclear power from SMRs. Small modular reactors are far smaller than existing reactors: at 500 megawatts across seven units, each would average roughly 70 megawatts, a fraction of the roughly 1,000 megawatts a typical full-size US reactor produces. Their components are built inside a factory rather than on-site, which can help lower construction costs compared to full-scale plants.

Kairos will need the US Nuclear Regulatory Commission to approve design and construction permits for the plans. The startup has already received approval for a demonstration reactor in Tennessee, with an online date targeted for 2027. The company already builds test units (without nuclear-fuel components) at a development facility in Albuquerque, NM, where it assesses components, systems and its supply chain.

The companies didn’t announce the financial details of the arrangement. Google says the deal’s structure will help to keep costs down and get the energy online sooner.

“By procuring electricity from multiple reactors — what experts call an ‘orderbook’ of reactors — we will help accelerate the repeated reactor deployments that are needed to lower costs and bring Kairos Power’s technology to market more quickly,” Michael Terrell, Google’s senior director for energy and climate, wrote in a blog post. “This is an important part of our approach to scale the benefits of advanced technologies to more people and communities, and builds on our previous efforts.”

The AI boom — and the enormous amount of data center power it requires — has led to several deals between Big Tech companies and the nuclear industry. In September, Microsoft forged an agreement with Constellation Energy to bring a unit of the Three Mile Island plant in Pennsylvania back online. In March, Amazon bought a nuclear-powered data center from Talen Energy.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/google-strikes-a-deal-with-a-nuclear-startup-to-power-its-ai-data-centers-201403750.html?src=rss

Kamala Harris’ Twitch account streamed Tim Walz rally alongside live WoW gameplay

In August, Kamala Harris' campaign launched a Twitch account in an effort to reach young people and some of the "hardest-to-reach voters" out there. It debuted with a stream of Harris' acceptance speech at the Democratic National Convention, which is perhaps what one could expect from an account owned by a presidential campaign. On the evening of October 9, though, the channel streamed live gameplay for the first time — along with a live feed of vice presidential nominee Tim Walz's speech in Arizona.

As Wired notes, Twitch creator Preheat kicked things off by playing World of Warcraft on the channel at 6:30PM ET. Preheat, who told Wired that they volunteered for the task because of Harris' platforms, also provided commentary about the game and encouraged viewers to vote. "GOP is the opposite of POG," they said at one point during the stream. A spokesperson told the publication that, by streaming the rally alongside WoW gameplay, the campaign hopes to reach the young male voters who make up most of Twitch's userbase.

Harris isn't the first politician to use Twitch to reach voters. Joe Biden's administration streamed his inauguration on the website, while Donald Trump's camp had been streaming rallies and speeches on the platform since 2019. The former president's account was suspended following the January 6 US Capitol riot, but it was reinstated in July this year. Alexandria Ocasio-Cortez is on Twitch as well, and has streamed herself a few times playing Among Us.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/kamala-harris-twitch-account-streamed-tim-waltz-rally-alongside-live-wow-gameplay-021612716.html?src=rss

Meta AI will launch in six more countries today, including the UK

Meta AI is beginning a big international rollout. The AI assistant will arrive today in Brazil, Bolivia, Guatemala, Paraguay, the Philippines and the UK. It is also slated to debut in Algeria, Egypt, Indonesia, Iraq, Jordan, Libya, Malaysia, Morocco, Saudi Arabia, Sudan, Thailand, Tunisia, the United Arab Emirates, Vietnam and Yemen over the coming weeks, although the company did not offer specific dates for those countries.

This expansion is also adding new language support to Meta AI. Starting today, it is getting support for Tagalog, while Arabic, Indonesian, Thai and Vietnamese will join the assistant "soon." Customers can use the Meta AI assistant on the web or within the company's social media apps: Facebook, Instagram, WhatsApp and Messenger. 

The final element of today's announcement is that Meta AI is launching on Ray-Ban Meta smart glasses in the UK and Australia. The UK launch only includes voice support for now; Meta did not provide a timeline for when UK customers might get the glasses' full multimodal capabilities.

The EU is notably absent from this expansion. Meta said this summer that it would not introduce multimodal AI services in the EU due to concerns over regulation in the bloc. CEO Mark Zuckerberg has publicly criticized how European regulators are handling the proliferation of artificial intelligence.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-ai-will-launch-in-six-more-countries-today-including-the-uk-150057934.html?src=rss

Viewers don’t trust candidates who use generative AI in political ads, study finds

Artificial intelligence is expected to have an impact on the upcoming US election in November. States have been trying to protect against misinformation by passing laws that require political advertisements to disclose when they use generative AI; twenty states now have such rules on the books. According to new research, voters react negatively when they see those disclaimers, which seems like a fair response: voters don't appreciate learning that a politician used generative AI in a pitch aimed at them. The study was conducted by New York University’s Center on Technology Policy and first reported by The Washington Post.

For the study, a thousand participants watched political ads from fictional candidates. Some of the ads were accompanied by a disclaimer that AI was used in the creation of the spot, while others had no disclaimer. The presence of a disclaimer was linked to viewers rating the promoted candidate as less trustworthy and less appealing, and respondents said they would be more likely to flag or report ads on social media when those ads contained disclaimers. In attack ads, participants were more likely to express negative opinions about the candidate who sponsored the spot than about the candidate being attacked. The researchers also found that an AI disclaimer led to worse or unchanged opinions regardless of the fictional candidate's political party.

The researchers tested two different disclaimers, inspired by two different states' requirements for AI disclosure in political ads. The text tied to Michigan's law reads: "This video has been manipulated by technical means and depicts speech or conduct that did not occur." The other disclaimer is based on Florida's law and says: "This video was created in whole or in part with the use of generative artificial intelligence." Michigan's narrower approach is the more common one among state laws, but study participants said they preferred the broader disclaimer covering any type of AI use.

While these disclaimers can play a part in transparency about the presence of AI in an ad, they aren't a perfect failsafe. Some 37 percent of respondents said they didn't recall seeing any language about AI after viewing the ads.

This article originally appeared on Engadget at https://www.engadget.com/ai/viewers-dont-trust-candidates-who-use-generative-ai-in-political-ads-study-finds-194532117.html?src=rss

Judge blocks new California law barring distribution of election-related AI deepfakes

One of California's new AI laws, which aims to prevent election-related AI deepfakes from spreading online, has been blocked a month before the US presidential election. As TechCrunch and Reason report, Judge John Mendez has issued a preliminary injunction preventing the state's attorney general from enforcing AB 2839. California Governor Gavin Newsom signed it into law, along with other bills focusing on AI, back in mid-September. After doing so, he tweeted a screenshot of a story about X owner Elon Musk sharing an AI deepfake video of Vice President Kamala Harris without labeling it as fake. "I just signed a bill to make this illegal in the state of California," he wrote.

AB 2839 holds accountable anybody who distributes AI deepfakes of political candidates within 120 days of an election in the state. Anybody who sees such a deepfake can file a civil action against the person who distributed it, and a judge can order the poster to take down the manipulated media or face monetary penalties. After Newsom signed the bill into law, the video's original poster, X user Christopher Kohls, filed a lawsuit to block it, arguing that the video was satire and therefore protected by the First Amendment.

Judge Mendez agreed with Kohls, noting in his decision [PDF] that AB 2839 does not pass strict scrutiny and is not narrowly tailored. He also said that the law's disclosure requirements are unduly burdensome. "Almost any digitally altered content, when left up to an arbitrary individual on the internet, could be considered harmful," he wrote. The judge likened YouTube videos, Facebook posts and X tweets to newspaper advertisements and political cartoons, and asserted that the First Amendment "protects an individual’s right to speak regardless of the new medium these critiques may take." Since this is merely a preliminary injunction, the law could still be unblocked in the future, though perhaps not in time for this year's presidential election.

This article originally appeared on Engadget at https://www.engadget.com/ai/judge-blocks-new-california-law-barring-distribution-of-election-related-ai-deepfakes-133043341.html?src=rss

Women of color running for Congress are attacked disproportionately on X, report finds

Women of color running for Congress in 2024 have faced a disproportionate number of attacks on X compared with other candidates, according to a new report from the nonprofit Center for Democracy and Technology (CDT) and the University of Pittsburgh.

The report sought to “compare the levels of offensive speech and hate speech that different groups of Congressional candidates are targeted with based on race and gender, with a particular emphasis on women of color.” To do this, the report’s authors analyzed 800,000 tweets covering a three-month period between May 20 and August 23 of this year. The dataset comprised every post that mentioned a congressional candidate who has an account on X.

The report’s authors found that more than 20 percent of posts directed at Black and Asian women candidates “contained offensive language about the candidate.” It also found that Black women in particular were targeted with hate speech more often compared with other candidates.

“On average, less than 1% of all tweets that mentioned a candidate contained hate speech,” the report says. “However, we found that African-American women candidates were more likely than any other candidate to be subject to this type of post (4%).” That roughly lines up with X’s recent transparency report — the company’s first since Elon Musk took over the company — which said that rule-breaking content accounts for less than 1 percent of all posts on its platform.
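As a rough illustration of the per-group rate comparison behind those figures, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in: the CDT's actual pipeline classified roughly 800,000 real posts using its own hate speech and offensive speech models, and the records and group labels below are invented for the example.

```python
# Minimal sketch of the per-group rate comparison described above.
# All records and labels are hypothetical stand-ins; the real study
# classified ~800,000 posts with trained speech classifiers.
from collections import defaultdict

# Each record: (group of the mentioned candidate, whether a classifier
# flagged the post as hate speech).
labeled_posts = [
    ("black_women_candidates", True),
    ("black_women_candidates", False),
    ("asian_women_candidates", False),
    ("all_other_candidates", False),
    ("all_other_candidates", False),
]

counts = defaultdict(lambda: {"flagged": 0, "total": 0})
for group, flagged in labeled_posts:
    counts[group]["flagged"] += int(flagged)
    counts[group]["total"] += 1

# The interesting quantity is the rate per group, not the raw count,
# since some candidates are simply mentioned more often than others.
for group, c in counts.items():
    print(f"{group}: {c['flagged'] / c['total']:.1%} of mentions flagged")
```

Comparing rates rather than raw counts is what allows the report to say that posts mentioning Black women candidates were flagged at roughly four times the overall average (4 percent versus less than 1 percent).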

In a statement, an X spokesperson said the company had suspended more than 1 million accounts and removed more than 2 million posts in the first half of 2024 for breaking the company's rules. "While we encourage people to express themselves freely on X, abuse, harassment, and hateful conduct have no place on our platform and violate the X Rules," the spokesperson said. 

Notably, the CDT’s report analyzed both hate speech — which ostensibly violates X’s policies — and “offensive speech,” which the report defined as “words or phrases that demean, threaten, insult, or ridicule a candidate.” While the latter category may not be against X’s rules, the report notes that the volume of such attacks could still deter women of color from running for office. It recommends that X and other platforms take “specific measures” to counteract such effects.

“This should include clear policies that prohibit attacks against someone based on race or gender, greater transparency into how their systems address these types of attacks, better reporting tools and means for accountability, regular risk assessments with an emphasis on race and gender, and privacy preserving mechanisms for independent researchers to conduct studies using their data. The consequences of the status-quo where women of color candidates are targeted with significant attacks online at much higher rates than other candidates creates an immense barrier to creating a truly inclusive democracy.”

Update: October 2, 2024, 12:13 PM ET: This post was updated to include a statement from an X spokesperson. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/women-of-color-running-for-congress-are-attacked-disproportionately-on-x-report-finds-043206066.html?src=rss

California Gov. Newsom vetoes bill SB 1047 that aims to prevent AI disasters

California Gov. Gavin Newsom has vetoed SB 1047, a bill that aimed to prevent bad actors from using AI to cause "critical harm" to humans. The California State Assembly passed the legislation by a margin of 41-9 on August 28, but several organizations, including the Chamber of Commerce, had urged Newsom to veto the bill. In his veto message on September 29, Newsom said the bill is "well-intentioned" but "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it."

SB 1047 would have required developers of large AI models to adopt safety protocols designed to stop catastrophic uses of their technology, including preventive measures such as testing and outside risk assessment, as well as an "emergency stop" capability that could shut the model down entirely. A first violation would have cost a minimum of $10 million, with subsequent infractions costing $30 million. However, the bill was revised to eliminate the state attorney general's ability to sue AI companies over negligent practices if a catastrophic event had not occurred. Companies would instead be subject to injunctive relief, and could only be sued if their model caused critical harm.

The law would have applied to AI models that cost at least $100 million to train and use 10^26 FLOPS (floating-point operations) of computing power during training. It also would have covered derivative projects in instances where a third party had invested $10 million or more in developing or modifying the original model, and any company doing business in California would have been subject to the rules if it met the other requirements (a rough sketch of how those thresholds combine appears after the excerpt below). Addressing the bill's focus on large-scale systems, Newsom said, "I do not believe this is the best approach to protecting the public from real threats posed by the technology." The veto message adds:

By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.
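To make those coverage thresholds concrete, here is a minimal sketch of how the bill's tests could be expressed in code. It's an illustration only: the function and variable names are hypothetical, and the only figures taken from the bill as described above are the $100 million training cost, the 10^26 FLOPS of training compute and the $10 million third-party investment bar for derivatives.

```python
# Hypothetical sketch of SB 1047's coverage tests, using the thresholds
# described in the article above. All names here are illustrative; they
# are not drawn from the bill's actual text.

TRAINING_COST_THRESHOLD_USD = 100_000_000         # at least $100M to train
TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26           # at least 10^26 FLOPS
DERIVATIVE_INVESTMENT_THRESHOLD_USD = 10_000_000  # $10M+ to develop/modify

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """A model is covered when it crosses both the cost and compute bars."""
    return (training_cost_usd >= TRAINING_COST_THRESHOLD_USD
            and training_flops >= TRAINING_COMPUTE_THRESHOLD_FLOPS)

def is_covered_derivative(third_party_investment_usd: float) -> bool:
    """A derivative of a covered model is covered once a third party has
    invested at least $10 million in developing or modifying it."""
    return third_party_investment_usd >= DERIVATIVE_INVESTMENT_THRESHOLD_USD

# A frontier-scale training run would have been covered...
print(is_covered_model(150_000_000, 3e26))    # True
# ...while a modest third-party fine-tune would not have been.
print(is_covered_derivative(2_000_000))       # False
```

Note that this reading, in which both the cost and compute bars must be met and derivatives face a separate investment bar, is one plausible interpretation of the summary above; the bill's actual statutory definitions are more detailed.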

The earlier version of SB 1047 would have created a new department called the Frontier Model Division to oversee and enforce the rules. Instead, the bill was altered ahead of a committee vote to place governance in the hands of a Board of Frontier Models within the Government Operations Agency. Its nine members would have been appointed by the state's governor and legislature.

The bill faced a complicated path to the final vote. SB 1047 was authored by California State Sen. Scott Wiener, who told TechCrunch: "We have a history with technology of waiting for harms to happen, and then wringing our hands. Let’s not wait for something bad to happen. Let’s just get out ahead of it." Notable AI researchers Geoffrey Hinton and Yoshua Bengio backed the legislation, as did the Center for AI Safety, which has been raising the alarm about AI's risks over the past year.

"Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public," Newsom said in the veto message. The statement continues:

California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

SB 1047 drew heavy-hitting opposition from across the tech space. Researcher Fei-Fei Li critiqued the bill, as did Meta Chief AI Scientist Yann LeCun, arguing that it would limit the potential to explore new uses of AI. The trade group representing tech giants such as Amazon, Apple and Google said SB 1047 would limit new developments in the state's tech sector. Venture capital firm Andreessen Horowitz and several startups also questioned whether the bill placed unnecessary financial burdens on AI innovators. Anthropic and other opponents of the original bill pushed for amendments, which were adopted in the version of SB 1047 that passed California's Appropriations Committee on August 15.

This article originally appeared on Engadget at https://www.engadget.com/ai/california-gov-newsom-vetoes-bill-sb-1047-that-aims-to-prevent-ai-disasters-220826827.html?src=rss