Starlink terminals are reportedly being used by Russian forces in Ukraine

Starlink satellite internet terminals are being widely used by Russian forces in Ukraine, according to a report by The Wall Street Journal. The publication indicates that the terminals, which were developed by Elon Musk’s SpaceX, are being used to coordinate attacks in eastern Ukraine and Crimea. Additionally, Starlink terminals can be used on the battlefield to control drones and other forms of military tech.

The terminals are reaching Russian forces via a complex network of black market sellers. This is despite the fact that Starlink devices are banned in the country. WSJ followed some of these sellers as they smuggled the terminals into Russia and even made sure deliveries got to the front lines. Reporting also indicates that some of the terminals were originally purchased on eBay.

This black market for Starlink terminals allegedly stretches beyond occupied Ukraine and into Sudan. Many of these Sudanese dealers are reselling units to the Rapid Support Forces, a paramilitary group that’s been accused of committing atrocities like ethnically motivated killings, targeted abuse of human rights activists, sexual violence and the burning of entire communities. WSJ notes that hundreds of terminals have found their way to members of the Rapid Support Forces.

Back in February, Elon Musk addressed earlier reports that Starlink terminals were being used by Russian soldiers in the war against Ukraine. “To the best of our knowledge, no Starlinks have been sold directly or indirectly to Russia,” he wrote on X. The Kremlin also denied the reports, according to Reuters. Despite these proclamations, WSJ says that “thousands of the white pizza-box-sized devices” have landed with “some American adversaries and accused war criminals.”

After those February reports, House Democrats demanded that Musk take action, according to Business Insider, noting that Russian military use of the tech is “potentially in violation of US sanctions and export controls.” Starlink has the ability to disable individual terminals, and each terminal includes geofencing technology that is supposed to prevent use in unauthorized countries, though it's unclear whether black market sellers can get around these hurdles.
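To illustrate the kind of enforcement the article describes, here is a minimal, hypothetical sketch of a country-allowlist geofence check. SpaceX's actual implementation is not public; every name and country code below is invented for illustration.

```python
# Hypothetical geofencing sketch: deny service to terminals reporting in
# from countries outside an allowlist. SpaceX's real logic is not public.
ALLOWED_COUNTRIES = {"US", "UA", "DE"}  # example allowlist, not real data

def service_permitted(terminal_id: str, reported_country: str) -> bool:
    """Return True only if the terminal's reported country is authorized."""
    return reported_country in ALLOWED_COUNTRIES

print(service_permitted("TERM-001", "UA"))  # True
print(service_permitted("TERM-002", "RU"))  # False
```

A real system would also have to contend with spoofed locations and resold hardware, which is presumably how black market units stay online.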

This isn't the first time Musk has drawn scrutiny over Starlink’s role in the war. He took steps to limit Ukraine’s use of the technology on the grounds that the terminals were never intended for use in military conflicts. According to his biography, Musk also blocked Ukraine’s use of Starlink near Crimea early in the conflict, ending the country’s plans for an attack on Russia’s naval fleet. Mykhailo Podolyak, an advisor to Ukrainian President Volodymyr Zelensky, wrote on X that “civilians, children are being killed” as a result of Musk’s decision. He further dinged the billionaire by writing, “this is the price of a cocktail of ignorance and a big ego.”

However, Musk fired back and said that Starlink was never active in the area near Crimea, so there was nothing to disable. He also said that the policy in question was decided upon before Ukraine’s planned attack on the naval fleet. Ukraine did lose access to more than 1,300 Starlink terminals in the early days of the conflict due to a payment issue. SpaceX reportedly charged Ukraine $2,500 per month to keep each unit operational, which ballooned to $3.25 million per month. This pricing aligns with the company’s high cost premium plan. It’s worth noting that SpaceX has donated more than 3,600 terminals to Ukraine.
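The pricing figures above check out arithmetically: 1,300 terminals at $2,500 per month each comes to $3.25 million per month. A quick back-of-the-envelope verification:

```python
# Verify the article's figures: 1,300 terminals at $2,500/month each.
terminals = 1_300
monthly_fee_usd = 2_500
total_monthly_cost = terminals * monthly_fee_usd
print(f"${total_monthly_cost:,}")  # $3,250,000
```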

SpaceX has yet to comment on the WSJ report regarding the black market proliferation of Starlink terminals. We’ll update this post when it does.

This article originally appeared on Engadget at https://www.engadget.com/starlink-terminals-are-reportedly-being-used-by-russian-forces-in-ukraine-154832503.html?src=rss

The FCC will vote to restore net neutrality later this month

The Federal Communications Commission (FCC) plans to vote to restore net neutrality later this month. With Democrats finally holding an FCC majority in the final year of President Biden’s first term, the agency can fulfill a 2021 executive order from the President and bring back the Obama-era rules that the Trump administration’s FCC gutted in 2017.

The FCC plans to hold the vote during a meeting on April 25. Net neutrality treats broadband services as an essential resource under Title II of the Communications Act, giving the FCC greater authority to regulate the industry. It lets the agency prevent ISPs from anti-consumer behavior like unfair pricing, blocking or throttling content and providing pay-to-play “fast lanes” to internet access.

Democrats had to wait three years to enact Biden’s 2021 executive order to reinstate the net neutrality rules passed in 2015 by President Obama’s FCC. The confirmation process of Biden FCC nominee Gigi Sohn for telecommunications regulator played no small part. She withdrew her nomination in March 2023 following what she called “unrelenting, dishonest and cruel attacks.”

Republicans (and Democratic Senator Joe Manchin) opposed her confirmation through a lengthy 16-month process. During that period, telecom lobbying dollars flowed freely and Republicans cited past Sohn tweets critical of Fox News, along with vocal opposition from law enforcement, as justification for blocking the confirmation. Democrats finally regained an FCC majority with the swearing-in of Anna Gomez in late September, near the end of Biden’s third year in office.

“The pandemic proved once and for all that broadband is essential,” FCC Chairwoman Rosenworcel wrote in a press release. “After the prior administration abdicated authority over broadband services, the FCC has been handcuffed from acting to fully secure broadband networks, protect consumer data, and ensure the internet remains fast, open, and fair. A return to the FCC’s overwhelmingly popular and court-approved standard of net neutrality will allow the agency to serve once again as a strong consumer advocate of an open internet.”

This article originally appeared on Engadget at https://www.engadget.com/the-fcc-will-vote-to-restore-net-neutrality-later-this-month-161813609.html?src=rss

California introduces ‘right to disconnect’ bill that would allow employees to possibly relax

Burnout, quiet quitting, strikes — the news (and likely your schedule) is filled with markers that workers are overwhelmed and too much is expected of them. There's little regulation in the United States to prevent employers from forcing workers to be at their desks or on call at all hours, but that might soon change. California State Assemblyman Matt Haney has introduced AB 2751, a "right to disconnect" proposition, The San Francisco Standard reports.

The bill is in its early stages but, if passed, would make every California employer lay out exactly what a person's hours are and ensure they aren't required to respond to work-related communications while off the clock. Time periods in which a salaried employee might have to work longer hours would need to be laid out in their contract. Exceptions would exist for emergencies. 

The Department of Labor would monitor adherence and fine companies a minimum of $100 for wrongdoing — whether that's forcing employees to be on Zoom, in their inbox, answering texts or monitoring Slack when they're not getting paid to do so. "I do think it’s fitting that California, which has created many of these technologies, is also the state that introduces how we make it sustainable and update our protections for the times we live in and the world we’ve created," Haney told The Standard.

It's not clear how much support exists for AB 2751, but because California is a tech hub and a major economic center, the bill has the potential to create tremendous impact for workers in the state and to pressure other states to follow suit. The bill follows similar legislation in other countries. In 2017, France became the first nation to implement a "right to disconnect" policy, a model that has since been copied in Argentina, Ireland, Mexico and Spain.

This article originally appeared on Engadget at https://www.engadget.com/california-introduces-right-to-disconnect-bill-that-would-allow-employees-to-possibly-relax-151705072.html?src=rss

NYC’s business chatbot is reportedly doling out ‘dangerously inaccurate’ information

An AI chatbot released by the New York City government to help business owners access pertinent information has been spouting falsehoods, at times even misinforming users about actions that are against the law, according to a report from The Markup. The report, which was co-published with the local nonprofit newsrooms Documented and The City, includes numerous examples of inaccuracies in the chatbot’s responses to questions relating to housing policies, workers’ rights and other topics.

Mayor Adams’ administration introduced the chatbot in October as an addition to the MyCity portal, which launched in March 2023 as “a one-stop shop for city services and benefits.” The chatbot, powered by Microsoft’s Azure AI, is aimed at current and aspiring business owners, and was billed as a source of “actionable and trusted information” that comes directly from the city government’s sites. But it is a pilot program, and a disclaimer on the website notes that it “may occasionally produce incorrect, harmful or biased content.”

In The Markup’s tests, the chatbot repeatedly provided incorrect information. In response to the question, “Can I make my store cashless?”, for example, it replied, “Yes, you can make your store cashless in New York City” — despite the fact that New York City banned cashless stores in 2020. The report shows the chatbot also responded incorrectly about whether employers can take their workers’ tips, whether landlords have to accept Section 8 vouchers or tenants on rental assistance, and whether businesses have to inform staff of scheduling changes. A housing policy expert who spoke to The Markup called the chatbot “dangerously inaccurate” at its worst.

The city has indicated that the chatbot is still a work in progress. In a statement to The Markup, Leslie Brown, a spokesperson for the NYC Office of Technology and Innovation, said the chatbot “has already provided thousands of people with timely, accurate answers,” but added, “We will continue to focus on upgrading this tool so that we can better support small businesses across the city.” 

This article originally appeared on Engadget at https://www.engadget.com/nycs-business-chatbot-is-reportedly-doling-out-dangerously-inaccurate-information-203926922.html?src=rss

Microsoft Copilot has reportedly been blocked on all Congress-owned devices

US Congressional staff members can no longer use Microsoft's Copilot on their government-issued devices, according to Axios. The publication said it obtained a memo from House Chief Administrative Officer Catherine Szpindor telling Congress personnel that the AI chatbot is now officially prohibited. The Office of Cybersecurity has deemed Copilot a risk "due to the threat of leaking House data to non-House approved cloud services." While there's nothing stopping staffers from using Copilot on their own phones and laptops, it will now be blocked on all Windows devices owned by Congress. 

Almost a year ago, Congress also set a strict limit on the use of ChatGPT, which is powered by OpenAI's large language models, just like Copilot. It banned staffers from using the chatbot's free version on House computers, but it allowed them to continue using the paid (ChatGPT Plus) version for research and evaluation due to its tighter privacy controls. More recently, the White House revealed rules federal agencies have to follow when it comes to generative AI, which would ensure that any tools they use "do not endanger the rights and safety" of Americans. 

Microsoft told Axios that it recognizes government users have higher security requirements. Last year, it announced a roadmap of tools and services meant for government use, including an Azure OpenAI service for classified workloads and a new version of Microsoft 365's Copilot assistant. The company said that all those tools and services will feature higher levels of security, making them more suitable for handling sensitive data. Szpindor's office, according to Axios, will evaluate the government version of Copilot when it becomes available before deciding whether it can be used on House devices. 

This article originally appeared on Engadget at https://www.engadget.com/microsoft-copilot-has-reportedly-been-blocked-on-all-congress-owned-devices-034946166.html?src=rss

The White House lays out extensive AI guidelines for the federal government

It's been five months since President Joe Biden signed an executive order (EO) to address the rapid advancements in artificial intelligence. The White House is today taking another step forward in implementing the EO with a policy that aims to regulate the federal government's use of AI. Safeguards that the agencies must have in place include, among other things, ways to mitigate the risk of algorithmic bias.

"I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits," Vice President Kamala Harris told reporters on a press call.

Harris announced three binding requirements under a new Office of Management and Budget (OMB) policy. First, agencies will need to ensure that any AI tools they use "do not endanger the rights and safety of the American people." They have until December 1 to make sure they have in place "concrete safeguards" to make sure that AI systems they're employing don't impact Americans' safety or rights. Otherwise, the agency will have to stop using an AI product unless its leaders can justify that scrapping the system would have an "unacceptable" impact on critical operations.

Impact on Americans' rights and safety

Per the policy, an AI system is deemed to impact safety if it "is used or expected to be used, in real-world conditions, to control or significantly influence the outcomes of" certain activities and decisions. Those include maintaining election integrity and voting infrastructure; controlling critical safety functions of infrastructure like water systems, emergency services and electrical grids; autonomous vehicles; and operating the physical movements of robots in "a workplace, school, housing, transportation, medical or law enforcement setting."

Unless they have appropriate safeguards in place or can otherwise justify their use, agencies will also have to ditch AI systems that infringe on the rights of Americans. Purposes that the policy presumes to impact rights include predictive policing; social media monitoring for law enforcement; detecting plagiarism in schools; blocking or limiting protected speech; detecting or measuring human emotions and thoughts; pre-employment screening; and "replicating a person’s likeness or voice without express consent."

When it comes to generative AI, the policy stipulates that agencies should assess potential benefits. They all also need to "establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk."

Transparency requirements

The second requirement will force agencies to be transparent about the AI systems they're using. "Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed," Harris said. 

As part of this effort, agencies will need to publish government-owned AI code, models and data, as long as doing so won't harm the public or government operations. If an agency can't disclose specific AI use cases for sensitivity reasons, it will still have to report metrics.

Last but not least, federal agencies will need to have internal oversight of their AI use. That includes each department appointing a chief AI officer to oversee all of an agency's use of AI. "This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use," Harris noted. Many agencies will also need to have AI governance boards in place by May 27.

The vice president added that prominent figures from the public and private sectors (including civil rights leaders and computer scientists) helped shape the policy along with business leaders and legal scholars.

The OMB suggests that, by adopting the safeguards, the Transportation Security Administration may have to let airline travelers opt out of facial recognition scans without losing their place in line or facing a delay. It also suggests that there should be human oversight over things like AI fraud detection and diagnostics decisions in the federal healthcare system.

As you might imagine, government agencies are already using AI systems in a variety of ways. The National Oceanic and Atmospheric Administration is working on artificial intelligence models to help it more accurately forecast extreme weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas to improve travel time.

"AI presents not only risk, but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity," OMB Director Shalanda Young told reporters. "When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services to improve accuracy and expand access to essential public services."

This policy is the latest in a string of efforts to regulate the fast-evolving realm of AI. While the European Union has passed a sweeping set of rules for AI use in the bloc, and there are federal bills in the pipeline, efforts to regulate AI in the US have taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka the Elvis Act — seriously) is an attempt to protect musicians from deepfakes, i.e. having their voices cloned without permission.

This article originally appeared on Engadget at https://www.engadget.com/the-white-house-lays-out-extensive-ai-guidelines-for-the-federal-government-090058684.html?src=rss

China bans Intel and AMD processors in government computers

China has introduced guidelines that bar the use of US processors from AMD and Intel in government computers and servers, The Financial Times has reported. The new rules also block Microsoft Windows and foreign database products in favor of domestic solutions, marking the latest move in a long-running tech trade war between the two countries.

Government agencies must now use "safe and reliable" domestic replacements for AMD and Intel chips. The list includes 18 approved processors, including chips from Huawei and the state-backed company Phytium — both of which are banned in the US. 

The new rules — introduced in December and quietly implemented recently — could have a significant impact on Intel and AMD. China accounted for 27 percent of Intel's $54 billion in sales last year and 15 percent of AMD's revenue of $23 billion, according to the FT. It's not clear how many chips are used in government versus the private sector, however. 
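A rough sense of the revenue at stake follows directly from the FT figures cited above. This is a back-of-the-envelope estimate only, since it is unclear how much of that revenue comes from government purchases:

```python
# Rough China revenue exposure implied by the FT figures:
# 27% of Intel's $54B in sales, 15% of AMD's $23B in revenue.
intel_china_usd = 0.27 * 54e9  # ≈ $14.58 billion
amd_china_usd = 0.15 * 23e9    # ≈ $3.45 billion
print(f"Intel: ${intel_china_usd / 1e9:.2f}B, AMD: ${amd_china_usd / 1e9:.2f}B")
```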

The moves are China's most aggressive yet to restrict the use of US-built technology. Last year, Beijing prohibited domestic firms from using Micron chips in critical infrastructure. Meanwhile, the US has banned a wide range of Chinese companies ranging from chip manufacturers to aerospace firms. The Biden administration has also blocked US companies like NVIDIA from selling AI and other chips to China. 

The US, Japan and the Netherlands have dominated the manufacturing of cutting-edge processors, and those nations recently agreed to tighten export controls on lithography machines from ASML, Nikon and Tokyo Electron. However, Chinese companies, including Baidu, Huawei, Xiaomi and Oppo, have already started designing their own semiconductors to prepare for a future wherein they could no longer import chips from the US and other countries.

This article originally appeared on Engadget at https://www.engadget.com/china-bans-intel-and-amd-processors-in-government-computers-065859238.html?src=rss