The FCC will vote to restore net neutrality later this month

The Federal Communications Commission (FCC) plans to vote to restore net neutrality later this month. With Democrats finally holding an FCC majority in the final year of President Biden’s first term, the agency can fulfill a 2021 executive order from the President and bring back the Obama-era rules that the Trump administration’s FCC gutted in 2017.

The FCC plans to hold the vote during a meeting on April 25. Net neutrality treats broadband service as an essential resource under Title II of the Communications Act, giving the FCC greater authority to regulate the industry. It lets the agency prevent ISPs from engaging in anti-consumer behavior like unfair pricing, blocking or throttling content, and providing pay-to-play “fast lanes” to internet access.

Democrats had to wait three years to act on Biden’s 2021 executive order and reinstate the net neutrality rules passed in 2015 by President Obama’s FCC. The stalled confirmation of Biden FCC nominee Gigi Sohn played no small part in the delay. She withdrew her nomination in March 2023 following what she called “unrelenting, dishonest and cruel attacks.”

Republicans (and Democratic Senator Joe Manchin) opposed her confirmation through a lengthy 16-month process. During that period, telecom lobbying dollars flowed freely and Republicans cited past Sohn tweets critical of Fox News, along with vocal opposition from law enforcement, as justification for blocking the confirmation. Democrats finally regained an FCC majority with the swearing-in of Anna Gomez in late September, near the end of Biden’s third year in office.

“The pandemic proved once and for all that broadband is essential,” FCC Chairwoman Rosenworcel wrote in a press release. “After the prior administration abdicated authority over broadband services, the FCC has been handcuffed from acting to fully secure broadband networks, protect consumer data, and ensure the internet remains fast, open, and fair. A return to the FCC’s overwhelmingly popular and court-approved standard of net neutrality will allow the agency to serve once again as a strong consumer advocate of an open internet.”

This article originally appeared on Engadget at https://www.engadget.com/the-fcc-will-vote-to-restore-net-neutrality-later-this-month-161813609.html?src=rss

California introduces ‘right to disconnect’ bill that would allow employees to possibly relax

Burnout, quiet quitting, strikes — the news (and likely your schedule) is filled with markers that workers are overwhelmed and too much is expected of them. There's little regulation in the United States to prevent employers from forcing workers to be at their desks or on call at all hours, but that might soon change. California State Assemblyman Matt Haney has introduced AB 2751, a "right to disconnect" proposal, The San Francisco Standard reports.

The bill is in its early stages but, if passed, would make every California employer lay out exactly what a person's hours are and ensure they aren't required to respond to work-related communications while off the clock. Time periods in which a salaried employee might have to work longer hours would need to be laid out in their contract. Exceptions would exist for emergencies. 

The Department of Labor would monitor adherence and fine companies a minimum of $100 for wrongdoing — whether that's requiring employees to join Zoom calls, answer emails or texts, or monitor Slack when they're not getting paid to do so. "I do think it’s fitting that California, which has created many of these technologies, is also the state that introduces how we make it sustainable and update our protections for the times we live in and the world we’ve created," Haney told The Standard.

It's not clear how much support exists for AB 2751, but given California's status as a tech hub and a major economic center, the bill has the potential to create a tremendous impact for workers in the state and to pressure other states to follow suit. The bill follows similar legislation in other countries: in 2017, France became the first nation to implement a "right to disconnect" policy, a model since adopted by Argentina, Ireland, Mexico and Spain.

This article originally appeared on Engadget at https://www.engadget.com/california-introduces-right-to-disconnect-bill-that-would-allow-employees-to-possibly-relax-151705072.html?src=rss

NYC’s business chatbot is reportedly doling out ‘dangerously inaccurate’ information

An AI chatbot released by the New York City government to help business owners access pertinent information has been spouting falsehoods, at times even misinforming users about actions that are against the law, according to a report from The Markup. The report, which was co-published with the local nonprofit newsrooms Documented and The City, includes numerous examples of inaccuracies in the chatbot’s responses to questions relating to housing policies, workers’ rights and other topics.

Mayor Adams’ administration introduced the chatbot in October as an addition to the MyCity portal, which launched in March 2023 as “a one-stop shop for city services and benefits.” The chatbot, powered by Microsoft’s Azure AI, is aimed at current and aspiring business owners, and was billed as a source of “actionable and trusted information” that comes directly from the city government’s sites. But it is a pilot program, and a disclaimer on the website notes that it “may occasionally produce incorrect, harmful or biased content.”

In The Markup’s tests, the chatbot repeatedly provided incorrect information. In response to the question, “Can I make my store cashless?”, for example, it replied, “Yes, you can make your store cashless in New York City” — despite the fact that New York City banned cashless stores in 2020. The report shows the chatbot also responded incorrectly about whether employers can take their workers’ tips, whether landlords have to accept Section 8 vouchers or tenants on rental assistance, and whether businesses have to inform staff of scheduling changes. A housing policy expert who spoke to The Markup called the chatbot “dangerously inaccurate” at its worst.

The city has indicated that the chatbot is still a work in progress. In a statement to The Markup, Leslie Brown, a spokesperson for the NYC Office of Technology and Innovation, said the chatbot “has already provided thousands of people with timely, accurate answers,” but added, “We will continue to focus on upgrading this tool so that we can better support small businesses across the city.” 

This article originally appeared on Engadget at https://www.engadget.com/nycs-business-chatbot-is-reportedly-doling-out-dangerously-inaccurate-information-203926922.html?src=rss

Microsoft Copilot has reportedly been blocked on all Congress-owned devices

US Congressional staff members can no longer use Microsoft's Copilot on their government-issued devices, according to Axios. The publication said it obtained a memo from House Chief Administrative Officer Catherine Szpindor telling Congress personnel that the AI chatbot is now officially prohibited. Apparently, the Office of Cybersecurity has deemed Copilot to be a risk "due to the threat of leaking House data to non-House approved cloud services." While there's nothing stopping staffers from using Copilot on their own phones and laptops, it will now be blocked on all Windows devices owned by Congress.

Almost a year ago, Congress also set a strict limit on the use of ChatGPT, which is powered by OpenAI's large language models, just like Copilot. It banned staffers from using the chatbot's free version on House computers, but allowed them to continue using the paid version, ChatGPT Plus, for research and evaluation due to its tighter privacy controls. More recently, the White House revealed rules federal agencies have to follow for generative AI, which are meant to ensure that any tools they use "do not endanger the rights and safety" of Americans.

Microsoft told Axios that it recognizes government users' need for higher security requirements. Last year, it announced a roadmap of tools and services meant for government use, including an Azure OpenAI service for classified workloads and a new version of Microsoft 365's Copilot assistant. The company said all of those tools and services will feature higher levels of security, making them more suitable for handling sensitive data. Szpindor's office, according to Axios, will evaluate the government version of Copilot when it becomes available before deciding whether it can be used on House devices.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-copilot-has-reportedly-been-blocked-on-all-congress-owned-devices-034946166.html?src=rss

The White House lays out extensive AI guidelines for the federal government

It's been five months since President Joe Biden signed an executive order (EO) to address the rapid advancements in artificial intelligence. The White House is today taking another step forward in implementing the EO with a policy that aims to regulate the federal government's use of AI. Safeguards that the agencies must have in place include, among other things, ways to mitigate the risk of algorithmic bias.

"I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits," Vice President Kamala Harris told reporters on a press call.

Harris announced three binding requirements under a new Office of Management and Budget (OMB) policy. First, agencies will need to ensure that any AI tools they use "do not endanger the rights and safety of the American people." They have until December 1 to put "concrete safeguards" in place ensuring that the AI systems they're employing don't impact Americans' safety or rights. Otherwise, the agency will have to stop using an AI product unless its leaders can justify that scrapping the system would have an "unacceptable" impact on critical operations.

Impact on Americans' rights and safety

Per the policy, an AI system is deemed to impact safety if it "is used or expected to be used, in real-world conditions, to control or significantly influence the outcomes of" certain activities and decisions. Those include maintaining election integrity and voting infrastructure; controlling critical safety functions of infrastructure like water systems, emergency services and electrical grids; autonomous vehicles; and operating the physical movements of robots in "a workplace, school, housing, transportation, medical or law enforcement setting."

Unless they have appropriate safeguards in place or can otherwise justify their use, agencies will also have to ditch AI systems that infringe on the rights of Americans. Purposes that the policy presumes to impact rights include predictive policing; social media monitoring for law enforcement; detecting plagiarism in schools; blocking or limiting protected speech; detecting or measuring human emotions and thoughts; pre-employment screening; and "replicating a person’s likeness or voice without express consent."

When it comes to generative AI, the policy stipulates that agencies should assess its potential benefits. They also need to "establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk."

Transparency requirements

The second requirement will force agencies to be transparent about the AI systems they're using. "Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed," Harris said. 

As part of this effort, agencies will need to publish government-owned AI code, models and data, as long as doing so won't harm the public or government operations. If an agency can't disclose specific AI use cases for sensitivity reasons, it will still have to report metrics.

Last but not least, federal agencies will need to have internal oversight of their AI use. That includes each department appointing a chief AI officer to oversee all of an agency's use of AI. "This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use," Harris noted. Many agencies will also need to have AI governance boards in place by May 27.

The vice president added that prominent figures from the public and private sectors (including civil rights leaders and computer scientists) helped shape the policy along with business leaders and legal scholars.

The OMB suggests that, by adopting the safeguards, the Transportation Security Administration may have to let airline travelers opt out of facial recognition scans without losing their place in line or facing a delay. It also suggests that there should be human oversight over things like AI fraud detection and diagnostic decisions in the federal healthcare system.

As you might imagine, government agencies are already using AI systems in a variety of ways. The National Oceanic and Atmospheric Administration is working on artificial intelligence models to help it more accurately forecast extreme weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas to improve travel time.

"AI presents not only risk, but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity," OMB Director Shalanda Young told reporters. "When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services to improve accuracy and expand access to essential public services."

This policy is the latest in a string of efforts to regulate the fast-evolving realm of AI. While the European Union has passed a sweeping set of rules for AI use in the bloc, and there are federal bills in the pipeline, efforts to regulate AI in the US have taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka the Elvis Act — seriously) is an attempt to protect musicians from deepfakes, i.e., having their voices cloned without permission.

This article originally appeared on Engadget at https://www.engadget.com/the-white-house-lays-out-extensive-ai-guidelines-for-the-federal-government-090058684.html?src=rss

China bans Intel and AMD processors in government computers

China has introduced guidelines that bar the use of US processors from AMD and Intel in government computers and servers, The Financial Times has reported. The new rules also block Microsoft Windows and foreign database products in favor of domestic solutions, marking the latest move in a long-running tech trade war between the two countries.

Government agencies must now use "safe and reliable" domestic replacements for AMD and Intel chips. The list includes 18 approved processors, including chips from Huawei and the state-backed company Phytium — both of which are banned in the US. 

The new rules — introduced in December and quietly implemented recently — could have a significant impact on Intel and AMD. China accounted for 27 percent of Intel's $54 billion in sales last year and 15 percent of AMD's revenue of $23 billion, according to the FT. It's not clear how many chips are used in government versus the private sector, however. 

The moves are China's most aggressive yet to restrict the use of US-built technology. Last year, Beijing prohibited domestic firms from using Micron chips in critical infrastructure. Meanwhile, the US has banned a wide range of Chinese companies ranging from chip manufacturers to aerospace firms. The Biden administration has also blocked US companies like NVIDIA from selling AI and other chips to China. 

The US, Japan and the Netherlands have dominated the manufacturing of cutting-edge processors, and those nations recently agreed to tighten export controls on lithography machines from ASML, Nikon and Tokyo Electron. However, Chinese companies, including Baidu, Huawei, Xiaomi and Oppo, have already started designing their own semiconductors to prepare for a future in which they could no longer import chips from the US and other countries.

This article originally appeared on Engadget at https://www.engadget.com/china-bans-intel-and-amd-processors-in-government-computers-065859238.html?src=rss

Senators ask intelligence officials to declassify details about TikTok and ByteDance

As the Senate considers the bill that would force a sale or ban of TikTok, lawmakers have heard directly from intelligence officials about the alleged national security threat posed by the app. Now, two prominent senators are asking the office of the Director of National Intelligence to declassify and make public what they have shared.

“We are deeply troubled by the information and concerns raised by the intelligence community in recent classified briefings to Congress,” Democratic Senator Richard Blumenthal and Republican Senator Marsha Blackburn write. “It is critically important that the American people, especially TikTok users, understand the national security issues at stake.”

The exact nature of the intelligence community's concerns about the app has long been a source of debate. Lawmakers in the House received a similar briefing just ahead of their vote on the bill. But while the briefing seemed to bolster support for the measure, some members said they left unconvinced, with one lawmaker saying that “not a single thing that we heard … was unique to TikTok.”

According to Axios, some senators described their briefing as “shocking,” though the group isn’t exactly known for their particularly nuanced understanding of the tech industry. (Blumenthal, for example, once pressed Facebook executives on whether they would “commit to ending finsta.”) In its report, Axios says that one lawmaker “said they were told TikTok is able to spy on the microphone on users' devices, track keystrokes and determine what the users are doing on other apps.” That may sound alarming, but it’s also a description of the kinds of app permissions social media services have been requesting for more than a decade.

TikTok has long denied that its relationship with parent company ByteDance would enable Chinese government officials to interfere with its service or spy on Americans. And so far, there is no public evidence that TikTok has ever been used in this way. If US intelligence officials do have evidence that is more than hypothetical, it would be a major bombshell in the long-running debate surrounding the app.

This article originally appeared on Engadget at https://www.engadget.com/senators-ask-intelligence-officials-to-declassify-details-about-tiktok-and-bytedance-180655697.html?src=rss

The case against the TikTok ban bill

A year ago, I visited TikTok’s US headquarters to preview its new “transparency center,” a central piece of its multibillion-dollar effort to convince the US its meme factory isn’t a national security threat. That effort has failed. The company’s negotiations with the government stalled out and the company is now facing its most serious threat to a future in the United States yet.

Last Wednesday, the House of Representatives overwhelmingly approved a bill that, if passed into law, would force ByteDance to sell TikTok or face an outright ban in the US. That lawmakers view TikTok with suspicion is nothing new. Because TikTok’s parent company, ByteDance, is based in China, they believe the Chinese government could manipulate TikTok’s algorithms or access its users’ data via ByteDance employees. But what has been surprising about the Protecting Americans from Foreign Adversary Controlled Applications Act is that it managed to gather so much support from both sides of the aisle seemingly out of nowhere.

After a surprise introduction, the bipartisan bill cleared committee in two days with a unanimous 50-0 vote, and was approved by the full House in a 352-65 vote less than a week later. Of the dozens of bills attempting to regulate the tech industry in recent years, including at least two to ban TikTok, none has gained nearly as much momentum.

But the renewed support for banning or forcing a sale of TikTok doesn’t seem to be tied to any newly uncovered information about TikTok, ByteDance or the Chinese Communist Party. Instead, lawmakers have largely been rehashing the same concerns that have been raised about the app for years.

One issue often raised is data access. TikTok, like many of its social media peers, scoops up large amounts of data from its users. The practice has gotten the company into hot water in the past when many of those users were discovered to be minors. Many lawmakers cite its large cache of user data, which they claim could be obtained by Chinese government officials, as one of the most significant risks posed by TikTok.

“Our bipartisan legislation would protect American social media users by driving the divestment of foreign adversary-controlled apps to ensure that Americans are protected from the digital surveillance and influence operations of regimes that could weaponize their personal data against them,” Representative Raja Krishnamoorthi, one of the bill’s co-sponsors, said in a statement.

TikTok has repeatedly denied sharing any data with the Chinese government and says it would not comply if it were asked to do so. However, ByteDance has been caught mishandling TikTok user data in the past: in 2022, it fired four employees, including two based in China, for accessing the data of reporters who had written stories critical of the company. There’s no evidence those actions were directed by the Chinese government.

In fact, the Protecting Americans from Foreign Adversary Controlled Applications Act would do little to address the data access issue, experts say. Even if the app were banned or controlled by a different company, Americans’ personal information would remain readily available from the largely unregulated data broker industry.

Data brokers gain access to vast troves of Americans’ personal data via scores of apps, websites, credit card companies and other businesses. Currently, there are few restrictions on what data can be collected or who can buy it. Biden Administration officials have warned that China is already buying up this data, much of it more revealing than anything TikTok collects.

“The data that's been collected about you will almost certainly live longer than you will, and there's really nothing you can do to delete it or get rid of it,” Justin Cappos, an NYU computer science professor and member of the NYU Center for Cybersecurity, told Engadget. “If the US really wants to solve this, the way to do it isn't to blame a social media company in China and make them the face of the problem. It's really to pass the meaningful data privacy regulations and go after [data] collection and go after these data brokers.”

The House recently passed a bill that would bar data brokers from selling Americans’ personal information to “adversary” countries like China. But, if passed, the law wouldn’t address the sale of that data to other entities or the wholesale collection of it to begin with.

Digital rights and free speech advocates like the Electronic Frontier Foundation (EFF) have also raised the possibility that the US forcing a ban or sale of TikTok could give other countries cover to enact similar bans or restrictions on US-based social media platforms. In a letter to lawmakers opposing the measure, the EFF, American Civil Liberties Union and other groups argued that it would “set an alarming global precedent for excessive government control over social media platforms.”

David Greene, a senior staff attorney at the EFF, notes that the United States has forcefully criticized nations that have banned social media apps. “The State Department has been highly critical of countries that have shut down services,” Greene told Engadget, noting that the US condemned the Nigerian government for blocking Twitter in 2021. “Shutting down a whole service is essentially an anti-democratic thing.”

Intelligence officials held a classified briefing with members of Congress about TikTok shortly before the vote on the House floor. That’s led some pundits to believe that there must be new information about TikTok, but some lawmakers have suggested otherwise. “Not a single thing that we heard in today’s classified briefing was unique to TikTok,” Representative Sara Jacobs told the Associated Press. “It was things that happen on every single social media platform.” Likewise, the top Democrat on the House Intelligence Committee, Representative Jim Himes, said that TikTok is “largely a potential threat … if Congress were serious about dealing with this threat, we would start with a federal privacy bill.”

This article originally appeared on Engadget at https://www.engadget.com/the-case-against-the-tiktok-ban-bill-161517973.html?src=rss