The Federal Communications Commission (FCC) is keeping a close eye on internet providers to make sure they provide Americans with equal access to broadband services regardless of customers' "income level, race, ethnicity, color, religion or national origin." Two years after the Bipartisan Infrastructure Law became official, the FCC has adopted a final set of relevant rules to enforce.
The Commission will have the power to investigate possible instances of "digital discrimination" under the new rules and could penalize providers for violating them. It could, for instance, look into a company's pricing, network upgrades and maintenance procedures to decide whether a provider is keeping an affluent area well maintained while failing to provide the same level of service to a low-income area.
As The Wall Street Journal explains, the rules could hold companies like AT&T and Comcast liable even if they weren't intentionally discriminatory, as long as their actions "differentially impact consumers' access to broadband." If the FCC does receive complaints against a particular provider, though, it will take into account any technical and economic challenges the provider may be facing that prevent it from providing equal access to its services.
According to The Journal, the FCC approved the new rules in a 3-2 vote. Their critics — mainly internet providers and Republican members of Congress — argued that the decision could affect investments and that the commission is taking things too far by penalizing unintentional discrimination. But FCC Chairwoman Jessica Rosenworcel found the rules to be reasonable, especially since the agency will "accept genuine reasons of technical and economic feasibility as valid reasons."
In addition to adopting a set of rules for digital discrimination, the FCC has also updated its protections against SIM swapping and port-out scams. It will now require wireless providers to notify customers immediately when a SIM change or a port-out is requested for their account and phone number. Further, providers are required to take additional steps to protect their subscribers from the schemes. The FCC has voted to begin a formal inquiry to look into the impact of artificial intelligence on robocalls, as well. AI could, after all, be used to block unwanted voice and text messages, but it could also be used to more easily defraud people through calls and texts.
Finally, the commission is now requiring mobile providers to split phone lines from family plans for victims of domestic violence when the abuser is on the account. Providers will also have to remove records of calls and texts to domestic violence hotlines from subscribers' logs, and they're expected to support survivors who can't afford lines of their own through the FCC Lifeline program.
Update, November 16, 2023, 8:50PM ET: This story has been updated to add information about the FCC's new rules supporting domestic violence survivors.
This article originally appeared on Engadget at https://www.engadget.com/the-fcc-will-crack-down-on-isps-to-improve-connectivity-in-poorer-areas-125041256.html?src=rss
Maine's state agencies have fallen victim to cybercriminals who exploited a vulnerability in the MOVEit file transfer tool, making the state the latest addition to the growing list of entities affected by the massive hack involving the software. In a notice the government has published about the cybersecurity incident, it said the event impacted approximately 1.3 million individuals, which is essentially the state's entire population. The state first caught wind of the software vulnerability in MOVEit on May 31 this year and found that cybercriminals were able to access and download files from its various agencies on May 28 and 29.
While the nature of stolen data varies per person based on their interaction with a particular agency, the notice says that the bad actors had stolen names, Social Security numbers, birthdates, driver's license and state identification numbers, as well as taxpayer identification numbers. In some cases, they were also able to get away with people's medical and health insurance information. Over 50 percent of the stolen data came from the Maine Department of Health and Human Services, followed by the Maine Department of Education.
The state government blocked internet access to and from the MOVEit server as soon as it became aware of the incident. However, since the cybercriminals were already able to steal residents' information, it's also offering two years of complimentary credit monitoring and identity theft protection services to people whose SSNs and taxpayer numbers were compromised. As TechCrunch notes, the Clop ransomware gang, which is believed to be behind previously reported MOVEit incidents, has yet to release the data stolen from Maine's agencies.
Clop took credit for an earlier New York City Department of Education hack, wherein the information of approximately 45,000 students was stolen. Cybercriminals exploiting the vulnerability haven't only been targeting governments, though, but also companies around the world. Sony is one of them. There's also Maximus Health Services, Inc., a US government contractor, whose breach has been the biggest MOVEit-related incident so far.
The Securities and Exchange Commission is already investigating MOVEit creator Progress Software, though it only just sent the company a subpoena in October and is still in the "fact-finding inquiry" phase of its probe.
This article originally appeared on Engadget at https://www.engadget.com/basically-all-of-maine-had-data-stolen-by-a-ransomware-gang-061407794.html?src=rss
Apple will pay $25 million in backpay and civil penalties to settle allegations that it favored visa holders and discriminated against US citizens and permanent residents during its hiring process, the Department of Justice said in a statement on Thursday. This is the largest amount that the DOJ has collected under the anti-discrimination provision of the Immigration and Nationality Act.
At the heart of the issue is a federal program administered by the Department of Labor and the Department of Homeland Security called the Permanent Labor Certification Program (PERM). PERM allows US employers to file for foreign workers on visas to become permanent US residents. As part of the PERM process, employers are required to prominently advertise open positions so that anyone can apply to them regardless of citizenship status.
The DOJ said that Apple violated these rules by not advertising PERM positions on its recruiting website, and also made it harder for people to apply by requiring mailed-in paper applications, something that it did not do for regular, non-PERM positions. As a result, a DOJ investigation found that Apple received few or no applications for these positions from US citizens or permanent residents who do not require work visas.
As part of the settlement, Apple will pay $6.75 million in civil penalties and set up an $18.25 million fund to pay back eligible discrimination victims, the DOJ's statement said.
Apple disagreed with the DOJ’s characterization. “Apple proudly employs more than 90,000 people in the United States and continues to invest nationwide, creating millions of jobs,” a company spokesperson told CNBC. “When we realized we had unintentionally not been following the DOJ standard, we agreed to a settlement addressing their concerns. We have implemented a robust remediation plan to comply with the requirements of various government agencies as we continue to hire American workers and grow in the US.”
This article originally appeared on Engadget at https://www.engadget.com/apple-reaches-25m-settlement-with-the-doj-for-discriminating-against-us-residents-during-hiring-225857162.html?src=rss
Just days after President Joe Biden unveiled a sweeping executive order retasking the federal government with regards to AI development, Vice President Kamala Harris announced at the UK AI Safety Summit on Tuesday a half dozen more machine learning initiatives that the administration is undertaking. Among the highlights: the establishment of the United States AI Safety Institute, the first release of draft policy guidance on the federal government's use of AI and a declaration on the responsible military applications for the emerging technology.
"President Biden and I believe that all leaders, from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” Harris said in her prepared remarks.
"Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions," she said. The existential threats that generative AI systems present were a central theme of the summit.
"To define AI safety we must consider and address the full spectrum of AI risk — threats to humanity as a whole, threats to individuals, to our communities and to our institutions, and threats to our most vulnerable populations," she continued. "To make sure AI is safe, we must manage all these dangers."
To that end, Harris announced Wednesday that the White House, in cooperation with the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) within the NIST. It will be responsible for actually creating and publishing all of the guidelines, benchmark tests, best practices and such for testing and evaluating potentially dangerous AI systems.
These tests could include the red-team exercises that President Biden had mentioned in his EO. The AISI would also be tasked with providing technical guidance to lawmakers and law enforcement on a wide range of AI-related topics, including identifying generated content, authenticating live-recorded content, mitigating AI-driven discrimination, and ensuring transparency in its use.
Additionally, the Office of Management and Budget (OMB) is set to release for public comment the administration's first draft policy guidance on government AI use later this week. Like the Blueprint for an AI Bill of Rights that it builds upon, the draft policy guidance outlines steps that the federal government can take to "advance responsible AI innovation" while maintaining transparency and protecting federal workers from increased surveillance and job displacement. This draft guidance will eventually be used to establish safeguards for the use of AI in a broad swath of public sector applications including transportation, immigration, health and education, so it is being made available for public comment at ai.gov/input.
Harris also announced during her remarks that the Political Declaration on the Responsible Use of Artificial Intelligence and Autonomy the US issued in February has collected 30 signatories to date, all of whom have agreed to a set of norms for responsible development and deployment of military AI systems. Just 165 nations to go! The administration is also launching a virtual hackathon in an effort to blunt the harm AI-empowered phone and internet scammers can inflict. Hackathon participants will work to build AI models that can counter robocalls and robotexts, especially those targeting elderly folks with generated voice scams.
Content authentication is a growing focus of the Biden-Harris administration. President Biden's EO explained that the Commerce Department will be spearheading efforts to validate content produced by the White House through a collaboration with the C2PA and other industry advocacy groups. They'll work to establish industry norms, such as the voluntary commitments previously extracted from 15 of the largest AI firms in Silicon Valley. In her remarks, Harris extended that call internationally, asking for support from all nations in developing global standards in authenticating government-produced content.
“These voluntary [company] commitments are an initial step toward a safer AI future, with more to come," she said. "As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities and the stability of our democracies."
"One important way to address these challenges — in addition to the work we have already done — is through legislation — legislation that strengthens AI safety without stifling innovation," Harris continued.
This article originally appeared on Engadget at https://www.engadget.com/kamala-harris-announces-ai-safety-institute-to-protect-american-consumers-060011065.html?src=rss
The Biden Administration unveiled its ambitious next steps in addressing and regulating artificial intelligence development on Monday. Its expansive new executive order (EO) seeks to establish further protections for the public as well as improve best practices for federal agencies and their contractors.
"The President several months ago directed his team to pull every lever," a senior administration official told reporters on a recent press call. "That's what this order does, bringing the power of the federal government to bear in a wide range of areas to manage AI's risk and harness its benefits ... It stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world and like all executive orders, this one has the force of law."
These actions will be introduced over the next year with smaller safety and security changes happening in around 90 days and with more involved reporting and data transparency schemes requiring nine to 12 months to fully deploy. The administration is also creating an “AI council,” chaired by White House Deputy Chief of Staff Bruce Reed, who will meet with federal agency heads to ensure that the actions are being executed on schedule.
Public safety
"In response to the President's leadership on the subject, 15 major American technology companies have begun their voluntary commitments to ensure that AI technology is safe, secure and trustworthy before releasing it to the public," the senior administration official said. "That is not enough."
The EO directs the establishment of new standards for AI safety and security, including reporting requirements for developers whose foundation models might impact national or economic security. Those requirements will also apply in developing AI tools to autonomously implement security fixes on critical software infrastructure.
By leveraging the Defense Production Act, this EO will "require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests," per a White House press release. That information must be shared prior to the model being made available to the public, which could help reduce the rate at which companies unleash half-baked and potentially deadly machine learning products.
In addition to the sharing of red team test results, the EO also requires disclosure of the system’s training runs (essentially, its iterative development history). “What that does is that creates a space prior to the release… to verify that the system is safe and secure,” officials said.
Administration officials were quick to point out that this reporting requirement will not impact any AI models currently available on the market, nor will it impact independent or small- to medium-size AI companies moving forward, as the threshold for enforcement is quite high. It's geared specifically for the next generation of AI systems that the likes of Google, Meta and OpenAI are already working on, with enforcement kicking in for models trained using more than 10^26 integer or floating-point operations, a total compute budget beyond that of any existing AI model. "This is not going to catch AI systems trained by graduate students, or even professors,” the administration official said.
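For a rough sense of where that threshold sits, a common back-of-the-envelope heuristic estimates total training compute as roughly 6 × parameters × training tokens. The sketch below uses that heuristic with purely illustrative model sizes (none of these figures come from the order) to show why a typical academic-scale run falls far below the reporting line while a hypothetical frontier run could cross it:

```python
# Back-of-the-envelope training-compute estimate using the common
# heuristic: total ops ≈ 6 * parameters * training tokens.
# All model figures here are illustrative, not from the executive order.

REPORTING_THRESHOLD = 1e26  # operations, per the EO's reporting trigger

def training_ops(params: float, tokens: float) -> float:
    """Approximate total operations for one training run."""
    return 6 * params * tokens

# Hypothetical runs for comparison
runs = {
    "academic-scale run (1B params, 100B tokens)": (1e9, 1e11),
    "hypothetical frontier run (1T params, 20T tokens)": (1e12, 2e13),
}

for name, (params, tokens) in runs.items():
    ops = training_ops(params, tokens)
    print(f"{name}: ~{ops:.1e} ops -> reporting required: {ops >= REPORTING_THRESHOLD}")
```

The academic-scale run lands around 6×10^20 operations, five orders of magnitude under the threshold, which is the administration's point about not catching graduate students.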
What's more, the EO will encourage the Departments of Energy and Homeland Security to address AI threats "to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks," per the release. "Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI." In short, any developers found in violation of the EO can likely expect a prompt and unpleasant visit from the DoE, FDA, EPA or other applicable regulatory agency, regardless of their AI model’s age or processing speed.
In an effort to proactively address the decrepit state of America's digital infrastructure, the order also seeks to establish a cybersecurity program, based loosely on the administration's existing AI Cyber Challenge, to develop AI tools that can autonomously root out and shore up security vulnerabilities in critical software infrastructure. It remains to be seen whether those systems will be able to address the concerns of misbehaving models that SEC head Gary Gensler recently raised.
AI watermarking and cryptographic validation
We're already seeing the normalization of deepfake trickery and AI-empowered disinformation on the campaign trail. So, the White House is taking steps to ensure that the public can trust the text, audio and video content that it publishes on its official channels. The public must be able to easily validate whether the content they see is AI-generated or not, argued White House officials on the press call.
The Department of Commerce is in charge of that effort and is expected to work closely with existing industry advocacy groups like the C2PA and its sister organization, the CAI, to develop and implement a watermarking system for federal agencies. “We aim to support and facilitate and help standardize that work [by the C2PA],” administration officials said. “We see ourselves as plugging into that ecosystem.”
Officials further explained that the government is supporting the underlying technical standards and practices that will lead to digital watermarking's wider adoption — similar to the work it did around developing the HTTPS ecosystem and in getting both developers and the public on board with it. This will help federal officials achieve their other goal of ensuring that the government's official messaging can be relied upon.
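The cryptographic piece of that validation works on the same basic principle as any signed-content scheme: the publisher attaches a tag derived from the content and a key, and a verifier recomputes the tag to detect tampering. Here's a minimal illustrative sketch using only Python's standard library; a real provenance system such as C2PA uses public-key signatures and embedded manifests rather than the shared-secret HMAC shown here, and the key is of course hypothetical:

```python
import hashlib
import hmac

# Illustrative only: real provenance standards (e.g., C2PA) use
# public-key signatures and embedded metadata, not a shared secret.
SIGNING_KEY = b"example-signing-key"  # hypothetical key for this sketch

def sign_content(content: bytes) -> str:
    """Produce a tag the publisher attaches alongside the content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_content(content), tag)

statement = b"Official press release text"
tag = sign_content(statement)

print(verify_content(statement, tag))        # authentic copy checks out
print(verify_content(b"tampered text", tag)) # altered copy fails
```

The HTTPS analogy officials drew is apt: the hard part is not the cryptography itself but standardizing it and getting publishers and consumers to adopt it by default.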
Civil rights and consumer protections
The first Blueprint for an AI Bill of Rights that the White House released last October directed agencies to “combat algorithmic discrimination while enforcing existing authorities to protect people's rights and safety,” the administration official said. “But there's more to do.”
The new EO will require guidance be extended to “landlords, federal benefits programs and federal contractors” to prevent AI systems from exacerbating discrimination within their spheres of influence. It will also direct the Department of Justice to develop best practices for investigating and prosecuting civil rights violations related to AI, as well as, according to the announcement, “the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis."
Additionally, the EO calls for prioritizing federal support to accelerate development of privacy-preserving techniques that would enable future large language models to be trained on large datasets without the current risk of leaking personal details that those datasets might contain. These solutions could include “cryptographic tools that preserve individuals’ privacy,” developed with assistance from the Research Coordination Network and National Science Foundation. The executive order also reiterates its calls for bipartisan legislation from Congress addressing the broader privacy issues that AI systems present for consumers.
In terms of healthcare, the EO states that the Department of Health and Human Services will establish a safety program that tracks and remedies unsafe, AI-based medical practices. Educators will also see support from the federal government in using AI-based educational tools like personalized chatbot tutoring.
Worker protections
The Biden administration concedes that while the AI revolution is a decided boon for business, its capabilities make it a threat to worker security through job displacement and intrusive workplace surveillance. The EO seeks to address these issues with “the development of principles and employer best practices that mitigate the harms and maximize the benefit of AI for workers,” an administration official said. “We encourage federal agencies to adopt these guidelines in the administration of their programs.”
The EO will also direct the Department of Labor and the Council of Economic Advisors to both study how AI might impact the labor market and how the federal government might better support workers “facing labor disruption” moving forward. Administration officials also pointed to the potential benefits that AI might bring to the federal bureaucracy, including cutting costs and increasing cybersecurity efficacy. “There's a lot of opportunity here, but we have to ensure the responsible government development and deployment of AI,” an administration official said.
To that end, the administration is launching on Monday a new federal jobs portal, AI.gov, which will offer information and guidance on available fellowship programs for folks looking for work with the federal government. “We're trying to get more AI talent across the board,” an administration official said. “Programs like the US Digital Service, the Presidential Innovation Fellowship and USA jobs — doing as much as we can to get talent in the door.” The White House is also looking to expand existing immigration rules to streamline visa criteria, interviews and reviews for folks trying to move to and work in the US in these advanced industries.
The White House reportedly did not brief the industry on this particular swath of radical policy changes, though administration officials did note that they had already been collaborating extensively with AI companies on many of these issues. The Senate held its second AI Insight Forum event last week on Capitol Hill, while Vice President Kamala Harris is scheduled to speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak on Tuesday.
At an event hosted by The Washington Post on Thursday, Senate Majority Leader Chuck Schumer (D-NY) was already arguing that the executive order did not go far enough and could not be considered an effective replacement for congressional action, which to date, has been slow in coming.
“There’s probably a limit to what you can do by executive order,” Schumer told WaPo, “They [the Biden Administration] are concerned, and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative.”
This article originally appeared on Engadget at https://www.engadget.com/sweeping-white-house-ai-executive-order-takes-aim-at-the-technologys-toughest-challenges-090008655.html?src=rss