ISPs are fighting to raise the price of low-income broadband

A new government program is trying to encourage Internet service providers (ISPs) to offer lower rates to lower-income customers by distributing federal funds through states. The only problem is that the ISPs don’t want to offer the proposed rates.

Ars Technica obtained a letter sent to US Commerce Secretary Gina Raimondo signed by more than 30 broadband industry trade groups, including ACA Connects and the Fiber Broadband Association, as well as several state-based organizations. The letter raises “both a sense of alarm and urgency” about their ability to participate in the Broadband Equity, Access and Deployment (BEAD) program. The newly formed BEAD program provides over $42 billion in federal funds to “expand high-speed internet access by funding planning, infrastructure, deployment and adoption programs” in states across the country, according to the National Telecommunications and Information Administration (NTIA).

The money first goes to the NTIA, which distributes it to states once they obtain approval by presenting a low-cost broadband Internet option. The ISP industry’s letter claims a fixed rate of $30 per month for high-speed Internet access is “completely unmoored from the economic realities of deploying and operating networks in the highest-cost, hardest-to-reach areas.”

The letter urges the NTIA to revise the low-cost service option rates that have been proposed or approved so far. Twenty-six states have completed all of the BEAD program’s phases.

Americans pay an average of $89 a month for Internet access. New Jersey has the highest average bill at $126 per month, according to a survey conducted by U.S. News & World Report. A 2021 study from the Pew Research Center found that 57 percent of households with an annual income of $30,000 or less have a broadband connection.

This article originally appeared on Engadget at https://www.engadget.com/isps-are-fighting-to-raise-the-price-of-low-income-broadband-220620369.html?src=rss

Apple agrees to stick by Biden administration’s voluntary AI safeguards

Apple has joined several other tech companies in agreeing to abide by voluntary AI safeguards laid out by the Biden administration. Those who make the pledge have committed to abide by eight guidelines related to safety, security and social responsibility, including flagging societal risks such as biases; testing for vulnerabilities; watermarking AI-generated images and audio; and sharing trust and safety details with the government and other companies.

Amazon, Google, Microsoft and OpenAI were among the first companies to sign on to the pact, which the White House announced last July. The voluntary agreement, which is not enforceable, will expire once Congress passes laws to regulate AI.

Since the guidelines were announced, Apple unveiled a suite of AI-powered features under the umbrella name of Apple Intelligence. The tools will work across the company's key devices and are set to start rolling out in the coming months. As part of that push, Apple has teamed up with OpenAI to incorporate ChatGPT into Apple Intelligence. In joining the voluntary code of practice, Apple may be hoping to ward off regulatory scrutiny of its AI tools.

Although President Joe Biden has talked up the potential benefits of AI, he has warned of the dangers posed by the technology as well. His administration has been clear that it wants AI companies to develop their tech in a responsible manner.

Meanwhile, the White House said in a statement that federal agencies have met all of the 270-day targets laid out in the sweeping Executive Order on AI that Biden issued last October. The EO covers issues such as safety and security measures, as well as reporting and data transparency schemes.

This article originally appeared on Engadget at https://www.engadget.com/apple-agrees-to-stick-by-biden-administrations-voluntary-ai-safeguards-144653327.html?src=rss

California Supreme Court upholds classification of gig workers as independent contractors

Ride-share companies scored a victory in the California Supreme Court, allowing them to continue classifying gig workers as independent contractors rather than employees. Uber, Lyft, DoorDash and other gig-economy companies invested around $200 million in the passage of Proposition 22, which voters approved in 2020. The state’s highest court rejected a legal challenge from a drivers’ group and a labor union, ending their quest to bring full employee benefits to the state’s gig workers.

The California Supreme Court ruling affirms the state’s definition of drivers and other gig workers as independent contractors. Proposition 22, which received the support of 59 percent of voters in 2020, gives gig workers limited benefits like a baseline income and health insurance for those working at least 15 hours a week. However, it also allows the companies to avoid providing the broad swath of benefits full employees receive.

The Service Employees International Union and a drivers’ group sued to challenge the law after it went into effect in early 2021. Their lawsuit got an early boost from lower courts: An Alameda County Superior Court judge ruled that year that Proposition 22 was “unconstitutional and unenforceable,” as the LA Times reported. The judge determined that the law diminished the state Legislature’s power to regulate injury compensation for workers.

However, in 2023, a state appeals court ruled the opposite: that Proposition 22 didn’t impinge on the Legislature’s authority. Thursday’s decision upholds that ruling, ending the long saga and leaving the state’s gig workers with fewer benefits than they’d otherwise have. Proposition 22 remained in effect during the legal challenges, so nothing will change in how those workers are treated.

Uber, Lyft, DoorDash and other gig-economy companies fought tooth and nail to pass and uphold the law, and even threatened to pull their businesses from the state if they were forced to classify drivers as employees.

The LA Times reports the decision could influence other states’ laws. Uber has lobbied for similar legislation in other parts of the US. A law in Washington state closely parallels it, and the companies recently settled with the Massachusetts attorney general to provide similar (minimal) benefits to gig workers in that state.

Uber framed the ruling as a victory for upholding the will of the people (well, apart from the gig workers who wanted more benefits and protections). The company described the Supreme Court’s decision as “affirming the will of the nearly 10 million Californians who voted to deliver historic benefits and protections to drivers, while protecting their independence.”

This article originally appeared on Engadget at https://www.engadget.com/california-supreme-court-upholds-classification-of-gig-workers-as-independent-contractors-210735586.html?src=rss

Police in Scottsdale, AZ will start using drones as first responders

Police departments across Arizona plan to use drones as part of their first response to emergency situations. Scottsdale’s police department will be the first in the state to use a special fleet of drones that can be sent to potential crime scenes and emergencies by special detection cameras.

The drone technology will come from a new startup called Aerodome and the public safety tech firm Flock Safety, which makes gunshot sensors, analytics software and cameras that can monitor neighborhoods and read license plates. Scottsdale PD’s drones will respond in real time, giving first responders a bird’s-eye view of a scene as they make their way to the area.

The drones can be dispatched by police officers and emergency dispatchers, as well as by Flock cameras that detect unlawful activity such as stolen vehicles or cars matching descriptions from an AMBER Alert. They can even silently follow a suspect while officers handle multiple 911 calls, or keep an aerial view of a fleeing vehicle without risking the safety of officers and bystanders.
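
As a rough illustration of how this kind of automated trigger works, here is a minimal sketch of a plate read being checked against a hotlist and escalating to a drone dispatch. Everything in it, from the class names to the hotlist entries, is a hypothetical stand-in rather than Flock’s or Aerodome’s actual systems:

```python
from dataclasses import dataclass

@dataclass
class PlateRead:
    plate: str       # plate text recognized by the camera
    camera_id: str   # which fixed camera produced the read
    lat: float
    lon: float

# Hypothetical hotlist mapping flagged plates to the reason they're flagged.
HOTLIST = {
    "ABC1234": "reported stolen vehicle",
    "XYZ7890": "matches AMBER Alert description",
}

def dispatch_drone(lat: float, lon: float, reason: str) -> None:
    # Stand-in for the real dispatch call; here we just log the event.
    print(f"Dispatching drone to ({lat}, {lon}): {reason}")

def on_plate_read(read: PlateRead) -> None:
    """Runs for every plate the camera recognizes; escalates only on a hit."""
    reason = HOTLIST.get(read.plate)
    if reason is not None:
        dispatch_drone(read.lat, read.lon, reason)

on_plate_read(PlateRead("ABC1234", "cam-42", 33.4942, -111.9261))
```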

The use of drones by law enforcement has been growing over the years. More than 1,500 police departments use them in some capacity, according to Axios. First responders may see these drones as a useful tool, but there are also serious concerns about protecting citizens’ constitutional privacy rights.

The American Civil Liberties Union (ACLU) has raised concerns about Flock’s license plate reader cameras. Last year, the ACLU voiced concerns about law enforcement’s use of “eye-in-the-sky policing,” calling for communities to “put in place guardrails that will prevent those operations from expanding,” in an editorial written by ACLU senior policy analyst Jay Stanley.

“It’s not clear where the courts will draw lines, and there’s a very real prospect that other, more local uses of drones become so common and routine that without strong privacy protections, we end up with the functional equivalent of a mass surveillance regime in the skies,” Stanley wrote.

Some federal regulations are currently in place to prevent police departments from misusing drones and to maintain some level of safety. The Federal Aviation Administration (FAA) generally limits police drone flights to the operator’s line of sight. A drone cannot weigh more than 55 pounds, including attached equipment or goods it may be carrying to emergency sites, and it can’t fly any higher than 400 feet above the ground or structures.
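
Those limits are concrete enough to express as a simple preflight check. Here is a minimal sketch of the constraints described above; the function and parameter names are illustrative assumptions, not an FAA or vendor API:

```python
MAX_WEIGHT_LBS = 55.0    # drone plus attached equipment or cargo
MAX_ALTITUDE_FT = 400.0  # above the ground or a structure

def preflight_ok(total_weight_lbs: float, planned_altitude_ft: float,
                 operator_has_line_of_sight: bool) -> bool:
    """Return True only if the planned flight stays within the limits above."""
    return (total_weight_lbs <= MAX_WEIGHT_LBS
            and planned_altitude_ft <= MAX_ALTITUDE_FT
            and operator_has_line_of_sight)

print(preflight_ok(12.0, 350.0, True))   # True: within every limit
print(preflight_ok(60.0, 350.0, True))   # False: over the 55 lb ceiling
print(preflight_ok(12.0, 350.0, False))  # False: no line of sight
```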

This article originally appeared on Engadget at https://www.engadget.com/police-in-scottsdale-az-will-start-using-drones-as-first-responders-195503311.html?src=rss

Three senators introduce bill to protect artists and journalists from unauthorized AI use

Three US senators introduced a bill that aims to rein in the rise of AI-generated content and deepfakes by protecting the work of artists, songwriters and journalists.

The Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act was introduced in the Senate Friday morning. The bill is a bipartisan effort authored by Sen. Marsha Blackburn (R-Tenn.), Sen. Maria Cantwell (D-Wash.) and Sen. Martin Heinrich (D-N.M.), according to a press alert issued by Blackburn’s office.

The COPIED Act would, if enacted, create transparency standards through the National Institute of Standards and Technology (NIST) to set guidelines for “content provenance information, watermarking, and synthetic content detection,” according to the press release.
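
To give a sense of what “content provenance information” can mean in practice, here is a minimal sketch that binds a piece of content to an origin record through its hash. The manifest fields are illustrative assumptions, not NIST’s (or any existing standard’s) actual schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_manifest(content: bytes, creator: str, tool: str) -> str:
    """Build a JSON provenance record tied to the content's fingerprint."""
    return json.dumps({
        "sha256": hashlib.sha256(content).hexdigest(),  # content fingerprint
        "creator": creator,
        "generated_by": tool,  # e.g. an AI model name, or "human-written"
        "created_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2)

def verify(content: bytes, manifest_json: str) -> bool:
    """Check that the content still matches the hash in its manifest."""
    recorded = json.loads(manifest_json)["sha256"]
    return hashlib.sha256(content).hexdigest() == recorded

article = b"Draft text of a news story."
manifest = make_manifest(article, "Jane Reporter", "human-written")
print(verify(article, manifest))                 # True
print(verify(article + b" (edited)", manifest))  # False: content was altered
```

Watermarking works toward the same goal from the other direction, embedding the origin signal in the media itself rather than in a detachable record.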

The bill would also prohibit the unauthorized use of creative or journalistic content to train AI models or create AI content. The Federal Trade Commission and state attorneys general would gain the authority to enforce these guidelines, and individuals whose legally created content was used by AI to create new content without their consent or proper compensation would have the right to take those companies or entities to court.

The bill would even prohibit internet platforms, search engines and social media companies from tampering with or removing content provenance information.

A slew of content and journalism advocacy groups are already voicing their support for the COPIED Act, including SAG-AFTRA, the Recording Industry Association of America, the National Association of Broadcasters, the Songwriters Guild of America and the National Newspaper Association.

This is not Congress’s first attempt to create guidelines and laws for the rising use of AI content, and it certainly won’t be the last. In April, Rep. Adam Schiff (D-Calif.) introduced a bill called the Generative AI Copyright Disclosure Act that would force AI companies to list the copyrighted sources in their training datasets. The bill has not moved out of the House Committee on the Judiciary since its introduction, according to congressional records.

This article originally appeared on Engadget at https://www.engadget.com/three-senators-introduce-bill-to-protect-artists-and-journalists-from-unauthorized-ai-use-205603263.html?src=rss

Texas court blocks the FTC’s ban on noncompete agreements

The Federal Trade Commission's (FTC) ban on noncompete agreements was supposed to take effect on September 4, but a Texas court has postponed its implementation by siding with the plaintiffs in a lawsuit that seeks to block the rule. Back in April, the FTC banned noncompetes, which have been widely used in the tech industry for years, in an effort to drive innovation and protect workers' rights and wages. A lot of companies are unsurprisingly unhappy with the agency's rule; as NPR notes, Dallas tax services firm Ryan LLC sued the FTC hours after its announcement. The US Chamber of Commerce and other groups of American businesses eventually joined the lawsuit.

"Noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism," FTC Chair Lina M. Khan said when the rule was announced. They prevent employees from moving to another company or from building businesses of their own in the same industry, so they may be stuck working in a job with lower pay or in an environment they don't like. But the Chamber of Commerce’s chief counsel Daryl Joseffer called the ban an attempt by the government to micromanage business decisions in a statement sent to Bloomberg

"The FTC’s blanket ban on noncompetes is an unlawful power grab that defies the agency’s constitutional and statutory authority and sets a dangerous precedent where the government knows better than the markets," Joseffer said. The FTC disagrees and told NPR that its "authority is supported by both statute and precedent."

US District Judge Ada Brown, an appointee of former President Donald Trump, wrote in her decision that "the text, structure, and history of the FTC Act reveal that the FTC lacks substantive rulemaking authority with respect to unfair methods of competition." Brown also said that the plaintiffs are "likely to succeed" in getting the rule struck down and that it's in the public's best interest to grant the plaintiffs' motion for a preliminary injunction. The judge added that the court will make a decision "on the ultimate merits of this action on or before August 30."

This article originally appeared on Engadget at https://www.engadget.com/texas-court-blocks-the-ftcs-ban-on-noncompete-agreements-150020601.html?src=rss

Texas age-verification law for pornography websites is going to the Supreme Court

Texas will be the main battleground for a case about porn websites that is now headed to the Supreme Court. The Free Speech Coalition, a nonprofit group that represents the adult industry, petitioned the top court in April to review a state law that requires websites with explicit material to collect proof of users' ages. SCOTUS today agreed to take on the case, which challenges a previous ruling by the US Court of Appeals for the 5th Circuit, as part of its next term beginning in October.

Texas was one of many states over the last year to pass this type of age-verification legislation aimed at porn websites. While supporters of these bills have said they are intended to protect minors from seeing inappropriate content, critics have called the laws an overreach that could create new privacy risks. In response to the laws, Pornhub ended its operations in those states, a move that attracted public attention to the situation.

"While purportedly seeking to limit minors' access to online sexual content, the Act imposes significant burdens on adults' access to constitutionally protected expression," the FSC petition says. "Of central relevance here, it requires every user, including adults, to submit personally identifying information to access sensitive, intimate content over a medium — the internet — that poses unique security and privacy concerns."

This case is one of the latest First Amendment questions to go before the Supreme Court. Earlier this month, the court remanded a case about social media content moderation back to lower courts and ruled, on standing grounds, in a case about how closely federal officials can engage with social media companies about misinformation.

This article originally appeared on Engadget at https://www.engadget.com/texas-age-verification-law-for-pornography-websites-is-going-to-the-supreme-court-233511418.html?src=rss

Supreme Court ruling may allow officials to coordinate with social platforms again

The US Supreme Court has ruled on a controversial attempt by two states, Missouri and Louisiana, to limit Biden administration officials and other government agencies from engaging with workers at social media companies about misinformation, election interference and other policies. Rather than set new guidelines for acceptable communication between these parties, the Court held that the plaintiffs lacked standing to bring the issue at all.

In Murthy v. Missouri, the states (as well as five individual social media users) alleged that, in the midst of the COVID pandemic and the 2020 election, officials at the CDC, FBI and other government agencies "pressured" Meta, Twitter and Google "to censor their speech in violation of the First Amendment."

The Court wrote, in an opinion authored by Justice Barrett, that "the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. Here, at the preliminary injunction stage, they must show that they are likely to succeed in carrying that burden." She went on to describe this as "a tall order." 

Though a Louisiana District Court order blocking contact between social media companies and Biden administration officials has been on hold, the case has still had a significant impact on relationships between these parties. Last year, Meta revealed that its security researchers were no longer receiving their usual briefings from the FBI or CISA (Cybersecurity and Infrastructure Security Agency) regarding foreign election interference. FBI officials had also warned that there were instances in which they discovered election interference attempts but didn’t warn social media companies due to additional layers of legal scrutiny implemented following the lawsuit. With today's ruling, it seems possible such contact may now resume.

In part, it seems the Court was reluctant to rule on the case because of the potential for far-reaching First Amendment implications. Among the arguments made by the plaintiffs was a "right to listen" theory: that social media users have a constitutional right to engage with content. "This theory is startlingly broad," Barrett wrote, "as it would grant all social-media users the right to sue over someone else’s censorship." The opinion was joined by Chief Justice Roberts and Justices Sotomayor, Kagan, Kavanaugh and Jackson. Justice Alito dissented, joined by Justices Thomas and Gorsuch.

The case was one of a handful involving free speech and social media to come before the Supreme Court this term. The court is also set to rule on two linked cases involving state laws from Texas and Florida that could upend the way social media companies handle content moderation.

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-ruling-may-allow-officials-to-coordinate-with-social-platforms-again-144045052.html?src=rss

New York Governor signs two new bills into law protecting kids from social media

New York has passed two new laws restricting how social media companies interact with and collect data from users under the age of 18.

New York Governor Kathy Hochul signed two bills into law on Thursday: the Stop Addictive Feeds Exploitation (SAFE) for Kids Act and the New York Child Data Protection Act.

The SAFE for Kids Act requires social media companies like Facebook and X to restrict addictive feeds for minors on their platforms. These include “algorithmically driven” feeds that can encourage “unhealthy levels of engagement,” according to a press release.

The New York Child Data Protection Act also prevents online sites and devices from collecting, sharing or selling the personal data of anyone under the age of 18.

Both laws require companies to obtain parental consent before letting kids access algorithm-driven feeds or collecting their data. The laws also require social media companies to create age verification and parental consent controls for their platforms based on guidelines set by New York’s Attorney General.
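
In rough terms, complying with that consent requirement amounts to a gate in the feed-serving path. A minimal sketch, assuming a chronological feed as the fallback for minors without consent; the names and the fallback behavior are illustrative, not the law’s text or any platform’s code:

```python
from dataclasses import dataclass

@dataclass
class User:
    age: int                    # from the platform's age verification step
    has_parental_consent: bool  # recorded parental consent, minors only

def select_feed(user: User) -> str:
    """Serve the algorithmic feed to adults and to minors with parental
    consent; every other minor gets a chronological feed instead."""
    if user.age >= 18 or user.has_parental_consent:
        return "algorithmic"
    return "chronological"

print(select_feed(User(age=15, has_parental_consent=False)))  # chronological
print(select_feed(User(age=15, has_parental_consent=True)))   # algorithmic
print(select_feed(User(age=30, has_parental_consent=False)))  # algorithmic
```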

Governor Hochul said in a statement that the new policies will “provide a safer digital environment, give parents more peace of mind and create a brighter future for young people across New York.”

Other parts of the country have passed laws restricting or limiting children’s access to phones and online platforms. The California State Senate approved a bill similar to New York’s SAFE Act that would also prevent social media apps from sending notifications to minors during school hours and from midnight to 6 a.m. throughout the year. The Los Angeles Unified School District instituted a ban that restricts students’ phone usage during school hours. California Governor Gavin Newsom responded to the decision by promising to work with lawmakers on a similar statewide law.

These new policies and laws aren’t just about keeping kids off their phones while they’re in school. They are designed to address mental health issues linked to social media platforms. The New York Times published an op-ed on Monday from US Surgeon General Vivek Murthy calling social media an “important contributor” to the decline in teenagers’ mental health and calling for social media companies to post a warning label for adolescents on their platforms and apps.

This article originally appeared on Engadget at https://www.engadget.com/new-york-governor-signs-two-new-bills-into-law-protecting-kids-from-social-media-211935749.html?src=rss

EU delays decision over scanning encrypted messages for CSAM

European Union officials have delayed talks over proposed legislation that could lead to messaging services having to scan photos and links to detect possible child sexual abuse material (CSAM). Were the proposal to become law, it could require the likes of WhatsApp, Messenger and Signal to scan all images that users upload, which would essentially force them to break encryption.

For the measure to pass, it would need the backing of at least 15 member states representing at least 65 percent of the bloc's population. However, countries including Germany, Austria, Poland, the Netherlands and the Czech Republic were expected to abstain from the vote or oppose the plan due to cybersecurity and privacy concerns, Politico reports. If EU members come to an agreement on a joint position, they'll have to hash out a final version of the law with the European Commission and European Parliament.
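
That double threshold is mechanical enough to check in a few lines. A minimal sketch with placeholder population figures (real figures come from official EU statistics):

```python
# Hypothetical populations in millions; real numbers come from Eurostat.
POPULATIONS = {"DE": 84, "FR": 68, "IT": 59, "ES": 48, "PL": 37}
EU_TOTAL = 449  # approximate EU population in millions

def qualified_majority(in_favor: set) -> bool:
    """At least 15 member states AND at least 65 percent of the population."""
    share = sum(POPULATIONS.get(s, 0) for s in in_favor) / EU_TOTAL
    return len(in_favor) >= 15 and share >= 0.65

# Five large states carry a big population share but fail the 15-state count.
print(qualified_majority({"DE", "FR", "IT", "ES", "PL"}))  # False
```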

The legislation, first proposed in 2022, could result in messaging services having to scan all images and links with the aim of detecting CSAM and communications between minors and potential offenders. Under the proposal, users would be informed about the link and image scans in services' terms and conditions. If they refused, they would be blocked from sharing links and images on those platforms. However, as Politico notes, the draft proposal includes an exemption for “accounts used by the State for national security purposes.”
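
Mechanically, scanning of this kind is usually described as hash matching: compare a fingerprint of each uploaded image against a database of fingerprints of known illegal material. Here is a minimal sketch of the idea using a plain cryptographic hash for simplicity; deployed systems rely on perceptual hashes that survive resizing and re-encoding, and the listed hash value is purely illustrative:

```python
import hashlib

# Fingerprints of known prohibited images (illustrative: SHA-256 of b"test").
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint; changing a single pixel changes the whole
    hash, which is why real systems prefer perceptual hashing."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES

print(should_block(b"test"))   # True: matches the listed fingerprint
print(should_block(b"other"))  # False: no match
```

The encryption conflict comes from where this check would have to run: on an end-to-end encrypted service the server never sees the image, so the matching would need to happen on the user's device before encryption, which is what critics mean when they say the proposal breaks encryption.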

EU Council leaders are said to have been trying for six months to break the impasse and move negotiations forward to finalize the law. Belgium's presidency of the Council is set to end on June 30, and it's unclear if the incoming leadership will continue to prioritize the proposal.

Patrick Breyer, a digital rights activist who was a member of the previous European Parliament before this month's elections, has argued that proponents of the so-called "chat control" plan aimed to take advantage of a power vacuum before the next parliament is constituted. Breyer says that the delay of the vote, prompted in part by campaigners, "should be celebrated," but warned that "surveillance extremists among the EU governments" could again attempt to advance chat control in the coming days.

Other critics and privacy advocates have slammed the proposal. Signal president Meredith Whittaker said in a statement that "mass scanning of private communications fundamentally undermines encryption," while Edward Snowden described it as a "terrifying mass surveillance measure."

Advocates, on the other hand, have suggested that breaking encryption would be acceptable in order to tackle CSAM. "The Commission proposed the method or the rule that even encrypted messaging can be broken for the sake of better protecting children," Vice President of the European Commission for Values and Transparency Věra Jourová said on Thursday, per EuroNews.

The EU is not the only entity to attempt such a move. In 2021, Apple revealed a plan to scan iCloud Photos for known CSAM. However, it scrapped that controversial effort following criticism from the likes of customers, advocacy groups and researchers.

This article originally appeared on Engadget at https://www.engadget.com/eu-delays-decision-over-scanning-encrypted-messages-for-csam-142208548.html?src=rss