FCC proposes new rules for AI-generated robocalls and robotexts

The Federal Communications Commission has proposed new rules governing the use of AI-generated phone calls and texts. Part of the proposal centers on creating a clear definition for AI-generated calls, while the rest focuses on consumer protection by making companies disclose when AI is being used in calls or texts.

"This provides consumers with an opportunity to identify and avoid those calls or texts that contain an enhanced risk of fraud and other scams," the FCC said. The agency is also looking ensure that legitimate uses of AI to assist people with disabilities to communicate remains protected.

Today's proposal is the latest action by the FCC to regulate how AI is used in robocalls and robotexts. The commission has already moved to place a ban on AI-generated voices in robocalls and has called on telecoms to crack down on the practice. Ahead of this year's November election, there has already been one notable use of AI robocalls attempting to spread misinformation to New Hampshire voters.

Senators introduce bill to protect individuals against AI-generated deepfakes

Today, a group of senators introduced the NO FAKES Act, a bill that would make it illegal to create digital recreations of a person's voice or likeness without that individual's consent. It's a bipartisan effort from Senators Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.) and Thom Tillis (R-N.C.), fully titled the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024.

If it passes, the NO FAKES Act would create an option for people to seek damages when their voice, face or body is recreated by AI. Both individuals and companies could be held liable for producing, hosting or sharing unauthorized digital replicas, including ones made by generative AI.

We've already seen many instances of celebrities finding AI-generated imitations of themselves out in the world. "Taylor Swift" was used to scam people with a fake Le Creuset cookware giveaway. A voice that sounded a lot like Scarlett Johansson's showed up in a ChatGPT voice demo. AI can also be used to make political candidates appear to make false statements, with Kamala Harris being the most recent example. And it's not only celebrities who can be victims of deepfakes.

"Everyone deserves the right to own and protect their voice and likeness, no matter if you’re Taylor Swift or anyone else," Senator Coons said. "Generative AI can be used as a tool to foster creativity, but that can’t come at the expense of the unauthorized exploitation of anyone’s voice or likeness."

The speed of new legislation notoriously lags behind the speed of new tech development, so it's encouraging to see lawmakers taking AI regulation seriously. Today's proposed act follows the Senate's recent passage of the DEFIANCE Act, which would allow victims of sexual deepfakes to sue for damages.

Several entertainment organizations have lent their support to the NO FAKES Act, including SAG-AFTRA, the RIAA, the Motion Picture Association, and the Recording Academy. Many of these groups have been pursuing their own actions to secure protection against unauthorized AI recreations. SAG-AFTRA recently went on strike against several game publishers to try to secure a union agreement covering likenesses in video games.

Even OpenAI is listed among the act's backers. "OpenAI is pleased to support the NO FAKES Act, which would protect creators and artists from unauthorized digital replicas of their voices and likenesses," said Anna Makanju, OpenAI's vice president of global affairs. "Creators and artists should be protected from improper impersonation, and thoughtful legislation at the federal level can make a difference."

The Senate just passed two landmark bills aimed at protecting minors online

The Senate has passed two major online safety bills amid years of debate over social media’s impact on teen mental health. The Kids Online Safety Act (KOSA) and the Children and Teens' Online Privacy Protection Act, also known as COPPA 2.0, passed the Senate in a vote of 91 to 3.

The bills will next head to the House, though it’s unclear if the measures will have enough support to pass. If passed into law, the bills would be the most significant pieces of legislation regulating tech companies in years.

KOSA requires social media companies like Meta to offer controls to disable algorithmic feeds and other “addictive” features for children under the age of 16. It also requires companies to provide parental supervision features and safeguard minors from content that promotes eating disorders, self-harm, sexual exploitation and other harms.

One of the most controversial provisions in the bill creates what’s known as a “duty of care.” This means platforms are required to prevent or mitigate certain harmful effects of their products, like “addictive” features or algorithms that promote dangerous content. The Federal Trade Commission would be in charge of enforcing the standard.

The bill was originally introduced in 2022 but stalled amid pushback from digital rights and other advocacy groups who said the legislation would force platforms to spy on teens. A revised version, meant to address some of those concerns, was introduced last year, though the ACLU, EFF and other free speech groups still oppose the bill. In a statement last week, the ACLU said that KOSA would encourage social media companies “to censor protected speech” and “incentivize the removal of anonymous browsing on wide swaths of the internet.”

COPPA 2.0, on the other hand, has been less controversial among privacy advocates. An expansion of the 1998 Children's Online Privacy Protection Act, it aims to revise the decades-old law to better reflect the modern internet and social media landscape. If passed, the law would prohibit companies from targeting advertising to children and collecting personal data on teens between 13 and 16 without consent. It also requires companies to offer an “eraser button” that deletes children's and teens' personal information from a platform when “technologically feasible.”

The vote underscores how online safety has become a rare source of bipartisan agreement in the Senate, which has hosted numerous hearings on teen safety issues in recent years. The CEOs of Meta, Snap, Discord, X and TikTok testified at one such hearing earlier this year, during which South Carolina Senator Lindsey Graham accused the executives of having “blood on their hands” for numerous safety lapses.

ISPs are fighting to raise the price of low-income broadband

A new government program is trying to encourage Internet service providers (ISPs) to offer lower rates for lower-income customers by distributing federal funds through states. The only problem is that the ISPs don’t want to offer the proposed rates.

Ars Technica obtained a letter sent to US Commerce Secretary Gina Raimondo signed by more than 30 broadband industry trade groups, like ACA Connects and the Fiber Broadband Association, as well as several state-based organizations. The letter raises “both a sense of alarm and urgency” about their ability to participate in the Broadband Equity, Access and Deployment (BEAD) program. The newly formed BEAD program provides over $42 billion in federal funds to “expand high-speed internet access by funding planning, infrastructure, deployment and adoption programs” in states across the country, according to the National Telecommunications and Information Administration (NTIA).

The money first goes to the NTIA, which then distributes it to states once they obtain approval by presenting a low-cost broadband Internet option. The ISP industry’s letter claims a fixed rate of $30 per month for high-speed Internet access is “completely unmoored from the economic realities of deploying and operating networks in the highest-cost, hardest-to-reach areas.”

The letter urges the NTIA to revise the low-cost service option rates proposed or approved so far. Twenty-six states have completed all of the BEAD program’s phases.

Americans pay an average of $89 a month for Internet access. New Jersey has the highest average bill at $126 per month, according to a survey conducted by U.S. News and World Report. A 2021 study from the Pew Research Center found that 57 percent of households with an annual income of $30,000 or less have a broadband connection.

Apple agrees to stick by Biden administration’s voluntary AI safeguards

Apple has joined several other tech companies in agreeing to abide by voluntary AI safeguards laid out by the Biden administration. Those who make the pledge have committed to follow eight guidelines related to safety, security and social responsibility, including flagging societal risks such as biases; testing for vulnerabilities; watermarking AI-generated images and audio; and sharing trust and safety details with the government and other companies.

Amazon, Google, Microsoft and OpenAI were among the initial signatories of the pact, which the White House announced last July. The voluntary agreement, which is not enforceable, will expire once Congress passes laws to regulate AI.

Since the guidelines were announced, Apple unveiled a suite of AI-powered features under the umbrella name of Apple Intelligence. The tools will work across the company's key devices and are set to start rolling out in the coming months. As part of that push, Apple has teamed up with OpenAI to incorporate ChatGPT into Apple Intelligence. In joining the voluntary code of practice, Apple may be hoping to ward off regulatory scrutiny of its AI tools.

Although President Joe Biden has talked up the potential benefits of AI, he has warned of the dangers posed by the technology as well. His administration has been clear that it wants AI companies to develop their tech in a responsible manner.

Meanwhile, the White House said in a statement that federal agencies have met all of the 270-day targets laid out in the sweeping Executive Order on AI that Biden issued last October. The EO covers issues such as safety and security measures, as well as reporting and data transparency schemes.

California Supreme Court upholds classification of gig workers as independent contractors

Ride-share companies scored a victory in the California Supreme Court, allowing them to continue classifying gig workers as independent contractors rather than employees. Uber, Lyft, DoorDash and other gig-economy companies invested around $200 million in the passage of Proposition 22, which voters approved in 2020. The state’s highest court rejected a legal challenge from a drivers’ group and a labor union, ending their quest to bring full employee benefits to the state’s gig workers.

The California Supreme Court ruling affirms the state’s definition of drivers and other gig workers as independent contractors. Proposition 22, which received the support of 59 percent of voters in 2020, gives gig workers limited benefits like a baseline income and health insurance for those working at least 15 hours a week. However, it also allows the companies to avoid providing the broad swath of benefits full employees receive.

The Service Employees International Union and a drivers’ group sued to challenge the law after it went into effect in early 2021. Their lawsuit got an early boost from lower courts: An Alameda County Superior Court judge ruled that year that Proposition 22 was “unconstitutional and unenforceable,” as the LA Times reported. The lower-court judge determined that the law diminished the state Legislature’s power to regulate injury compensation for workers.

However, in 2023, a state appeals court ruled the opposite: that Proposition 22 didn’t impinge on the Legislature’s authority. Thursday’s decision upholds that ruling, ending the long saga and leaving the state’s gig workers with fewer benefits than they’d otherwise have. Proposition 22 remained in effect during the legal challenges, so nothing will change in how gig workers are treated.

Uber, Lyft, DoorDash and other gig-economy companies fought tooth and nail to pass and uphold the law. Four years ago, they invested upwards of $200 million in campaigning for it. They even threatened to pull their businesses from the state if they were forced to classify drivers as employees.

The LA Times reports the decision could influence other states’ laws. Uber has lobbied for similar legislation in other parts of the US. A law in Washington state closely parallels it, and the companies recently settled with the Massachusetts attorney general to provide similar (minimal) benefits to gig workers in that state.

Uber framed the ruling as a victory for upholding the will of the people (well, apart from the gig workers who wanted more benefits and protections). The company described the Supreme Court’s decision as “affirming the will of the nearly 10 million Californians who voted to deliver historic benefits and protections to drivers, while protecting their independence.”

Police in Scottsdale, AZ will start using drones as first responders

Police departments across Arizona plan to use drones as part of their first response to emergency situations. Scottsdale’s police department will be the first in the state to use a special fleet of drones that can be dispatched to potential crime scenes and emergencies by special detection cameras.

The drone technology will come from a new drone startup called Aerodome and the public safety tech firm Flock Safety, which makes gunshot sensors, analytic software and cameras that can monitor neighborhoods and read license plates. Scottsdale PD’s drones will respond to emergencies in real time, providing a bird’s eye view of the scene as first responders make their way to the area.

The drones can be dispatched by police officers and emergency dispatchers, as well as by Flock cameras that detect unlawful activity such as stolen vehicles or cars that match descriptions from an AMBER Alert. They can even silently follow a suspect while officers handle multiple 911 calls, or keep an aerial view of a fleeing vehicle without risking the safety of officers and bystanders.

The use of drones by law enforcement has been growing over the years. More than 1,500 police departments use them in some capacity, according to Axios. First responders may see these drones as a useful tool but there are also serious concerns about protecting citizens’ Constitutional privacy rights.

Image: Arizona police officers will use the first responder drones to monitor emergency situations and calls as they respond. (Screenshot from YouTube/Flock Safety)

The American Civil Liberties Union (ACLU) has raised concerns about Flock’s license plate reader cameras. Last year, the organization criticized law enforcement’s use of “eye-in-the-sky policing,” calling for communities to “put in place guardrails that will prevent those operations from expanding,” in an editorial written by ACLU senior policy analyst Jay Stanley.

“It’s not clear where the courts will draw lines, and there’s a very real prospect that other, more local uses of drones become so common and routine that without strong privacy protections, we end up with the functional equivalent of a mass surveillance regime in the skies,” Stanley wrote.

There are some federal regulations currently in place that prevent police departments from misusing drones and maintain some level of safety. The Federal Aviation Administration (FAA) limits police drone use to the operator’s line of sight. A drone cannot weigh more than 55 pounds, including attached equipment or goods it may be carrying to emergency sites, and it can’t fly any higher than 400 feet above the ground or structures.
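
To make those limits concrete, here is a minimal sketch of how a dispatch system might check a planned flight against them. Everything in it is hypothetical: the FlightPlan fields and function names are invented for illustration, and the thresholds simply restate the line-of-sight, 55-pound and 400-foot limits described above, not any official FAA validation logic.

```python
# Hypothetical sketch: checking a proposed drone flight against the FAA
# limits mentioned in the article (visual line of sight, 55 lb maximum
# weight including payload, 400 ft ceiling above ground or structures).
# Names beyond those figures are illustrative assumptions.
from dataclasses import dataclass

MAX_WEIGHT_LBS = 55.0    # airframe plus attached equipment or cargo
MAX_ALTITUDE_FT = 400.0  # above the ground or the structure overflown

@dataclass
class FlightPlan:
    total_weight_lbs: float     # drone + payload weight
    planned_altitude_ft: float  # above ground or structure below
    within_line_of_sight: bool  # operator can see the aircraft

def violations(plan: FlightPlan) -> list[str]:
    """Return a list of rule violations; an empty list means the plan passes."""
    problems = []
    if plan.total_weight_lbs > MAX_WEIGHT_LBS:
        problems.append(f"weight {plan.total_weight_lbs} lbs exceeds {MAX_WEIGHT_LBS} lbs")
    if plan.planned_altitude_ft > MAX_ALTITUDE_FT:
        problems.append(f"altitude {plan.planned_altitude_ft} ft exceeds {MAX_ALTITUDE_FT} ft")
    if not plan.within_line_of_sight:
        problems.append("operator lacks visual line of sight")
    return problems

# Example: a 12 lb drone at 350 ft within sight passes; at 500 ft it does not.
print(violations(FlightPlan(12.0, 350.0, True)))  # []
print(violations(FlightPlan(12.0, 500.0, True)))  # ['altitude 500.0 ft exceeds 400.0 ft']
```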

Three senators introduce bill to protect artists and journalists from unauthorized AI use

Three US Senators introduced a bill that aims to rein in the rise and use of AI-generated content and deepfakes by protecting the work of artists, songwriters and journalists.

The Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act was introduced to the Senate Friday morning. The bill is a bipartisan effort authored by Sen. Marsha Blackburn (R-Tenn.), Sen. Maria Cantwell (D-Wash.) and Sen. Martin Heinrich (D-N.M.), according to a press alert issued by Blackburn’s office.

The COPIED Act would, if enacted, create transparency standards through the National Institute of Standards and Technology (NIST) to set guidelines for “content provenance information, watermarking, and synthetic content detection,” according to the press release.

The bill would also prohibit the unauthorized use of creative or journalistic content to train AI models or create AI content. The Federal Trade Commission and state attorneys general would gain the authority to enforce these guidelines, and individuals whose legally created content was used by AI to create new content without their consent or proper compensation would have the right to take those companies or entities to court.

The bill would also prohibit internet platforms, search engines and social media companies from tampering with or removing content provenance information.

A slew of content and journalism advocacy groups are already voicing their support for the COPIED Act to become law. They include groups like SAG-AFTRA, the Recording Industry Association of America, the National Association of Broadcasters, the Songwriters Guild of America and the National Newspaper Association.

This is not Congress’s first attempt to create guidelines and laws for the rising use of AI content and it certainly won’t be the last. In April, Rep. Adam Schiff (D-Calif.) submitted a bill called the Generative AI Copyright Disclosure Act that would force AI companies to list the copyrighted sources in their datasets. The bill has not moved out of the House Committee on the Judiciary since its introduction, according to congressional records.

Texas court blocks the FTC’s ban on noncompete agreements

The Federal Trade Commission's (FTC) ban on noncompete agreements was supposed to take effect on September 4, but a Texas court has postponed its implementation by siding with the plaintiffs in a lawsuit that seeks to block the rule. Back in April, the FTC banned noncompetes, which have been widely used in the tech industry for years, in a bid to drive innovation and protect workers' rights and wages. A lot of companies are unsurprisingly unhappy with the agency's rule — as NPR notes, Dallas tax services firm Ryan LLC sued the FTC hours after its announcement. The US Chamber of Commerce and other groups of American businesses eventually joined the lawsuit.

"Noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism," FTC Chair Lina M. Khan said when the rule was announced. They prevent employees from moving to another company or from building businesses of their own in the same industry, so they may be stuck working in a job with lower pay or in an environment they don't like. But the Chamber of Commerce’s chief counsel Daryl Joseffer called the ban an attempt by the government to micromanage business decisions in a statement sent to Bloomberg

"The FTC’s blanket ban on noncompetes is an unlawful power grab that defies the agency’s constitutional and statutory authority and sets a dangerous precedent where the government knows better than the markets," Joseffer said. The FTC disagrees and told NPR that its "authority is supported by both statute and precedent."

US District Judge Ada Brown, an appointee of former President Donald Trump, wrote in her decision that "the text, structure, and history of the FTC Act reveal that the FTC lacks substantive rulemaking authority with respect to unfair methods of competition." Brown also said that the plaintiffs are "likely to succeed" in getting the rule struck down and that it's in the public's best interest to grant the plaintiffs' motion for a preliminary injunction. The judge added that the court will make a decision "on the ultimate merits of this action on or before August 30."

Texas age-verification law for pornography websites is going to the Supreme Court

Texas will be the main battleground for a case about porn websites that is now headed to the Supreme Court. The Free Speech Coalition, a nonprofit group that represents the adult industry, petitioned the top court in April to review a state law that requires websites with explicit material to collect proof of users' ages. SCOTUS today agreed to hear the case, which challenges a previous ruling by the US Court of Appeals for the 5th Circuit, as part of its next term beginning in October.

Texas was one of many states over the last year to pass this type of age-verification legislation aimed at porn websites. While supporters of these bills say they are intended to protect minors from seeing inappropriate content, critics have called the laws an overreach that could create new privacy risks. In response to the laws, Pornhub ended its operations in those states, a move that attracted public attention to the situation.

"While purportedly seeking to limit minors' access to online sexual content, the Act imposes significant burdens on adults' access to constitutionally protected expression," the FSC petition says. "Of central relevance here, it requires every user, including adults, to submit personally identifying information to access sensitive, intimate content over a medium — the internet — that poses unique security and privacy concerns."

This case is one of the latest First Amendment questions to go before the Supreme Court. Earlier this month, the court remanded a case about social media content moderation to lower courts and ruled on how closely social media companies can engage with federal officials about misinformation.
