House and Senate bills aim to protect journalists’ data from government surveillance

News gatherers in the US may soon have safeguards against government attempts to comb through their data. Bipartisan House and Senate groups have reintroduced legislation, the PRESS Act (Protect Reporters from Exploitive State Spying), that limits the government's ability to compel data disclosures that might identify journalists' sources. The Senate bill would extend disclosure exemptions and standards to cover email, phone records and other information held by third parties.

The PRESS Act would also require that the federal government give journalists a chance to respond to data requests. Courts could still demand disclosure if it's necessary to prevent terrorism, identify terrorists or prevent serious "imminent" violence. The Senate bill is the work of senators Richard Durbin, Mike Lee and Ron Wyden, while the House equivalent comes from representatives Kevin Kiley and Jamie Raskin.

Sponsors characterize the bill as vital to protecting First Amendment press freedoms. Anonymous source leaks help keep the government accountable, Wyden says. He adds that surveillance like this can deter reporters and sources worried about retaliation. Lee, meanwhile, says the Act will also maintain the public's "right to access information" and help it participate in a representative democracy.

The senators point to instances from both Democratic and Republican administrations where law enforcement subpoenaed data in a bid to catch sources. Most notably, the Justice Department under Trump is known to have seized call records and email logs from major media outlets like CNN and The New York Times following an April 2017 report on how former FBI director James Comey handled investigations during the 2016 presidential election.

Journalist shield laws exist in 48 states and the District of Columbia, but there's no federal law. That void lets the Justice Department and other government bodies quietly grab data from telecoms and other providers. The PRESS Act theoretically patches that hole and minimizes the chances of abuse.

There's no guarantee the PRESS Act will reach President Biden's desk and become law. However, both congressional camps are betting that bipartisan support will help. The House version passed "unanimously" in the previous session of Congress, Wyden's office says.

This article originally appeared on Engadget at https://www.engadget.com/house-and-senate-bills-aim-to-protect-journalists-data-from-government-surveillance-192907280.html?src=rss

Lawmakers seek ‘blue-ribbon commission’ to study impacts of AI tools

The wheels of government have finally begun to turn on the issue of generative AI regulation. US Representatives Ted Lieu (D-CA) and Ken Buck (R-CO) introduced legislation on Monday that would establish a 20-person commission to study ways to “mitigate the risks and possible harms” of AI while “protecting” America's position as a global technology power. 

The bill would require the Executive branch to appoint experts from throughout government, academia and industry to conduct the study over the course of two years, producing three reports during that period. The president would appoint eight members of the committee, while Congress, in an effort "to ensure bipartisanship," would split the remaining 12 positions evenly between the two parties (thereby ensuring the entire process devolves into a partisan circus).

"[Generative AI] can be disruptive to society, from the arts to medicine to architecture to so many different fields, and it could also potentially harm us and that's why I think we need to take a somewhat different approach,” Lieu told the Washington Post. He views the commission as a way to give lawmakers — the same folks routinely befuddled by TikTok — a bit of "breathing room" in understanding how the cutting-edge technology functions.

Senator Brian Schatz (D-HI) plans to introduce the bill's upper house counterpart, Lieu's team told WaPo, though no timeline for that happening was provided. Lieu also noted that Congress as a whole would do well to avoid trying to pass major legislation on the subject until the commission has had its time. “I just think we need some experts to inform us and just have a little bit of time pass before we put something massive into law,” Lieu said.

Of course, that would push the passage of any sort of meaningful congressional regulation on generative AI out to 2027, at the very earliest, rather than right now, when we actually need it. Given how rapidly both the technology and the use cases for it have evolved in just the last six months, this study will have its work cut out just keeping pace with the changes, much less convincing the octogenarians running our nation of the potential dangers AI poses to our democracy.

This article originally appeared on Engadget at https://www.engadget.com/lawmakers-seek-blue-ribbon-commission-to-study-impacts-of-ai-tools-152550502.html?src=rss

Senators reintroduce COPPA 2.0 bill to tighten child safety online

Yet more senators are trying to resurrect legislation aimed at protecting kids' online privacy. Senators Bill Cassidy and Ed Markey have reintroduced a "COPPA 2.0" (Children and Teens' Online Privacy Protection Act) bill that would expand and revise the 1998 law to deal with the modern internet, particularly social media.

COPPA 2.0 would bar companies from gathering personal data from teens aged 13 to 16 without their consent. It would ban all targeted advertising to children and teens, and create a "bill of rights" that limits personal info gathering for marketing purposes. The measure would also require a button to let kids and parents delete personal data when it's "technologically feasible."

The sequel potentially makes it easier to take action in the first place. Where COPPA requires direct knowledge that companies are collecting data from kids under 13, 2.0 would cover apps and services that are "reasonably likely" to have children as users. The Federal Trade Commission, meanwhile, would have to establish a division committed to regulating youth marketing and privacy.

Cassidy and Markey portray the bill as necessary to tackle a "mental health crisis" where tech giants allegedly play a role. The politicians argue that social networks amplify teens' negative feelings, pointing to Facebook's own research as evidence.

Social networks have tried to clamp down on misuses of child data. Meta's Facebook and Instagram have limited ad targeting for teens, for instance. However, there have also been concerns that online platforms haven't gone far enough. On top of earlier calls for bans on ad targeting, states like Arkansas and Utah have already passed laws respectively requiring age verification and parental permission for social media. Another Senate bill, the Protecting Kids on Social Media Act, would require parents' approval across the US.

Whether or not COPPA 2.0 makes it to the President's desk for signature isn't clear. The first attempt got stuck in committee ahead of the current Congress session. It also comes right as other senators are making attempts to revive the EARN IT Act (aimed at curbing child sexual abuse material) and the Kids Online Safety Act (meant to fight toxic online content as a whole). All three reintroductions are bipartisan, but they'll need considerably stronger support in the Senate, plus successful equivalents in the House, to become law.

This article originally appeared on Engadget at https://www.engadget.com/senators-reintroduce-coppa-20-bill-to-tighten-child-safety-online-165043087.html?src=rss

House bill would demand disclosure of AI-generated content in political ads

At least one politician wants more transparency in the wake of an AI-generated attack ad. New York Democratic Representative Yvette Clarke has introduced a bill, the REAL Political Ads Act, that would require political ads to disclose the use of generative AI through conspicuous audio or text. The amendment to the Federal Election Campaign Act would also have the Federal Election Commission (FEC) create regulations to enforce this, although the measure would take effect January 1st, 2024 regardless of whether rules are in place.

The proposed law would help fight misinformation. Clarke characterizes this as an urgent matter ahead of the 2024 election — generative AI can "manipulate and deceive people on a large scale," the representative says. She believes unchecked use could have a "devastating" effect on elections and national security, and that laws haven't kept up with the technology.

The bill comes just days after Republicans used AI-generated visuals in a political ad speculating what might happen during a second term for President Biden. The ad does include a faint disclaimer that it's "built entirely with AI imagery," but there's a concern that future advertisers might skip disclaimers entirely or lie about past events.

Politicians already hope to regulate AI. California's Rep. Ted Lieu put forward a measure that would regulate AI use on a broader scale, while the National Telecommunications and Information Administration (NTIA) is asking for public input on potential AI accountability rules. Clarke's bill is more targeted and clearly meant to pass quickly.

Whether or not it does isn't certain. The act has to pass a vote in a Republican-led House, and the Senate would have to develop and pass an equivalent bill before the two bodies of Congress reconcile their work and send a law to the President's desk. Success also wouldn't prevent unofficial attempts to fool voters, but it might discourage politicians and action committees from using AI deceptively in their campaigns.

This article originally appeared on Engadget at https://www.engadget.com/house-bill-would-demand-disclosure-of-ai-generated-content-in-political-ads-190524733.html?src=rss

The EARN IT Act will be introduced to Congress for the third time

The controversial EARN IT Act, first introduced in 2020, is returning to Congress after failing twice to land on the president’s desk. The Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act is intended to minimize the proliferation of Child Sexual Abuse Material (CSAM) throughout the web, but detractors say it goes too far and risks further eroding online privacy protections.

Here's how it would work, according to the language of the bill's reintroduction last year. Upon passing, EARN IT would create a national commission composed of politically-appointed law enforcement specialists. This body would be tasked with making a list of best practices to ostensibly curb the digital distribution of CSAM. If online service providers do not abide by these best practices, they would potentially lose blanket immunity under Section 230 of the Communications Decency Act, opening them up to all kinds of legal hurdles — including civil lawsuits and criminal charges.

Detractors say EARN IT places a whole lot of power to regulate the internet in the hands of the commission the bill would create, as well as state legislatures. Language in last year's bill also suggests that these guidelines would likely extend to encrypted information, so if an encrypted transmission runs afoul of any guidelines, the platform is on the hook. That would force providers to monitor encrypted communications, which goes against the whole point of encryption in the first place. And since end-to-end encryption is designed so that not even the platform can read the contents, providers might not be able to offer those protections at all.

“This was a dangerous bill two years ago, and because it’s doubled down on its anti-encryption stance, it’s even more dangerous now,” The Center for Internet and Society at Stanford Law School wrote in a blog post last year, a stance also mirrored by the Center for Democracy and Technology. The American Civil Liberties Union, pushing back on a prior version of the bill, said that it "threatens our online speech and privacy rights in ways that will disproportionately harm LGBTQ people, sex workers and others who use the internet to privately communicate and share information and resources."

The Rape, Abuse & Incest National Network (RAINN) has come out in defense of the bill, saying that it will “incentivize technology companies to proactively search for and remove” CSAM materials. “Tech companies have the technology to detect, remove, and stop the distribution of child sexual abuse material. However, there is no incentive to do so because they are subject to no consequences for their inaction,” wrote Erin Earp, RAINN’s interim vice president for public policy.

The bipartisan Senate bills have consistently been introduced by Republican Senator Lindsey Graham and Democratic Senator Richard Blumenthal, and their companion bills in the House likewise have been sponsored by Republican Representative Ann Wagner and Democratic Representative Sylvia Garcia. The full text of H.R.2732 is not publicly available yet, so it's unclear if anything has changed since last year's attempt, though when reintroduced last year it was more of the same. (We've reached out to the offices of Reps. Wagner and Garcia for a copy of the bill's text.) A member of Senator Graham's office confirmed to Engadget that the companion bill will be introduced within the next week. It also remains to be seen if and when this will come up for a vote. Both prior versions of EARN IT died in committee before ever coming to a vote.

This article originally appeared on Engadget at https://www.engadget.com/the-earn-it-act-will-be-introduced-to-congress-for-the-third-time-192619083.html?src=rss

Legislation to ban government use of facial recognition hits Senate for the third time

Biometric technology may make it easy to unlock your phone, but Democratic lawmakers have long cautioned against the use of facial recognition and biometrics by law enforcement. Not only have researchers documented instances of racial and gender bias in such systems, false positives have even led to real instances of wrongful arrest. That's why lawmakers have reintroduced the Facial Recognition and Biometric Technology Act. This marks the third time the bill has been introduced in the Senate; versions introduced in 2020 and 2021 never advanced to a vote.

If passed, the Facial Recognition and Biometric Technology Act would outright ban any use of facial recognition or biometric surveillance by the federal government unless that use is explicitly approved by an Act of Congress. That approval itself would be pretty limited: It would need to define who was allowed to use biometric surveillance, the exact type of biometric surveillance they would be using and the specific purpose it would be used for. Approval would also have the burden of further restrictions, such as adhering to minimum accuracy rates that would hopefully avoid false positives in the rare instances when use of the technology is approved.

The bill also hopes to encourage local and state governments to follow its lead, including a clause that would tie some federal funding for local law enforcement to complying with a "substantially similar" ban on facial recognition and biometrics.

While the bill hasn't had much luck making it to the floor of either chamber of Congress, some states and local governments have been banning facial recognition technology on their own. In 2020, Portland, Oregon put strict guardrails on the use of facial recognition technology. New York State and Massachusetts have also put restrictions on the use of biometrics. Even the IRS walked back plans to use facial recognition for identity verification purposes.

That sounds encouraging for the reintroduced bill, but that momentum isn't universal: Law enforcement still sees biometrics as a useful tool for investigating crime, and the TSA has been testing systems that compare travelers to the photo on their passport or driver's license.

This article originally appeared on Engadget at https://www.engadget.com/legislation-to-ban-government-use-of-facial-recognition-hits-senate-for-the-third-time-194547733.html?src=rss
