YouTube changes misinformation policy to allow videos falsely claiming fraud in the 2020 US election

In a Friday afternoon news dump, YouTube inexplicably announced today that 2020 election denialism is a-okay. The company says it “carefully deliberated this change” without offering any specifics on its reasons for the about-face. YouTube initially banned content disputing the results of the 2020 election in December of that year.

In a feeble attempt to explain its decision (first reported by Axios), YouTube wrote that it “recognized it was time to reevaluate the effects of this policy in today's changed landscape. In the current environment, we find that while removing this content does curb some misinformation, it could also have the unintended effect of curtailing political speech without meaningfully reducing the risk of violence or other real-world harm. With that in mind, and with 2024 campaigns well underway, we will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”

Misinformation and disinformation are harmful on a societal level. They lure people into a false-reality bubble of “alternative facts” where the despots are the “good guys” and those supporting democracy are corrupt or untrustworthy. Failing that, they can leave people too confused to know what is and isn’t real; that type of gaslighting is nearly as beneficial to authoritarian movements as drawing in rabid supporters.

The change comes as 2024 Republican front-runner Donald Trump and others continue to spread false claims about the results of the 2020 election. In addition to misleading voters, bogus statements about election integrity can also lead to the adoption of laws making it harder for people to vote: essentially voter-suppression legislation passed under the guise of “election security.”

If YouTube found some data that somehow reveals the dissemination of election denialism isn’t harmful after all, it would seem appropriate for the company to reveal that. But short of that, all we have is YouTube’s claim that it “carefully deliberated” this move.

This article originally appeared on Engadget at https://www.engadget.com/youtube-changes-misinformation-policy-to-allow-videos-falsely-claiming-fraud-in-the-2020-us-election-184319851.html?src=rss

House bill would demand disclosure of AI-generated content in political ads

At least one politician wants more transparency in the wake of an AI-generated attack ad. New York Democratic Representative Yvette Clarke has introduced a bill, the REAL Political Ads Act, that would require political ads to disclose the use of generative AI through conspicuous audio or text. The amendment to the Federal Election Campaign Act would also have the Federal Election Commission (FEC) create regulations to enforce this, although the measure would take effect January 1st, 2024 regardless of whether or not rules are in place.

The proposed law would help fight misinformation. Clarke characterizes this as an urgent matter ahead of the 2024 election — generative AI can "manipulate and deceive people on a large scale," the representative says. She believes unchecked use could have a "devastating" effect on elections and national security, and that laws haven't kept up with the technology.

The bill comes just days after Republicans used AI-generated visuals in a political ad speculating what might happen during a second term for President Biden. The ad does include a faint disclaimer that it's "built entirely with AI imagery," but there's a concern that future advertisers might skip disclaimers entirely or lie about past events.

Politicians already hope to regulate AI. California's Rep. Ted Lieu put forward a measure that would regulate AI use on a broader scale, while the National Telecommunications and Information Administration (NTIA) is asking for public input on potential AI accountability rules. Clarke's bill is more targeted and clearly meant to pass quickly.

Whether or not it does isn't certain. The act has to pass a vote in a Republican-led House, and the Senate would need to develop and pass an equivalent bill before the two bodies of Congress reconcile their work and send a law to the President's desk. Success also won't prevent unofficial attempts to fool voters, but it might discourage politicians and action committees from using AI to deceive the public.

This article originally appeared on Engadget at https://www.engadget.com/house-bill-would-demand-disclosure-of-ai-generated-content-in-political-ads-190524733.html?src=rss

Court rules that Uber and Lyft can keep treating drivers as contractors in California

Uber and Lyft don't have to worry about reclassifying their drivers in California for now. An appeals court has just ruled that gig workers, such as rideshare drivers, can continue to be classified as independent contractors under Proposition 22.

If you'll recall, California passed Assembly Bill 5 (AB5) in September 2019, which legally obligates companies to treat their gig workers as full-time employees. That means providing them with all the appropriate benefits and protections, such as paying for their unemployment and health insurance. In response, Uber, Lyft, Instacart and DoorDash poured over $220 million into campaigning for the Prop 22 ballot measure, which would allow them to treat app-based workers as independent contractors. It ended up passing by a wide margin in the state.

In 2021, a group of critics that included the Service Employees International Union and the SEIU California State Council filed a lawsuit to overturn the proposition. The judge in charge of the case sided with them and called Prop 22 unconstitutional. He said back then that the proposition illegally "limits the power of a future legislature to define app-based drivers as workers subject to workers' compensation law."

The three appeals court judges have now overturned that ruling, though according to The New York Times, one of them wanted to throw out Prop 22 entirely for the same reason the lower court judge gave when he handed down his decision. While the appeals court upheld the policy in the end, it ordered that a clause that makes it hard for workers in the state to unionize be severed from the rest of the proposition. That particular clause required a seven-eighths majority vote from the California legislature to be able to amend workers' rights to collective bargaining. 

David Huerta, the president of the Service Employees International Union in California, told The Times in a statement: "Every California voter should be concerned about corporations’ growing influence in our democracy and their ability to spend millions of dollars to deceive voters and buy themselves laws." The group is now expected to appeal this ruling and take its fight to the Supreme Court, which could take months to decide whether to hear the case.

This article originally appeared on Engadget at https://www.engadget.com/court-rules-uber-lyft-keep-contractors-classification-drivers-california-054040457.html?src=rss

Trump has reportedly asked Meta to reinstate his Facebook account

Former President Donald Trump has reportedly petitioned Meta to restore his Facebook account. According to NBC News, the Trump campaign sent a letter to the company on Tuesday, pushing for a meeting to discuss his “prompt reinstatement to the platform.” Facebook banned Trump in the aftermath of the January 6th, 2021 Capitol riot. At first, the suspension was set to last 24 hours, but the company made the ban indefinite less than a day later. In June 2021, following a recommendation from the Oversight Board, Meta said it would revisit the suspension after two years and “evaluate” the “risk to public safety” to determine if Trump should get his account back.

Meta did not immediately respond to Engadget’s request for comment. The company told NBC News it would announce a decision “in the coming weeks in line with the process we laid out.” In 2021, Meta signaled Trump’s ban wouldn’t last forever. “When the suspension is eventually lifted, Mr Trump’s account will be subject to new enhanced penalties if he violates our policies, up to and including permanent removal of his accounts,” Nick Clegg, Meta’s president of global affairs, said at the time.

The letter is likely a bid by Trump to regain control of his Facebook account ahead of the 2024 presidential election. Trump has more than 34 million Facebook followers, and the platform was critical to his 2016 run. According to a Bloomberg report published after the election, the Trump campaign ran 5.9 million different versions of ads to test the ones that got the most engagement from the company’s users. Meta subsequently put a limit on high-volume advertising. One Trump Organization employee told NBC News that change prevented Trump’s 2020 campaign from using Facebook the way it did in 2016.

YouTube is still battling 2020 election misinformation as it prepares for the midterms

YouTube and Google are the latest platforms to share more about how they are preparing for the upcoming midterm elections, and the flood of misinformation that will come with them.

For YouTube, much of that strategy hinges on continuing to counter misinformation about the 2020 presidential election. The company’s election misinformation policies already prohibit videos that allege “widespread fraud, errors, or glitches” occurred in any previous presidential election. In a new blog post about its preparations for the midterms, the company says it has already removed “a number of videos related to the midterms” for breaking these rules and has temporarily suspended other channels over similar violations.

The update comes as YouTube continues to face scrutiny for its handling of the 2020 election, and whether its recommendations pushed some people toward election fraud videos. (Of note, the Journal of Online Trust and Safety published a study on the topic today.)

In addition to taking down videos, YouTube also says it will launch “an educational media literacy campaign” aimed at educating viewers about “manipulation tactics used to spread misinformation.” The campaign will launch in the United States first, and will cover topics like “using emotional language” and “cherry picking information,” according to the company.

[Image: Google will highlight local news sources in search results related to the midterms. Credit: Google]

Both Google and YouTube will promote authoritative election information in their services, including in search results. Before the midterms, YouTube will link to information about how to vote, and on Election Day, videos related to the midterms will link to “timely context around election results.” Similarly, Google will surface election results directly in search, as it has done in previous elections.

The company is also trying to make it easier to find details about local and regional races. Beginning in “the coming weeks,” Google will highlight local news sources from different states in election-related searches.

Meta’s anti-misinformation strategy for the 2022 midterms is mostly a repeat of 2020

Meta has outlined its strategy for combating misinformation during the 2022 US midterm elections, and it will mostly sound familiar if you remember the company's 2020 approach. The Facebook and Instagram owner said it will maintain policies and protections "consistent" with the presidential election, including policies barring vote misinformation and linking people to trustworthy information. It will once again ban political ads during the last week of the election campaign. This isn't quite a carbon copy, however, as Meta is fine-tuning its methods in response to lessons learned two years ago.

To start, Meta is "elevating" post comments from local elections officials to make sure reliable polling information surfaces in conversations. The company is also acknowledging concerns that it used info labels too often in 2020 — for the 2022 midterms, it's planning to show labels in a "targeted and strategic way."

Meta's update comes just days after Twitter detailed its midterm strategy, and echoes the philosophy of its social media rival. Both are betting that their 2020 measures were largely adequate, and that it's just a question of refining those systems for 2022.

Whether or not that's true is another matter. In a March 2021 study, advocacy group Avaaz said Meta didn't do enough to stem the flow of misinformation and allowed billions of views for known false content. Whistleblower Frances Haugen also maintains that Meta has generally struggled to fight bogus claims, and it's no secret that Meta had to extend its ban on political ads after the 2020 vote. Facebook didn't catch some false Brazilian election ads, according to Global Witness. Meta won't necessarily deal with serious problems during the midterms, but it's not guaranteed a smooth ride.