Meta’s Oversight Board separates death threats and ‘aspirational statements’ in Venezuela

Meta’s Oversight Board has weighed in on the company’s content moderation policies in Venezuela amid violent crackdowns and widespread protests following the country’s disputed presidential election. In its decision, the board said that Facebook users posting about the state-supported armed groups known as “colectivos” should have more leeway in making statements like “kill those damn colectivos.”

The company asked the Oversight Board for guidance on the issue last month, noting that its moderators had seen an “influx” of “anti-colectivos content” in the wake of the election. Meta specifically asked for the board’s input on two posts: an Instagram post with the words “Go to hell! I hope they kill you all!” that Meta says was directed at the colectivos, and a Facebook post criticizing Venezuela’s security forces that said “kill those damn colectivos.”

The Oversight Board said that neither post violated Meta’s rules around calls for violence and that both should be interpreted as “aspirational statements” from citizens of a country where state-supported violence has threatened free expression. “The targets of aspirational violence are state-backed forces that have contributed to the longstanding repression of civic space and other human rights violations in Venezuela, including in the present post-election crisis,” the board wrote in its decision. “By contrast, the civilian population has largely been the target of human rights abuses.”

The Oversight Board also criticized Meta’s practice of making political content less visible across its services. “The Board is also deeply concerned that in the context of Venezuela, the company’s policy to reduce the distribution of political content could undermine the ability of users expressing political dissent and raising awareness about the situation in Venezuela to reach the widest possible audience.” It recommended that Meta adapt its policies “to ensure that political content, especially around elections and post-electoral protests, is eligible for the same reach as non-political content” during times of crisis.

The case isn’t the first time the board has waded into the debate over the role of political content on Meta’s apps. Earlier this year, the board accepted its first case involving a post on Threads, and it is expected to use that case to weigh in on Meta’s controversial decision to limit recommendations of political posts on the service. The board has yet to publish its decision.

US charges Russian state media employees over a social media influence scheme

The Department of Justice (DOJ) has indicted two employees of the Russian state-owned broadcaster RT over an alleged pro-Russia influence scheme on social media platforms. Kostiantyn Kalashnikov and Elena Afanasyeva are accused of involvement in a plan to pay an unnamed Tennessee company almost $10 million to spread nearly 2,000 English-language videos (most of which included disinformation and/or pro-Russia propaganda) across YouTube, TikTok, Instagram and X. The DOJ says the videos had been viewed more than 16 million times on YouTube alone.

Attorney General Merrick Garland said at a press conference that, following Russia's invasion of Ukraine, "RT’s editor-in-chief said the company had built an 'entire empire of covert projects' designed to shape public opinion in 'Western audiences.'" As part of that goal, RT and employees (including the two defendants) "implemented a nearly $10 million scheme to fund and direct a Tennessee-based company to publish and disseminate content deemed favorable to the Russian government."

"To implement this scheme, the defendants directed the company to contract with US-based social media influencers to share this content and their platforms. The subject matter and content of many of the videos published by the company were often consistent with Russia's interest in amplifying US domestic divisions in order to weaken US opposition to core Russian interests, particularly its ongoing war in Ukraine," Garland said.

The Tennessee company didn't inform the influencers or their millions of followers of its links to the Russian government, Garland added. It instead claimed to be sponsored by a fictitious "private investor," according to the DOJ. 

Kalashnikov and Afanasyeva have been charged with conspiracy to violate the Foreign Agents Registration Act (FARA) and conspiracy to commit money laundering. Both are at large. However, the charges do not signal the end of the case; Garland pointed out that the investigation is ongoing.

The DOJ unsealed the indictment amid a broader push by the government to clamp down on Russian propaganda and disinformation ahead of November's general election. In a separate action, the DOJ seized 32 websites "that the Russian government and the Russian-sponsored actors have used to engage in a covert campaign to interfere and influence the outcome of our country's elections," Garland said.

The campaign, which Russia is said to have called "Doppelganger," included the creation of websites that "were designed to appear to American readers as if they were major US news sites, like The Washington Post or Fox News, but, in fact, they were fake sites," Garland said. "They were filled with Russian government propaganda that had been created by the Kremlin to reduce international support for Ukraine, bolster pro-Russian policies and interests and influence voters in the United States and in other countries."

Meanwhile, the Treasury and State departments announced parallel actions. The Treasury Department sanctioned ANO Dialog, a Russian nonprofit said to help orchestrate the Doppelganger campaign, along with RT editor-in-chief Margarita Simonyan and other RT employees.

The State Department sanctioned RT and four other state-funded publishers. It is also offering a $10 million reward for information regarding foreign interference in a US election.

After this story was originally published, CNN reported that the unnamed company that the Russian operatives were paying to spread disinformation was Tennessee-based Tenet Media, a company known for employing far-right commentators including Tim Pool and Benny Johnson, who have millions of subscribers on YouTube. As of now, there's no official confirmation from the government to verify CNN's report.

Update, September 5, 2024, 10:05AM ET: This story has been updated to include CNN's report on Tenet Media being involved in the investigation.

Meta’s Oversight Board says phrase ‘From the River to the Sea’ should not be banned

A new ruling from Meta’s Oversight Board regarding the use of the phrase “From the River to the Sea” found that the phrase does not violate Meta’s policies on hate speech, violence and incitement, or dangerous organizations and individuals. The board also said in its ruling that the three flagged cases that used the phrase highlight the need for greater access to Meta’s Content Library for qualified researchers, civil society groups and journalists who previously had access to CrowdTangle.

The ruling looked at three pieces of Facebook content containing “From the River to the Sea,” a phrase considered by many to be pro-Palestinian and one that refers to the stretch of land between the Jordan River and the Mediterranean Sea. The rallying cry is politically charged, with different interpretations and meanings. Critics of the phrase like the Anti-Defamation League call it an “anti-semitic slogan commonly featured in anti-Israel campaigns.” Others, like US Rep. Rashida Tlaib, whom the House censured last year for using the phrase in statements about the Israel-Gaza war, have called it “an aspirational call for freedom, human rights and peaceful coexistence, not death, destruction or hate,” according to the New York Times.

The Oversight Board ruled that the phrase, standing alone, is not a call for violence against a group of people, a call for the exclusion of a particular group or a blanket statement of support for Hamas. The board also said it’s “vital” that Meta assess the context surrounding the phrase when reviewing content from its users.

“Because the phrase does not have a single meaning, a blanket ban on content that includes the phrase, a default rule towards removal of such content, or even using it as a signal to trigger enforcement or review, would hinder protected political speech in unacceptable ways,” the ruling reads.

The board also raised concerns about Meta’s decision to shut down the CrowdTangle data analysis tool in August and called for greater transparency around its replacement. CrowdTangle was a free research tool used by news outlets, researchers and other groups to track the dissemination of information on platforms like Facebook and Instagram.

Meta replaced the tool with the Meta Content Library, a much more tightly controlled data examination system with stricter access rules. The Content Library restricts access to those who work with “a qualified academic institution or a qualified research institution” committed to “a not-for-profit endeavor,” according to Facebook’s guidelines.

The Oversight Board recommended that Meta onboard qualified researchers, groups and journalists to the Content Library within three weeks of their submitting an application. The board also recommended that Meta “ensure its Content Library is a suitable replacement for CrowdTangle,” according to the ruling.

Google is rolling out more election-related safeguards in YouTube, search and AI

As the US speeds toward one of the most consequential elections in its 248-year history, Google is rolling out safeguards to ensure users get reliable information. In addition to the measures it announced late last year, the company said on Friday that it’s adding election-related guardrails to YouTube, Search, Google Play and AI products.

YouTube will add information panels above the search results for at least some federal election candidates. The modules, likely similar to those you see when searching the web for prominent figures, will include the candidates’ basic details like their political party and a link to Google Search for more info. The company says the panels may also include a link to the person’s official website (or other channel). As Election Day (November 5) approaches, YouTube’s homepage will also show reminders on where and how to vote.

Google Search will include aggregated voter registration resources from state election offices for all users. Google is sourcing that data through a partnership with Democracy Works, a nonpartisan nonprofit that works with various companies and organizations “to help voters whenever and wherever they need it.”

Meanwhile, the Google Play Store will add a new badge that indicates an app is from an official government agency. The company outlines its requirements for apps that “communicate government information” in a developer help document. Approved applications that have submitted the required forms are eligible for the “official endorsement signified by a clear visual treatment on the Play Store.”

As for generative AI, which can be prone to hallucinations that would make Jerry Garcia blush, Google is expanding its election-related restrictions, which were announced late last year. They’ll include disclosures for ads created or generated using AI, content labels for generated content and embedded SynthID digital watermarking for AI-made text, audio, images and video. Initially described as being for Gemini (apps and on the web), the election guardrails will apply to Search AI Overviews, YouTube AI-generated summaries for Live Chat, Gems (custom chatbots with user-created instructions) and Gemini image generation.

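Google hasn’t published the full technical details of SynthID, but the statistical idea behind this kind of text watermarking is well known and can be sketched briefly. The toy example below illustrates a generic “keyed green-list” approach, in which generation secretly favors a keyed subset of the vocabulary and a detector that knows the key tests for that skew. It is a minimal sketch of the general technique, not Google’s implementation; the key, vocabulary and all function names are hypothetical.

```python
import hashlib
import random

# Toy sketch of statistical text watermarking, illustrating the broad idea
# behind tools like SynthID-Text: a generator secretly favors a keyed
# "green" subset of the vocabulary, and a detector that knows the key
# counts green tokens to test whether text carries the watermark.
# This is NOT Google's algorithm; every name and parameter here is a
# hypothetical stand-in for illustration only.

KEY = b"hypothetical-secret-key"
VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary

def is_green(prev_token: str, token: str) -> bool:
    """Keyed hash of (previous token, candidate) decides green vs. red."""
    digest = hashlib.sha256(KEY + prev_token.encode() + token.encode()).digest()
    return digest[0] % 2 == 0  # roughly half the vocab is green in any context

def generate(n_tokens: int, bias: float = 0.8) -> list[str]:
    """Sample tokens, preferring green ones with probability `bias`."""
    rng = random.Random(42)
    out, prev = [], "<s>"
    for _ in range(n_tokens):
        candidates = rng.sample(VOCAB, 10)  # stand-in for a model's top-k
        greens = [t for t in candidates if is_green(prev, t)]
        tok = rng.choice(greens) if greens and rng.random() < bias else rng.choice(candidates)
        out.append(tok)
        prev = tok
    return out

def green_fraction(tokens: list[str]) -> float:
    """Detector: ~0.5 for ordinary text, well above 0.5 if watermarked."""
    hits = sum(is_green(p, t) for p, t in zip(["<s>"] + tokens, tokens))
    return hits / len(tokens)

if __name__ == "__main__":
    rng = random.Random(7)
    marked = generate(500)
    unmarked = [rng.choice(VOCAB) for _ in range(500)]
    print(f"watermarked:   {green_fraction(marked):.2f}")    # roughly 0.90
    print(f"unwatermarked: {green_fraction(unmarked):.2f}")  # roughly 0.50
```

The notable property of this family of schemes is that the detector needs only the secret key, not the model that produced the text, which is part of what makes after-the-fact labeling of AI-generated content practical.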

X labeled an unflattering NPR story about Donald Trump as ‘unsafe’

X briefly discouraged users from viewing a link to an NPR story about Donald Trump's recent visit to Arlington National Cemetery, raising questions about whether the Elon Musk-owned platform is putting its thumb on the scale for the former president.

On Thursday, NPR reporter Stephen Fowler posted a link to a story in which he quoted an Army official who said that an employee at Arlington National Cemetery was “abruptly pushed aside” during an event attended by Trump and members of his campaign earlier this week. The outlet had previously reported that there was a “physical altercation” at the event with campaign staff over federal laws barring campaign activities at the cemetery.

Some users on X who attempted to click a link to the story were greeted with a warning message saying that X deemed “this link may be unsafe.” The message stated that the link could be malicious, violent, spammy or otherwise in violation of the platform’s rules, but didn’t explain why it was flagged. Fowler posted a thread on X in which each post contained a link to his story; the warning appeared on the first two instances of the link but not the others, for reasons unknown. It’s highly unusual for such a warning to appear before a link to a mainstream website. Other links to NPR, as well as other coverage of Trump’s visit to Arlington, don’t appear to have such a label.

In a statement to an NPR reporter, an X spokesperson claimed the warning appeared due to a "false positive" and that it had been corrected. The company didn't explain further.

Notably, Musk has been a vocal supporter of Trump this election, and recently held a lengthy live streamed conversation with him on X. Musk has also publicly feuded with NPR in the past, adding a “state affiliated media” label to its account for several months last year. NPR hasn’t posted from its main account on X since the label was added last April.

Update August 29, 2024, 2:35 PM ET: This story was updated to include additional details from an X spokesperson and to indicate that the link is no longer labeled as "unsafe."

X’s Grok chatbot now directs election queries to Vote.gov

Misinformation is all over the internet, including on the at-times chaotic X (formerly Twitter), and AI bots have a habit of adding to it. Now, with barely two months left until the presidential election, an update to Grok, X's premium chatbot, could curb some of it, after the bot was called out for spreading election misinformation. Grok will now direct anyone with an election-related query to Vote.gov, a non-partisan website operated through a partnership between the US government, the US Election Assistance Commission and the Cybersecurity and Infrastructure Security Agency.

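xAI hasn't said how the redirect works under the hood. As a rough illustration of the general pattern, here is a minimal sketch of a pre-generation guardrail that intercepts election-related queries and returns a canned referral instead of a model-generated answer; the regex, wording and function names are assumptions for illustration, not xAI's actual code.

```python
import re

# Minimal sketch of a chatbot guardrail in the pattern described above:
# intercept election-related queries and answer with a pointer to an
# authoritative source instead of generated text. The regex, wording and
# function names are illustrative assumptions, not xAI's actual code.

ELECTION_QUERY = re.compile(
    r"\b(election|elections|ballot|vote|voting|voter|polling|primary|caucus)\b",
    re.IGNORECASE,
)

REFERRAL = (
    "For accurate, up-to-date information about voting in the US election, "
    "please visit https://vote.gov."
)

def run_model(query: str) -> str:
    """Stand-in for the underlying LLM call."""
    return f"(model-generated answer to {query!r})"

def answer(query: str) -> str:
    """Route election questions to the referral; everything else to the model."""
    if ELECTION_QUERY.search(query):
        return REFERRAL
    return run_model(query)

if __name__ == "__main__":
    print(answer("What is the ballot deadline in Minnesota?"))  # referral
    print(answer("Explain rocket staging."))                    # model answer
```

A production guardrail would likely use a trained classifier rather than a keyword list, since naive matching both over-triggers (a query about "primary school," for example) and is easy to evade, but the fail-safe shape is the same: the canned referral takes priority over generation.
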
The catalyst for change came on July 21, only hours after President Biden announced his decision not to seek reelection, when Grok falsely posted that the ballot deadline had passed in nine states, implying officials couldn't change the Democratic candidate. Minnesota Secretary of State Steve Simon had staff attempt to contact X about the error, to which they received the response, "Busy now, please check back later." Grok continued to share the false claim for ten days.

Secretary Simon joined the Michigan, New Mexico, Pennsylvania and Washington Secretaries of State — all states wrongly named by Grok — in writing an open letter to X and xAI CEO Elon Musk calling for Grok to direct any election queries to CanIVote.org, another non-partisan resource. They claimed Grok's response, though only available to X Premium and Premium+ subscribers, reached "millions of people" due to screenshots and shares. 

The letter also shamed Grok and xAI a bit further, explaining how its competitor, OpenAI, had teamed up with the National Association of Secretaries of State to provide accurate, up-to-date election information. It also mentioned that OpenAI's bot, ChatGPT, was already programmed to direct users to CanIVote.org if it received questions about the US election.

The update is a start. The bot has also created misleading images of the top party candidates. "We appreciate X's action to improve their platform and hope they continue to make improvements that will ensure their users have access to accurate information from trusted sources in this critical election year," the Secretaries of State said in response to the update. "Elections are a team effort, and we need and welcome any partners who are committed to ensuring free, fair, secure, and accurate elections." 

Meta took down WhatsApp accounts connected to Iranian hackers targeting the US election

Meta has blocked WhatsApp accounts involved in "a small cluster of likely social engineering activity" on the service. In its report, the company revealed that it traced the activity to APT42 (also called UNC788 and Mint Sandstorm), which the FBI previously linked to a phishing campaign that targeted members of the Trump and Harris camps. The company said the suspicious activity on WhatsApp "attempted to target individuals in Israel, Palestine, Iran, the United States and the UK," and that it appeared to focus on political and diplomatic officials, including people associated with both presidential candidates.

The bad actors on WhatsApp pretended to be technical support representatives from AOL, Google, Yahoo and Microsoft, though Meta didn't say how they tried to compromise their targets' accounts. Some of those targets reported the activity to the company, which compelled it to start an investigation. Meta said it believes the perpetrators' efforts were unsuccessful and that it has not seen any evidence that the targets' accounts had been compromised. It still reported the malicious activity to law enforcement, though, and shared information with both presidential campaigns. 

Earlier this month, Google also published a report detailing how APT42 has been targeting high-profile users in Israel and the US for years. The company said it observed "unsuccessful attempts" to compromise the "accounts of individuals affiliated with President Biden, Vice President Harris and former President Trump." While Google described APT42's attacks as "unsuccessful," the group had successfully infiltrated the account of at least one high-profile victim: Roger Stone, a close political confidant of Trump. The FBI previously reported that he had fallen victim to phishing emails sent by the Iranian hackers, who then used his account to send more phishing emails to his contacts.

DeepMind workers urge Google to drop military contracts

Google DeepMind workers have signed a letter calling on the company to drop its contracts with military organizations, according to a report by Time. The letter, drafted on May 16 of this year, was signed by around 200 people, or about five percent of DeepMind's total headcount.

For the uninitiated, DeepMind is one of Google’s AI divisions, and the letter states that taking on military contracts runs afoul of the company’s own AI rules. It was sent out as concerns circulated internally at the lab that its technology was allegedly being sold to military organizations via cloud contracts.

According to Time, Google’s contracts with the United States military and the Israeli military allow access to services via the cloud, and this reportedly includes AI technology developed by DeepMind. The letter doesn’t linger on any specific military organization, with workers emphasizing that it’s “not about the geopolitics of any particular conflict.” 

Reporting since 2021 has slowly revealed the scope of tech supplied by Google (and Amazon) to the Israeli government via a partnership known as Project Nimbus. This is far from the first instance of Google employees openly protesting their work being used to support politically fraught military aims — the company fired dozens of staffers who spoke out against Project Nimbus earlier this year.

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI principles,” the DeepMind letter says. It’s worth noting that Google’s slogan used to be “don’t be evil.”

The letter goes on to ask DeepMind’s leaders to deny military users access to its AI technology and to set up a new in-house governance body to prevent that technology from being used by militaries in the future. According to four unnamed employees, Google has yet to offer a tangible response to the letter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

Google did respond to Time’s reporting, saying that it complies with its AI principles. The company says that the contract with the Israeli government “is not directed at highly sensitive, classified or military workloads relevant to weapons or intelligence services.” However, its partnership with the Israeli government has fallen under plenty of scrutiny in recent months.

Google purchased DeepMind back in 2014, but under the promise that its AI technology would never be used for military or surveillance purposes. For many years, DeepMind was allowed to operate with a good amount of independence from its parent company, but the burgeoning AI race looks to have changed that. The lab's leaders spent years seeking greater autonomy from Google, but were rebuffed in 2021.

FCC fines telecoms operator $1 million for transmitting Biden deepfake

In January, calls using an AI-generated voice imitating President Biden instructed voters not to take part in the New Hampshire primary. Now, as the 2024 election nears, the Federal Communications Commission is sending a message by cracking down further on those responsible for the Biden deepfake. Lingo Telecom, which transmitted the fraudulent calls, will pay the FCC a $1 million civil penalty and must implement a compliance plan.

In response to the settlement, Enforcement Bureau Chief Loyaan A. Egal stated, "...the potential combination of the misuse of generative AI voice-cloning technology and caller ID spoofing over the U.S. communications network presents a significant threat. This settlement sends a strong message that communications service providers are the first line of defense against these threats and will be held accountable to ensure they do their part to protect the American public."

This step follows the FCC's proposed $6 million fine for Steven Kramer, the political consultant who directed the calls. The FCC alleges he also violated the Truth in Caller ID Act by spoofing a local politician's phone number. The enforcement action in Kramer's case is still pending. 

Google strikes a deal with California lawmakers to fund local news

Google has reached a deal with California lawmakers to fund local news in the state after previously protesting a proposed law that would have required it to pay media outlets. Under the terms of the deal, Google will commit tens of millions of dollars to a fund supporting local news as well as an AI “accelerator program” in the state.

The agreement ends a months-long dispute between lawmakers and Google over the California Journalism Preservation Act, a bill that would have required Google, Meta and other large platforms to pay California publishers in exchange for linking to their websites. Google strongly opposed the measure, which was similar to laws passed in Canada and Australia.

Earlier this year, Google began a “short-term test” in the state that removed links to local news for some users in California. The company also halted some of its own spending on local news in the state.

Now, under the new agreement, Google will direct “at least $55 million” to “a nonprofit public charity housed at UC Berkeley’s journalism school,” Politico reports. The university will distribute the fund, which also includes “at least $70 million” from the state of California. Google will also “commit $50 million over five years to unspecified ‘existing journalism programs.’”

The agreement also includes funding for a “National AI Innovation Accelerator.” Details of that program are unclear, but CalMatters reports that Google will dedicate “at least $17.5 million” to the effort, which will fund AI experiments for local businesses and other organizations, including newsrooms. That aspect of the deal, which is so far unique to Google's agreement in California, could prove more controversial, as it risks exacerbating existing tensions between publishers and AI companies.

In a statement, Alphabet’s President of Global Affairs, Kent Walker, credited the “thoughtful leadership” of California Governor Gavin Newsom and other state officials in reaching the agreement. “California lawmakers have worked with the tech and news sectors to develop a collaborative framework to accelerate AI innovation and support local and national businesses and nonprofit organizations,” he said. “This public-private partnership builds on our long history of working with journalism and the local news ecosystem in our home state, while developing a national center of excellence on AI policy.”
