Congress asks Mark Zuckerberg to explain why drug dealers are advertising on Facebook and Instagram

Nineteen members of Congress are pushing Mark Zuckerberg to explain why Meta has allowed ads for cocaine, ecstasy and other drugs to be shown on Facebook and Instagram. The letter comes after the Tech Transparency Project (TTP) uncovered hundreds of such ads on the company’s platforms.

The letter points to the TTP’s report last month, which used Meta’s ad library to find 450 Instagram and Facebook ads “selling an array of pharmaceutical and other drugs.” Many of those ads included “photos of prescription drug bottles, piles of pills and powders, or bricks of cocaine,” and directed viewers to outside apps like Telegram. Since then, the TTP has been posting additional examples of such ads on X, including one it found yesterday.

“Meta appears to have continued to shirk its social responsibility and defy its own community guidelines,” the lawmakers write in the letter, which is addressed directly to Zuckerberg. “What is particularly egregious about this instance is that this was not user generated content on the dark web or on private social media pages, but rather they were advertisements approved and monetized by Meta. Many of these ads contained blatant references to illegal drugs in their titles, descriptions, photos, and advertiser account names, which were easily found by the researchers and journalists at the Wall Street Journal and Tech Transparency Project using Meta’s Ad Library. However, they appear to have passed undetected or been ignored by Meta’s own internal processes.”

The letter requests details about Meta’s policies for enforcing rules against drug-related ads, as well as information about how many times the reported ads were viewed and interacted with. It gives Meta a deadline of September 6 to reply. A spokesperson for Meta said the company plans to respond to the letter and directed Engadget to a prior statement, published by The Wall Street Journal, in which the company said it rejects “hundreds of thousands of ads for violating our drug policies.”

This article originally appeared on Engadget at https://www.engadget.com/social-media/congress-asks-mark-zuckerberg-to-explain-why-drug-dealers-are-advertising-on-facebook-and-instagram-200541467.html?src=rss

A fake political group that recruited a real candidate in Montana got banned on Facebook

Meta’s latest round of account takedowns includes a fake political group that ran dozens of dummy accounts in an attempt to recruit Americans to run for office. The social network detailed the scheme in its latest report on coordinated inauthentic behavior on its platform.

According to Meta, the fake accounts, pages and Facebook groups were trying to prop up a fictitious political group called “Patriots Run Project,” which encouraged people to challenge Republican and Democratic “elites” by running for office. In all, Meta uncovered 124 Facebook accounts, pages and groups, as well as three Instagram accounts. The group primarily targeted people in Arizona, Michigan, Nevada, Ohio, Pennsylvania, Wisconsin and North Carolina, and spent $50,000 on Facebook ads.

The Institute for Strategic Dialogue, a nonprofit that researches disinformation and extremism, previously shared details about the Patriots Run Project and its Facebook presence. The group, the nonprofit said, “called for followers to run for office on a pro-Trump, anti-establishment platform focused on many of the same issues that motivate the right-wing movement: gun rights, border security, ‘traditional values’ and combatting election fraud.”

It’s not clear exactly who was behind the bizarre campaign. Meta said in its report that it “found links to individuals associated with a US-based on-platform entity called RT Group,” but didn’t elaborate. The company’s researchers noted the group was relatively adept at disguising itself. It used fake accounts “acquired” from Bangladesh and relied on proxies to make it appear as if its members lived in the states they targeted.

While Meta’s researchers said they were able to disrupt the group before it was able to establish a large audience on its platform, Politico has reported that the group was successful in recruiting one Montana man to run for Congress, though it’s unclear if he interacted with the group on Facebook. During a briefing with reporters, Meta noted that Patriots Run Project was also active on X and that its websites are still online.

The company’s researchers also shared more about what they are tracking ahead of the US presidential election. As with other recent elections, Russia-based groups are likely to target US audiences on Facebook, according to David Agranovich, Meta’s security policy director for threat disruption. “I think we should expect to see Russian attempts to target election-related debates, particularly when they touch on support for Ukraine,” Agranovich said. “We expect Russia-based campaigns to promote supportive commentary about candidates opposing aid to Ukraine, and criticize those who advocate for aiding Ukraine's defenses.”

This article originally appeared on Engadget at https://www.engadget.com/social-media/a-fake-political-group-that-recruited-a-real-candidate-in-montana-got-banned-on-facebook-150048558.html?src=rss

Meta killed CrowdTangle, an ‘invaluable’ research tool, because what it showed was inconvenient

It’s the end of an era for social media research. Meta has shut down CrowdTangle, the analytics tool that for years helped tens of thousands of researchers, journalists and civil society groups understand how information was spreading on Facebook and Instagram.

For a company that’s never been known for being transparent about its inner workings, CrowdTangle was an “invaluable” resource for those hoping to study Meta’s platform, says Brandi Geurkink, the executive director for the Coalition for Independent Technology Research. “It was one of the only windows that anybody had into how these platforms work,” Geurkink tells Engadget. “The fact that CrowdTangle was available for free and to such a wide variety of people working on public interest journalism and research means that it was just an invaluable tool.”

Over the years, CrowdTangle has powered a staggering amount of research and reporting on public health, misinformation, elections and media. Its data has been cited in thousands of journal articles, according to Google Scholar. News outlets have used the tool to track elections and changes in the publishing industry. It’s also provided unparalleled insight into Facebook itself. For years, CrowdTangle data has been used by journalists to track the origins of viral misinformation, hoaxes and conspiracy theories on the social network. Engadget relied on CrowdTangle to uncover the overwhelming amount of spam on Facebook Gaming.

Meta wasn't always quite as averse to transparency as it is now. The company acquired CrowdTangle in 2016, and for years encouraged journalists, researchers and other civil society groups to use its data. Facebook provided training to academics and newsrooms, and it regularly highlighted research projects that relied on its insights.

But the narrative began to shift in 2020. That’s when a New York Times reporter created an automated Twitter bot called “Facebook Top Ten.” It used CrowdTangle data to share the top Facebook pages based on engagement. At the time, right-wing figures and news outlets like Dan Bongino, Fox News and Ben Shapiro regularly dominated the lists. The Twitter account, which racked up tens of thousands of followers, was often cited in the long-simmering debate about whether Facebook’s algorithms exacerbated political polarization in the United States.

Meta repeatedly pushed back on those claims. Its executives argued that engagement — the number of times a post is liked, shared or commented on — is not an accurate representation of a post’s total reach on the social network. In 2021, Meta began publishing its own reports on the most “widely viewed” content on its platform. Those reports suggested that spam is often more prevalent than political content, though researchers have raised significant questions about how those conclusions were reached.

More recently, Meta executives have suggested that CrowdTangle was never intended for research. “It was built for a wholly different purpose,” Meta’s President of Global Affairs, Nick Clegg, said earlier this year. “It just simply doesn't tell you remotely what is going on on Facebook at any time.” CrowdTangle founder Brandon Silverman, who has criticized Meta’s decision to shut down the service ahead of global elections, told Fast Company it was originally meant to be a community organizing tool, but quickly morphed into a service “to help publishers understand the flow of information across Facebook and social media more broadly.”

Clegg’s explanation is a “retcon,” according to Alice Marwick, principal researcher at the Center for Information Technology and Public Life at the University of North Carolina. “We were trained on CrowdTangle by people who worked at Facebook,” Marwick tells Engadget. “They were very enthusiastic about academics using it.”

In place of CrowdTangle, Meta has offered up a new set of tools for researchers called the Meta Content Library. It allows researchers to access data about public posts on Facebook and Instagram. It’s also much more tightly controlled than CrowdTangle. Researchers must apply and go through a vetting process in order to access the data. And while tens of thousands of people had access to CrowdTangle, only “several hundred” researchers have reportedly been let into the Meta Content Library. Journalists are ineligible to even apply unless they are part of a nonprofit newsroom or partnered with a research institution.

Advocates for the research community, including CrowdTangle’s former CEO, have also raised questions about whether Meta Content Library is powerful enough to replicate CrowdTangle’s functionality. “I've had researchers anecdotally tell me [that] for searches that used to generate hundreds of results on CrowdTangle, there are fewer than 50 on Meta Content Library,” Geurkink says. “There's been a question about what data source Meta Content Library is actually pulling from.”

The fact that Meta chose to shut down CrowdTangle less than three months before the US presidential election, despite pressure from election groups and a letter from lawmakers requesting a delay, is particularly telling. Ahead of the 2020 election, CrowdTangle created a dedicated hub for monitoring election-related content and provided its tools to state election officials.

But Marwick notes there has been a broader backlash against research into social media platforms. X no longer has a free API, and has made its data prohibitively expensive for all but the most well-funded research institutions. The company’s owner has also sued two small nonprofits that conducted research he disagreed with.

“There is no upside to most of these platforms to letting researchers muck around in their data, because we often find things that aren't PR-friendly, that don't fit the image of the platform that they want us to believe.”

While CrowdTangle never offered a complete picture of what was happening on Facebook, it provided an important window into a social network used by billions of people around the world. That window has now been slammed shut. And while researchers and advocates are worried about the immediate impact that will have on this election cycle, the consequences are much bigger and more far-reaching. “The impact is far greater than just this year or just work related to elections,” Geurkink says. “When you think about a platform that large, with that much significance in terms of where people get their sources of information on a wide array of topics, the idea that nobody except for the company has insight into that, is crazy.”

This article originally appeared on Engadget at https://www.engadget.com/big-tech/meta-killed-crowdtangle-an-invaluable-research-tool-because-what-it-showed-was-inconvenient-121700584.html?src=rss

Instagram is failing to act on abuse targeting women lawmakers on both sides of the aisle

Instagram is failing to enforce its own rules and allowing some of its most high-profile accounts to be targeted with abusive comments “with impunity,” according to a new report from the Center for Countering Digital Hate. The anti-hate group claims that Meta failed to remove 93 percent of comments it reported to the company, including ones that contain racial slurs, violent threats and other disturbing language that would seem to clearly violate the social network’s rules.

CCDH’s researchers zeroed in on five Republican and five Democratic lawmakers who are up for election this year. The group included Vice President Kamala Harris, Representative Nancy Pelosi, Senator Elizabeth Warren, Representative Marjorie Taylor Greene, Senator Marsha Blackburn and Representative Lauren Boebert.

The researchers reported 1,000 comments that appeared on the lawmakers’ Instagram posts between January and June of this year and found that Meta took “no action” against the vast majority of those comments, with 926 of them still visible in the app one week after being reported. The reported content included comments with racial slurs and other racist language, calls for violence and other abuse.

“We're simulating the moment at which someone reaches out their hand asking for help, and actually, Instagram's failure to act on that compounds the harm done,” CCDH CEO Imran Ahmed said in a briefing about the report.

The CCDH also found that many of the abusive comments came from “repeat offenders,” which, according to Ahmed, has “created a culture of impunity” on the platform. The report comes less than three months before the US presidential election, and it notes that attacks targeting Harris, who is now campaigning for president, seem to have “intensified” since she took over the ticket. “Instagram failed to remove 97 out of 105 abusive comments targeting Vice President Kamala Harris, equivalent to a failure to act on 92% of abusive comments targeting her,” the report says. It notes that Instagram failed to remove comments targeting Harris that used the n-word, as well as gender-based slurs.

In a statement, Meta said it would review the report. “We provide tools so that anyone can control who can comment on their posts, automatically filter out offensive comments, phrases or emojis, and automatically hide comments from people who don’t follow them,” Meta’s head of women’s safety said in a statement. “We work with hundreds of safety partners around the world to continually improve our policies, tools, detection and enforcement, and we will review the CCDH report and take action on any content that violates our policies.”

This article originally appeared on Engadget at https://www.engadget.com/big-tech/instagram-is-failing-to-act-on-abuse-targeting-women-lawmakers-on-both-sides-of-the-aisle-103025621.html?src=rss

Elon Musk claims ‘massive DDOS attack’ delayed his live stream with Donald Trump

X’s live streaming infrastructure appears to have failed, once again, at a high-profile moment for the company. X owner Elon Musk was supposed to be interviewing Donald Trump live on Spaces, beginning at 8pm ET Monday. But the stream repeatedly crashed and was completely inaccessible to many users.

Musk claimed that the failure was due to a “massive DDOS [distributed denial of service] attack on X,” and that the company “tested the system with 8 million concurrent listeners earlier today.” Instead, only a “smaller number” of people would be able to listen to the conversation live. As of 8:30pm ET, the live stream had yet to begin. “Crashed,” “unable” and “Twitter blackout” trended on the platform.

Those who were able to join the stream were greeted with about a half hour of hold music followed by several minutes of total silence. The live stream finally started at 8:40pm ET. “All of our data lines, like basically hundreds of gigabits of data, were saturated,” Musk said. “We think we've overcome most of that.” Musk didn’t explain how a DDOS attack could target only one specific feature on the service without affecting other aspects of X’s app or website.

It’s not the first time a high-profile live stream on Spaces has run into technical difficulties. Last year, Ron DeSantis attempted to announce his short-lived presidential bid during a live conversation with Musk on X, but that stream was also delayed after repeated crashes. Musk, at the time, said that Twitter’s servers were “kind of melting.” Musk’s biographer later reported that the issues were a result of months of instability within Twitter's systems after Musk instructed his cousins to hastily dismantle one of the company’s data centers.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/elon-musk-claims-massive-ddos-attack-delayed-his-live-stream-with-donald-trump-004457451.html?src=rss

One of the ad industry groups being sued by X is ‘discontinuing’

An ad industry group named in X’s antitrust lawsuit is “discontinuing,” two days after the social media company filed a lawsuit accusing major advertisers of an “illegal boycott” against the company. The Global Alliance for Responsible Media (GARM) is “discontinuing activities,” according to an email reported by Business Insider.

GARM was created in 2019 to help set brand safety guidelines for major advertisers, and is part of the World Federation of Advertisers (WFA), which was also named in X’s lawsuit. According to Business Insider, WFA CEO Stephan Loerke told members that GARM is a nonprofit with limited resources, but that the groups planned on fighting the lawsuit.

X CEO Linda Yaccarino said the news was “an important acknowledgement and a necessary step in the right direction” in a statement on X. The company’s lawsuit, which was filed in Texas, claims that the WFA, GARM and a handful of major advertisers “conspired … to collectively withhold billions of dollars in advertising revenue from Twitter.” X has faced steep declines in its ad revenue over the last two years as advertisers have pulled back following multiple reports about hate speech and antisemitic content on the platform.

GARM was previously named in a House Judiciary Committee report that alleged the group had an “anti-conservative bias” and engaged in “anti-competitive” behavior. It has called those allegations “unfounded.” In a statement on its website earlier this week, the group pointed out that it was formed in the wake of a mass shooting that was streamed live on Facebook, with the goal of addressing the monetization of harmful content online. “Suggestions that GARM practices may impinge on free speech are a deliberate misrepresentation of GARM’s work,” it wrote. “GARM is not a watchdog or lobby. GARM does not participate in or advocate for boycotts of any kind.”

This article originally appeared on Engadget at https://www.engadget.com/big-tech/one-of-the-ad-industry-groups-being-sued-by-x-is-discontinuing-192721024.html?src=rss

Facebook will let creators remove account warnings if they complete ‘educational training’

Meta is making it a little easier for creators to avoid the dreaded “Facebook jail.” The company announced a new policy that will allow people with professional accounts to complete in-app “educational training” in order to avoid a strike on their account for first-time violations of the platform’s community standards.

In a blog post announcing the change, Meta notes that it can be frustrating for creators to navigate the company’s penalty system, which restricts Facebook accounts from certain features, including monetization tools, after multiple offenses. Under the new rules, creators who receive a warning for a first-time offense will have the option to remove the warning if they view an in-app explanation of the rule they broke.

Warnings for particularly serious offenses, “such as posting content that includes sexual exploitation, the sale of high-risk drugs, or glorification of dangerous organizations and individuals,” cannot be removed. Instead, the system is geared toward helping creators correct “unintentional mistakes,” according to the company. “We believe focusing on helping people understand why we have removed their content will be more effective at preventing re-offending, giving us not just a fairer approach, but a more effective one,” Meta explains.

It’s not the first time Meta has tried to reform its penalty system, which has been criticized by the Oversight Board and is a frequent source of frustration to users who may get strikes for mundane comments taken out of context. Last year, the company said it was trying to focus more on educating users about its rules rather than restricting their ability to post. Though the latest policy change will only affect creators with professional accounts to start, the company says it is planning to expand it “more broadly in the coming months.”

This article originally appeared on Engadget at https://www.engadget.com/social-media/facebook-will-let-creators-remove-account-warnings-if-they-complete-educational-training-181503330.html?src=rss

Reddit CEO teases AI search features and paid subreddits

Reddit just wrapped up its second earnings call as a public company, and CEO Steve Huffman hinted at some significant changes that could be coming to the platform. During the call, the Reddit co-founder said the company would begin testing AI-powered search results later this year.

“Later this year, we will begin testing new search result pages powered by AI to summarize and recommend content, helping users dive deeper into products, shows, games and discover new communities on Reddit,” Huffman said. He didn’t give a more specific timeline, but said the feature would use both first-party and third-party models.

Huffman noted that search on Reddit has “gone unchanged for a long time” but that it’s a significant opportunity to bring in new users. He also said that search could one day be a significant source of advertising revenue for the company.

Huffman hinted at other non-advertising sources of revenue as well. He suggested that the company might experiment with paywalled subreddits as it looks to monetize new features. “I think the existing, altruistic, free version of Reddit will continue to exist and grow and thrive just the way it has,” Huffman said. “But now we will unlock the door for new use cases, new types of subreddits that can be built that may have exclusive content or private areas, things of that nature.”

A Reddit spokesperson declined to elaborate on Huffman’s remarks. But it’s no secret the company has been eyeing new ways to expand its business since going public earlier this year. It’s struck multimillion-dollar licensing deals with Google and OpenAI, and has blocked search engines that aren’t paying the company.

“Some players in the ecosystem have not been transparent with their use of Reddit’s content, and in those instances, we block access to protect Reddit content and user privacy,” Huffman said. “We want to know where Reddit data is going and what it's being used for, and so those are the terms of engagement.”

This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-ceo-teases-ai-search-features-and-paid-subreddits-225636988.html?src=rss

X sues advertisers for ‘illegal boycott’ of the platform

X, whose top executives have long railed against advertisers who fled the platform amid concerns over hate speech, is now also suing them. X has filed an antitrust lawsuit against the Global Alliance for Responsible Media (GARM) and several of its members, including Mars, Unilever and CVS Health, CEO Linda Yaccarino said in an open letter shared on X.

According to Yaccarino, the group engaged in an “illegal boycott” of X. “The consequence - perhaps the intent - of this boycott was to seek to deprive X’s users, be they sports fans, gamers, journalists, activists, parents or political and corporate leaders, of the Global Town Square,” she wrote.

As Axios points out, GARM is part of the World Federation of Advertisers (which is also named in the lawsuit) and was created to come up with brand safety guidelines for online advertisers. The lawsuit alleges that the group “conspired, along with dozens of non-defendant co-conspirators, to collectively withhold billions of dollars in advertising revenue from Twitter.”

GARM didn't immediately respond to a request for comment.

It’s not the first time X has filed a lawsuit against a group that Musk has accused of stoking an advertiser exodus from the platform. The company previously sued the Center for Countering Digital Hate (CCDH), an anti-hate group that published research showing that X failed to take down hateful posts shared by premium subscribers. That lawsuit was later dismissed by a judge who said X was trying to “punish” the group for sharing unflattering research. X is also suing Media Matters, a watchdog group that published a report showing X had displayed ads alongside anti-Semitic content.

“We tried being nice for 2 years and got nothing but empty words,” Musk, who nearly a year ago publicly told advertisers to “go fuck themselves," wrote in a post on Tuesday. “Now, it is war.”

This article originally appeared on Engadget at https://www.engadget.com/big-tech/x-sues-advertisers-for-illegal-boycott-of-the-platform-173100888.html?src=rss

X is reportedly closing its San Francisco office

X will soon close its longtime San Francisco office and move employees to offices elsewhere in the Bay Area, according to an email from CEO Linda Yaccarino reported by The New York Times. Yaccarino’s note to employees comes several weeks after Elon Musk threatened to move X’s headquarters out of California and into Austin, Texas.

Yaccarino’s note, however, doesn’t seem to mention Texas. According to The New York Times, she told employees the closure will happen over the “next few weeks” and that employees will work out of “a shared engineering space in Palo Alto” that’s also used by xAI, as well as other “locations in San Jose.”

Twitter, and now X, has had a rocky relationship with its home base since Musk’s takeover of the company. Musk banned employees from working remotely shortly after taking over in 2022, and ordered many Twitter workers back to the office in the mid-Market neighborhood of San Francisco.

He later ran afoul of the city’s Department of Building Inspection for installing a giant flashing X on top of the building, and for reportedly converting office space into hotel rooms for employees to sleep in. The company’s landlord had also sued X over unpaid rent, The San Francisco Chronicle reported earlier this year. The lawsuit was later dismissed.

Despite Musk’s frequent complaints about San Francisco and its elected leaders, he had previously vowed to keep the company’s headquarters in the city. “Many have offered rich incentives for X (fka Twitter) to move its HQ out of San Francisco,” Musk tweeted last year.

“Moreover, the city is in a doom spiral with one company after another left or leaving. Therefore, they expect X will move too. We will not. You only know who your real friends are when the chips are down. San Francisco, beautiful San Francisco, though others forsake you, we will always be your friend.”

But, even before Musk’s recent posts about moving to Austin, there were other signs X may be getting ready to leave after all. The San Francisco Chronicle reported in July that X’s landlord was looking to sublease much of the company’s 800,000-square-foot headquarters.

X didn’t immediately respond to a request for comment.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-is-reportedly-closing-its-san-francisco-office-203650428.html?src=rss