Internal Facebook documents highlight its moderation and misinformation issues

The Facebook Papers, a vast trove of documents supplied by whistleblower Frances Haugen to a consortium of news organizations, has been released. The reporting, by Reuters, Bloomberg, The Washington Post and others, paints a picture of a company that repeatedly prioritized dominance and profit over user safety, even as a large number of employees warned that its focus on engagement put users at risk of real-world violence.

The Washington Post, for instance, claims that while Facebook CEO Mark Zuckerberg played down reports that the site amplified hate speech in testimony to Congress, he was aware that the problem was far broader than publicly declared. Internal documents seen by the Post claim that the social network had removed less than five percent of hate speech, and that executives — including Zuckerberg — were well aware that Facebook was polarizing people. The claims have already been rebutted by Facebook, which says that the documents have been misrepresented.

Zuckerberg is also accused of quashing a plan to run a Spanish-language voter-registration drive in the US before the 2020 elections. He reportedly said that the plan may have appeared “partisan,” with WhatsApp staffers subsequently offering a watered-down version that partnered with outside agencies. The CEO was also reportedly behind the decision not to clamp down on COVID-19 misinformation in the early stages of the pandemic, because doing so could involve a “material tradeoff with MSI [Meaningful Social Interaction, an internal Facebook metric] impact.” Facebook has disputed the claim, saying that the documents have been mischaracterized.

Reuters reported that Facebook has repeatedly neglected a number of developing nations, allowing hate speech and extremism to flourish. That includes not hiring enough staffers who speak the local languages, appreciate the cultural context and can otherwise moderate effectively. The result is that the company has placed unjustified faith in automatic moderation systems that are ineffective in non-English-speaking countries. Again, Facebook has disputed the accusation that it is neglecting its users in those territories.

One region singled out for concern is Myanmar, where Facebook has been held responsible for amplifying local tensions. A 2020 document suggests that the company’s automatic moderation system could not flag problematic terms in Burmese, the local language. (It should be noted that, two years earlier, Facebook’s failure to act to prevent civil unrest in Myanmar was highlighted in a report from Business for Social Responsibility.)

Similarly, Facebook reportedly did not have the tools in place to detect hate speech in the Ethiopian languages of Oromo and Amharic. Facebook has said that it is working to expand its content moderation team and, over the last two years, has recruited speakers of Oromo, Amharic and Burmese, among other languages.

The New York Times reports that Facebook’s internal researchers were well aware that the Like and Share functions, core elements of how the platform works, had accelerated the spread of hate speech. A document titled What Is Collateral Damage says that Facebook’s failure to remedy these issues would see the company “actively (if not necessarily consciously) promoting these types of activities.” Facebook says that, again, these statements are based on incorrect premises, and that it would be illogical for the company to try to actively harm its users.

Bloomberg, meanwhile, has focused on the supposed collapse in Facebook’s engagement metrics. Young people, a key target market for advertisers, are spending less time on the platform, and fewer teens are opting to sign up. At the same time, user numbers in these age groups may be artificially inflated, with users creating multiple accounts — “Finstas” — to present different personas to different audiences. Haugen alleges that Facebook “has misrepresented core metrics to investors and advertisers,” and that duplicate accounts are leading to “extensive fraud” against advertisers. Facebook says that it already notifies advertisers in its Help Center of the risk that purchases will reach duplicate accounts, and that it lists the issue in its SEC filings.

Wired focuses on the valedictions Facebook employees regularly post when they leave the company, and how these missives have become increasingly gloomy. One departing employee wrote that the platform has a “net negative influence on politics.” Another said that they felt they had “blood on their hands,” while a third said that internal roadblocks hampered their ability to change Facebook’s systems for the better.

Shortly after these reports were published, Frances Haugen sat down with the UK select committee examining the forthcoming Online Safety Bill. Much of what she said she had already expressed to regulators in the US, but her comments were highly critical of Facebook. At one point, Haugen said that Facebook has been unwilling to sacrifice even "little slivers of profit" in order to make its product safer. She added that Facebook CEO Mark Zuckerberg "has unilateral control over three billion people, [and that] there's no will at the top [of the company] to make sure these systems are run in an adequately safe way."

Over the weekend, Axios reported that Facebook’s Sir Nick Clegg warned that the company should expect “more bad headlines” in the coming weeks. Whatever happens at the company's third-quarter announcement later today, it's unlikely to dispel the tsunami of bad press Facebook is currently weathering.

Updated 10:41am ET to include comments from Frances Haugen made to the select committee.

Facebook leak shows inner turmoil over approach to conservative content

Facebook has long been accused of playing favorites on multiple sides of the political spectrum, and it's now clear just how much of that uproar extends to the company's ranks. A leak to The Wall Street Journal reportedly shows Facebook leaders and staff have clashed numerous times over the social network's approach to conservative content, particularly outlets like Breitbart. Rank-and-file employees have accused Facebook of making "special exceptions" from policies for right-wing outlets, while senior-level staff warned of potential pitfalls.

Workers argued that Facebook kept Breitbart in a second tier of the News Tab, a section meant to focus on reliable news, despite very low trust and quality scores as well as misinformation violations. Facebook was not only making exceptions, one employee said, but "explicitly" endorsing outlets like this by including them as trusted partners. Staff claimed Facebook was "scared of political backlash" if it enforced policies equally, and believed the site let conservative influencers Diamond and Silk lobby fact checkers to avoid punishment for spreading misinformation.

Higher-ups countered with justifications for those decisions. They argued that booting a news outlet for trust scores would risk booting more mainstream outlets like CNN, for instance. When staff asked Facebook to intervene over Breitbart's alleged attempts to dodge sites' advertising blocks, a director said Facebook had to resist the urge and "rely on our principles and policies."

Facebook repeated its familiar stance in a response to The Journal, maintaining that it limited access to low-quality material to "improve people's experiences," not due to political leanings. A spokesperson added that Facebook studied the effects of potential changes before implementing them, and that publishers like Breitbart still met requirements such as honoring rules against misinformation and hate speech.

The revelations likely won't satisfy people on either side of the American political spectrum. Liberals may be concerned Facebook is knowingly allowing the spread of heavily spun and outright false claims, while the right wing may see it as evidence of a claimed anti-conservative bias. The documents reveal a more conflicted internal approach than either narrative suggests, though. They also underscore the importance of tools meant to automatically limit the reach of misinformation — these could minimize internal debates by curbing fake news without requiring as much human input.

Facebook’s misinformation and violence problems are worse in India

Facebook whistleblower Frances Haugen's leaks suggest its problems with extremism are particularly dire in some areas. Documents Haugen provided to the New York Times, Wall Street Journal and other outlets suggest Facebook is aware it fostered severe misinformation and violence in India. The social network apparently didn't have nearly enough resources to deal with the spread of harmful material in the populous country, and didn't respond with enough action when tensions flared.

A case study from early 2021 indicated that much of the harmful content from groups like Rashtriya Swayamsevak Sangh and Bajrang Dal wasn't flagged on Facebook or WhatsApp due to the lack of technical know-how needed to spot content written in Bengali and Hindi. At the same time, Facebook reportedly declined to mark the RSS for removal due to "political sensitivities," and Bajrang Dal (linked to Prime Minister Modi's party) hadn't been touched despite an internal Facebook call to take down its material. The company also maintained a whitelist of politicians exempt from fact-checking.

Facebook was struggling to fight hate speech as recently as five months ago, according to the leaked data. And like an earlier test in the US, the research showed just how quickly Facebook's recommendation engine suggested toxic content. A dummy account following Facebook's recommendations for three weeks was subjected to a "near constant barrage" of divisive nationalism, misinformation and violence.

As with earlier scoops, Facebook said the leaks didn't tell the whole story. Spokesman Andy Stone argued the data was incomplete and didn't account for third-party fact checkers used heavily outside the US. He added that Facebook had invested heavily in hate speech detection technology in languages like Bengali and Hindi, and that the company was continuing to improve that tech.

The social media firm followed this by posting a lengthier defense of its practices. It argued that it had an "industry-leading process" for reviewing and prioritizing countries with a high risk of violence every six months. It noted that teams considered long-term issues and history alongside current events and dependence on its apps. The company added it was engaging with local communities, improving technology and continuously "refining" policies.

The response didn't directly address some of the concerns, however. India is Facebook's largest individual market, with 340 million people using its services, but 87 percent of Facebook's misinformation budget is focused on the US. Even with third-party fact checkers at work, that suggests India isn't getting a proportionate amount of attention. Facebook also didn't follow up on worries it was tiptoeing around certain people and groups beyond a previous statement that it enforced its policies without consideration for position or association. In other words, it's not clear Facebook's problems with misinformation and violence will improve in the near future.

Facebook sues programmer who allegedly scraped data for 178 million users

Facebook is taking legal action in response to another large-scale data heist. According to The Record, the social network has sued Ukrainian national Alexander Solonchenko for allegedly scraping data belonging to more than 178 million users. Solonchenko reportedly exploited Messenger's contact import feature using an automated tool that mimicked Android devices. He fed Facebook millions of phone numbers and harvested the account information the site returned for any matching numbers.

The attacker supposedly conducted the campaign between January 2018 and September 2019 (when Facebook shut down the importer), and started selling the data on a black market forum in December 2020. Facebook tracked Solonchenko down after he used the same username and contact details on email services and job boards as he did on the forum. He has also scraped data from other targets, Facebook said, including a major Ukrainian bank.

In its complaint, Facebook asked for unspecified damages as well as orders barring Solonchenko from accessing Facebook or selling the scraped data.

This isn't the largest such incident. Hackers scraped data for 533 million users through the same feature. However, this illustrates Facebook's determination to crack down on data scraping — it's willing to pursue attackers in civil court in hopes of discouraging similar data raiding campaigns.

Facebook researchers were warning about its recommendations fueling QAnon in 2019

Facebook officials have long known about how the platform’s recommendations can lead users into conspiracy theory-addled “rabbit holes.” Now, we know just how clear that picture was thanks to documents provided by Facebook whistleblower Frances Haugen.

During the summer of 2019, a Facebook researcher found that it took just five days for the company to begin recommending QAnon groups and other disturbing content to a fictional account, according to an internal report whose findings were reported by NBC News, The Wall Street Journal and others Friday. The document, titled “Carol's Journey to QAnon,” was also in a cache of records provided by Haugen to the Securities and Exchange Commission as part of her whistleblower complaint.

It reportedly describes how a Facebook researcher set up a brand new account for “Carol,” described as a “conservative mom.” After the account liked a few conservative but “mainstream” pages, Facebook’s algorithms began suggesting more fringe and conspiratorial content. Within five days of joining Facebook, “Carol” was seeing “groups with overt QAnon affiliations,” conspiracy theories about “white genocide” and other material the researcher described as “extreme, conspiratorial, and graphic content.”

The fact that Facebook’s recommendations were fueling QAnon conspiracy theories and other concerning movements has been well known outside of the company for some time. Researchers and journalists have also documented the rise of the once fringe conspiracy theory during the coronavirus pandemic in 2020. But the documents show that Facebook’s researchers were raising the alarm about the conspiracy theory prior to the pandemic. The Wall Street Journal notes that researchers suggested measures like preventing or slowing down re-shared content, but Facebook officials largely opted not to take those steps.

Facebook didn’t immediately respond to questions about the document. “We worked since 2016 to invest in people, technologies, policies and processes to ensure that we were ready, and began our planning for the 2020 election itself two years in advance,” Facebook VP of Integrity Guy Rosen wrote in a lengthy statement Friday evening. In the statement, Rosen recapped the numerous measures he said Facebook took in the weeks and months leading up to the 2020 election — including banning QAnon and militia groups — but didn’t directly address the company’s recommendations prior to QAnon’s ban in October 2020.

The documents come at a precarious moment for Facebook. There have now been two whistleblowers who have turned over documents to the SEC saying the company has misled investors and prioritized growth and profits over users’ safety. Scrutiny is likely to further intensify as more than a dozen media organizations now have access to some of those documents.

Another former Facebook employee has filed a whistleblower complaint

Another former Facebook employee has filed a whistleblower complaint with the Securities and Exchange Commission. The latest complaint, which was first reported by The Washington Post, alleges Facebook misled its investors about “dangerous and criminal behavior on its platforms, including Instagram, WhatsApp and Messenger.”

In the complaint, the former employee described a conversation with one of Facebook’s top communication executives who, following disclosures about Russia’s use of the platform to meddle in the 2016 election, said the scandal would be a “flash in the pan” and that “we are printing money in the basement, and we are fine.”

Like Frances Haugen, the latest whistleblower is also a former member of Facebook’s integrity team, which was tasked with fighting misinformation, voting interference and other major problems facing the company. And, like Haugen, the former Facebook staffer said that the company has “routinely undermined efforts to fight misinformation, hate speech and other problematic content out of fear of angering then-President Trump and his political allies, or out of concern about potentially dampening the user growth.”

The SEC filing also describes illegal activity in secret Facebook Groups, and Facebook’s policy of allowing politicians and other high-profile users to skirt its rules. It names Mark Zuckerberg and Sheryl Sandberg as being aware of the problems and not reporting them to investors, according to The Post.

While many of the details sound similar to other complaints from former company insiders, news of another complaint adds to the pressure on Facebook, which has spent much of the last month trying to discredit Haugen and downplay the significance of its own research. Meanwhile, lawmakers have called on Zuckerberg to answer questions from Congress, and Haugen is expected to brief European officials as well.

A Facebook spokesperson didn’t immediately respond to a request for comment. Zuckerberg is expected to announce plans to rebrand the company with a new name next week.
