Facebook researchers were warning about its recommendations fueling QAnon in 2019

Facebook officials have long known about how the platform’s recommendations can lead users into conspiracy theory-addled “rabbit holes.” Now, we know just how clear that picture was thanks to documents provided by Facebook whistleblower Frances Haugen.

During the summer of 2019, a Facebook researcher found that it took just five days for the company to begin recommending QAnon groups and other disturbing content to a fictional account, according to an internal report whose findings were reported by NBC News, The Wall Street Journal and others Friday. The document, titled “Carol's Journey to QAnon,” was also in a cache of records provided by Haugen to the Securities and Exchange Commission as part of her whistleblower complaint.

It reportedly describes how a Facebook researcher set up a brand new account for “Carol,” who was described as a “conservative mom.” After the account liked a few conservative but “mainstream” pages, Facebook’s algorithms began suggesting more fringe and conspiracy content. Within five days of joining Facebook, “Carol” was seeing “groups with overt QAnon affiliations,” conspiracy theories about “white genocide” and other material the researcher described as “extreme, conspiratorial, and graphic content.”

The fact that Facebook’s recommendations were fueling QAnon conspiracy theories and other concerning movements has been well known outside of the company for some time. Researchers and journalists have also documented the rise of the once fringe conspiracy theory during the coronavirus pandemic in 2020. But the documents show that Facebook’s researchers were raising the alarm about the conspiracy theory prior to the pandemic. The Wall Street Journal notes that researchers suggested measures like preventing or slowing down re-shared content, but Facebook officials largely opted not to take those steps.

Facebook didn’t immediately respond to questions about the document. “We worked since 2016 to invest in people, technologies, policies and processes to ensure that we were ready, and began our planning for the 2020 election itself two years in advance,” Facebook’s VP of Integrity Guy Rosen wrote in a lengthy statement Friday evening. In the statement, Rosen recapped the numerous measures he said Facebook took in the weeks and months leading up to the 2020 election — including banning QAnon and militia groups — but didn’t directly address the company’s recommendations prior to QAnon’s ban in October 2020.

The documents come at a precarious moment for Facebook. There have now been two whistleblowers who have turned over documents to the SEC saying the company has misled investors and prioritized growth and profits over users’ safety. Scrutiny is likely to further intensify as more than a dozen media organizations now have access to some of those documents.

Another former Facebook employee has filed a whistleblower complaint

Another former Facebook employee has filed a whistleblower complaint with the Securities and Exchange Commission. The latest complaint, which was first reported by The Washington Post, alleges Facebook misled its investors about “dangerous and criminal behavior on its platforms, including Instagram, WhatsApp and Messenger.”

In the complaint, the former employee described a conversation with one of Facebook’s top communication executives who, following disclosures about Russia’s use of the platform to meddle in the 2016 election, said the scandal would be a “flash in the pan” and that “we are printing money in the basement, and we are fine.”

Like Frances Haugen, the latest whistleblower is also a former member of Facebook’s integrity team, which was tasked with fighting misinformation, voting interference and other major problems facing the company. And, like Haugen, the former Facebook staffer said that the company has “routinely undermined efforts to fight misinformation, hate speech and other problematic content out of fear of angering then-President Trump and his political allies, or out of concern about potentially dampening the user growth.”

The SEC filing also describes illegal activity in secret Facebook Groups, and Facebook’s policy of allowing politicians and other high-profile users to skirt its rules. It names Mark Zuckerberg and Sheryl Sandberg as being aware of the problems and not reporting them to investors, according to The Post.

While many of the details sound similar to other complaints from former company insiders, news of another complaint adds to the pressure on Facebook, which has spent much of the last month trying to discredit Haugen and downplay the significance of its own research. Meanwhile, lawmakers have called on Zuckerberg to answer questions from Congress, and Haugen is expected to brief European officials as well.

A Facebook spokesperson didn’t immediately respond to a request for comment. Zuckerberg is expected to announce plans to rebrand the company with a new name next week.

Instagram is testing tools to make it easier for creators to find sponsors

Instagram is testing new tools to make it easier for creators to earn money through its service. The app is now testing affiliate shops, a feature it first previewed at its Creator Week event in June, and a dedicated “partnerships” inbox.

Affiliate shops are an extension of Facebook’s existing shopping features, which are already widely available. But the latest version of the storefronts allows creators to link to products that are already part of their affiliate arrangements. Creators will earn commission fees when their followers buy products from these shops (though the exact terms of these arrangements haven’t been detailed). The company says that for now the shopping feature will only be available to creators who are part of that affiliate program.

Instagram is also testing new inbox features it says will make it easier for brands to connect with creators for sponsorships. Instagram DMs will get a dedicated “partnerships” section just for messages from brands. The company says this will give those messages “priority placement” and will allow them to skip the “requests” section where incoming messages are often lost.

Instagram is testing a new inbox for messages from potential partners. (Image: Instagram)

Separately, the app is working on tools to match brands with creators looking for sponsorships. With the tools, creators can identify brands they are interested in working with directly from the app, while brands would be able to browse creators who fit their needs based on factors like age, gender and follower count.

The tools are still in an early stage, with only a handful of companies and creators participating for now. But the company has previously signaled such features could expand significantly. Mark Zuckerberg said earlier this year that Instagram is planning a “branded content marketplace” to help enable a bigger “creator middle class.”

Snap says Apple’s privacy changes hurt its ad business more than it expected

Snap is finally seeing the effects of Apple’s iOS 14 privacy changes on its ad business, and they’ve had a bigger impact than the company expected.

The company reported revenue of just over $1 billion for the third-quarter of 2021. But despite that being a new milestone for Snap, it was $3 million shy of what the company had previously estimated. Snap executives said Apple’s iOS changes that make it more difficult for advertisers to track users were largely to blame for the shortfall.

“Our advertising business was disrupted by changes to iOS ad tracking that were broadly rolled out by Apple in June and July,” CEO Evan Spiegel said during a call with analysts. “While we anticipated some degree of business disruption, the new Apple provided measurement solution did not scale as we had expected, making it more difficult for our advertising partners to measure and manage their ad campaigns for iOS.”

It wasn’t all bad news for Snap, though. The company once again beat expectations on user growth, adding 13 million new daily active users for the second quarter in a row. Snap now has 306 million DAUs, a new high for the company.

Still, Spiegel called it a “frustrating setback” for the company, but added that increased privacy protections are “really important for the long term health of the ecosystem and something we fully support.”

The iOS 14.5 update forced developers to ask users to explicitly agree to sharing their device identifier (known as IDFA), which is used by advertisers to track users across apps and services. Though Apple previewed the changes more than a year ago, the update wasn’t released until April. Since then, third-party analytics have estimated that a vanishingly small percentage of iOS users agreed to allow apps to track them.

Snap isn’t the only company that has warned about the effect of Apple’s iOS changes on its ad business. Facebook has been publicly slamming the changes for more than a year, saying they will have an outsized impact on developers and small businesses. The company has also warned investors that the changes are likely to hurt its own ad revenue in 2021. The social network reports its third-quarter earnings Monday, when it will share just how significantly it's been affected.

Twitter says its algorithms amplify the ‘political right’ but it doesn’t know why

Twitter said in April that it was undertaking a new effort to study algorithmic fairness on its platform and whether its algorithms contribute to “unintentional harms.” As part of that work, the company promised to study the political leanings of its content recommendations. Now, the company has published its initial findings. According to Twitter’s research team, the company’s timeline algorithm amplifies content from the “political right” in six of the seven countries it studied.

The research looked at two issues: whether the algorithmic timeline amplified political content from elected officials, and whether some political groups received a greater amount of amplification. The researchers used tweets from news outlets and elected officials in seven countries (Canada, France, Germany, Japan, Spain, the United Kingdom, and the United States) to conduct the analysis, which they said was the first of its kind for Twitter.

“Tweets about political content from elected officials, regardless of party or whether the party is in power, do see algorithmic amplification when compared to political content on the reverse chronological timeline,” Twitter’s Rumman Chowdhury wrote about the research. “In 6 out of 7 countries, Tweets posted by political right elected officials are algorithmically amplified more than the political left. Right-leaning news outlets (defined by 3rd parties), see greater amplification compared to left-leaning.”

Crucially, as Chowdhury points out to Protocol, it’s not yet clear why this is happening. In the paper, the researchers posit that the difference in amplification could be a result of political parties pursuing “different strategies on Twitter.” But the team said that more research would be needed to fully understand the cause.

While the findings are likely to raise some eyebrows, Chowdhury also notes that “algorithmic amplification is not problematic by default.” The researchers further point out that their findings do not “support the hypothesis that algorithmic personalization amplifies extreme ideologies more than mainstream political voices.”

But at the very least, the research would seem to further debunk the notion that Twitter is biased against conservatives. The research also offers an intriguing look at how a tech platform can study the unintentional effects of its algorithms. Facebook, which has come under pressure to make more of its own research public, has defended its algorithms even as a whistleblower has suggested the company should move back to a chronological timeline.

The study is part of a broader effort by Twitter to uncover bias and other issues in its algorithms. The company has also published research about its image cropping algorithm and started a bug bounty program to find bias in its platform.

Sen. Blumenthal says Zuckerberg needs to testify about Instagram and kids

Senator Richard Blumenthal is again calling on Mark Zuckerberg to testify about Facebook’s research into Instagram and child safety. “It is urgent and necessary for you or Mr. Adam Mosseri to testify to set the record straight and provide members of Congress and parents with a plan on how you are going to protect our kids,” the Connecticut lawmaker wrote in a letter addressed to Zuckerberg.

Blumenthal is the chair of the Senate subcommittee on Consumer Protection, Product Safety, and Data Security that’s been holding hearings on social media and child safety in recent weeks. Earlier this month, Blumenthal said that a series of whistleblower disclosures about Facebook was the company’s “big tobacco moment.”

Since then, pressure has mounted on Facebook to address internal research that shows Instagram can have a negative impact on some teens’ mental health. The company has already “paused” work on a forthcoming Instagram Kids app, but lawmakers have said the company should end the project altogether.

In his letter, Blumenthal said that Facebook’s head of safety, Antigone Davis, who testified at a previous hearing, “appears to have provided false or inaccurate testimony to me regarding attempts to internally conceal its research.” He also said that Facebook “has continued to demean impactful and independent investigative reporting” and “downplayed its own research.”

Facebook didn’t immediately respond to a request for comment.
