Facebook is still struggling to remove videos of the Buffalo mass shooting

Facebook is still struggling to contain the video of last weekend’s horrific mass shooting in Buffalo, New York. Not only are clips of the shooting still accessible on the platform, but reposted clips of the attack are sometimes appearing alongside Facebook ads, The New York Times reports.

The Times notes that it’s not clear how often ads are appearing alongside clips of the shooting, but the paper said that “searches for terms associated with footage of the shooting have been accompanied by ads for a horror film, clothing companies and video streaming services,” both in its own tests and in tests conducted by the Tech Transparency Project.

While this isn’t a new problem for Facebook — the platform made similar missteps in the wake of the 2019 shooting in Christchurch, New Zealand — the company is, in some cases, actually recommending search terms associated with videos of the shooting, according to The New York Times, which said Facebook flagged some of those searches as “popular now.”

As with previous mass shootings and violent events, footage originally streamed to Twitch by the gunman in Buffalo has proved difficult for social media platforms to contain. Facebook previously told Engadget that it had designated the event a terrorist attack, and that it was working to automatically detect new copies that are shared to its service.

But videos are still falling through the cracks. And the fact that Facebook is surfacing ads near those videos is likely to raise further questions about whether the company prioritizes profits over safety as a whistleblower has alleged.

In a statement, a company spokesperson told The Times it was trying “to protect people using our services from seeing this horrific content even as bad actors are dead-set on calling attention to it.”

Elon Musk was accused of sexual misconduct by a SpaceX flight attendant: report

SpaceX paid $250,000 to settle a sexual misconduct claim against Elon Musk after a flight attendant on the company’s corporate jet accused the CEO of exposing himself to her mid-flight, Insider reports.

According to Insider, the incident happened in 2016 and the company settled with the unnamed flight attendant in 2018. The account comes from a friend of the flight attendant who was told about the incident at the time and is not bound by any non-disclosure agreements with the company. Musk allegedly asked for a “full body massage” and offered to buy the flight attendant a horse if she would “do more.” Insider also notes that the flight attendant was encouraged to pay out of her own pocket for professional massage training so that she could better serve Musk during flights.

It was during a massage on a flight to London that Musk allegedly exposed himself and "propositioned" her. "He touched her thigh and told her he would buy her a horse," the friend said in describing the incident. "And he basically tried to bribe her to perform some sort of sexual favor."

The flight attendant reportedly refused Musk's advances, and later felt like she was being "punished" with fewer shifts at SpaceX. She settled with Musk in 2018 "after a session with a mediator that Musk personally attended," Insider says.

According to the report, Musk responded to questions by saying there was “a lot more to this story,” but didn’t elaborate. "If I were inclined to engage in sexual harassment, this is unlikely to be the first time in my entire 30-year career that it comes to light," he told the publication.

We've reached out to SpaceX for comment.

Update 5/20/22 1:06am ET: Musk has since taken to Twitter to respond to the allegations, calling them "utterly untrue" and referring to the friend of the unnamed SpaceX flight attendant as a "liar." This would not be the first time Musk has made potentially defamatory statements against his perceived enemies on the site. Notably, Musk's tweets do not deny that SpaceX paid the former employee $250,000 to settle the alleged misconduct claim, or that she is currently under what Insider described as "restrictive non-disclosure and non-disparagement clauses that bar the attendant from ever discussing the severance payment or disclosing any information of any kind about Musk and his businesses."

In a separate tweet, Musk states that the "attacks against me should be viewed through a political lens" — though it's unclear if this is meant to refer to Insider, the whistleblower, or both. 

It’s still really easy to game Facebook’s algorithm

Meta’s accounting of the most popular content on Facebook continues to be a confusing mess to untangle. The company released the latest version of its “widely viewed content report,” which details some of the most-viewed Facebook posts in the United States.

And, once again, the latest report raises questions about the company’s ability to limit the spread of what Meta euphemistically refers to as “lower-quality posts.” Between January and March of this year, six of the top 20 most popular links on Facebook were from a spammy website that has since been banned by the company for inauthentic behavior.

“In this report, there were pieces of content that have since been removed from Facebook for violating our policies of Inauthentic Behavior,” the company wrote in a blog post. “The removed links were all from the same domain, and links to that domain are no longer allowed on Facebook.”

The links all came from a Vietnam-based “news” site called Naye News. Unfortunately, Facebook didn’t share details about the actual URLs that went viral and were later removed, so there’s not much we can glean about the actual content. What we do know is that Naye News, which as Bloomberg reporter Davey Alba points out has never before appeared in a widely viewed content report, was able to reach a vast number of Facebook users before the company banned it. Links to Naye News appeared six times on the list of the top 20 URLs, including the two top spots. Together, these links got more than 112 million views, according to the report.

This website wasn’t the only source of questionable content that made it into the top most-viewed list. The fourth-most popular link on the list was a YouTube clip from a town hall meeting with Wisconsin Senator Ron Johnson, featuring a nurse making provably false claims about COVID-19 treatments.

During a call with reporters, Facebook’s head of integrity, Anna Stepanov, said that links to the YouTube video were demoted in News Feed after it was debunked by fact checkers. The company also added warning labels to discourage it from being reshared. “Without these features, this link would likely have received even more reach,” Stepanov said.

But even with those measures, the link was still viewed more than 22.1 million times on Facebook. That’s more than the number of views on the original YouTube video, which currently has about 6.5 million views.

Meanwhile, another URL in the report, which got 12.3 million views, links to a website called “heaveemotions.com,” which now redirects to a site that appears designed to trick visitors into installing malware. On Facebook, though, the link originally rendered a preview with meme-style text that reads: “They told me the virus is everywhere. I told them so is God. Can I get an Amen? I Bet you won’t repost.”

Image caption: This looks like a typical meme, but it now links to a website that appears to be filled with malware. (Screenshot: Facebook)

It’s not the first time overtly spammy content has appeared in one of these reports. In the last version of this report, the top Facebook Page was one later removed by the company for breaking its rules. Reporter Ryan Broderick later identified the page’s origins as a Sri Lankan content farm.

The reports, which Meta began releasing in part to rebut data suggesting far-right personalities consistently dominate the platform, are one of the only windows the company offers into what’s popular on Facebook. That’s been a key question for researchers trying to study the platform and how information, and misinformation, spreads across it. But researchers have also raised questions about how Meta was compiling these reports, which in the past have surfaced bizarre results.

Notably, Meta now says it’s changing the way it evaluates what content is the most “widely viewed” on its platform. Previous reports identifying the top links on Facebook were based on any public post that contained a URL, even if the URL was just appended to the body of a text post. This meant that popular Pages could effectively spam their followers with random links — such as one to a website representing former Green Bay Packers football players — embedded in a text or photo post.

Researchers had widely criticized this approach, since a widely distributed text post with a link tacked on at the end is very different from a link post in which the linked content is fully rendered as a preview. Now, Meta is reversing course. “Moving forward, links will need to render a preview in order to be counted as a view, as that more accurately represents what people are seeing,” the company said.

Even so, these reports are still only a limited look at what’s most popular on Facebook. The company says the list of the top 20 most-viewed links — the list that included Naye News and COVID-19 misinformation — “collectively accounted for 0.03% of all Feed content views in the US during Q1 2022.” But as always with Facebook, its sheer size means that even a fraction of a percent can equate to millions of views. At the very least, these reports show that it’s still relatively easy to game Facebook’s algorithms and spread “low quality” content.

Meta has nearly doubled the amount of violent content removed from Facebook

Meta has nearly doubled the amount of violent content it removes from Facebook. During the first quarter of 2022, the company took down 21.7 million pieces of content for breaking its rules around violence and incitement of violence, an increase from 12.4 million in the previous quarter.

Takedowns were also up for the quarter on Instagram, but only slightly. The company removed 2.7 million posts for breaking its rules around violence, up from 2.6 million during the last quarter of 2021.

The company shared the new metrics as part of its quarterly community standards enforcement report. In the report, Meta attributed the increase in takedowns to an “expansion of our proactive detection technology.” More than 98 percent of the posts it took down were removed before users reported them, according to the company.

The report comes at a moment when Meta is facing scrutiny for its response time following the recent mass shooting in Buffalo, New York. Live recordings of the shooting circulated on Facebook and other platforms, and companies have been slow to take down all the new copies. One copy posted to Facebook was shared more than 46,000 times before it was removed more than nine hours after it was originally posted, according to The Washington Post.

As with prior mass shootings like Christchurch, the ability for people to quickly download and make new copies of live recordings has tested Meta’s ability to enforce its policies.

“One of the challenges we see through events like this is people create new content, new versions, new external links to try to evade our policies [and] evade our enforcement,” Guy Rosen, Meta’s VP of Integrity, said during a call with reporters. “As in any incident, we're going to continue to learn to refine our processes, refine our systems to ensure that we can detect [and] we can take down violating content more quickly in the future.”

Meta also shared updated stats around content it mistakenly takes down. For violent content, the company said it eventually restored 756,000 Facebook posts that were appealed after they were initially removed. The company said it's also "working on developing robust measurements around mistakes," but didn't explain what it would measure beyond restores of appealed content.

Updated to clarify that Meta shared updated numbers on takedowns that were appealed and subsequently reinstated.

Twitter CEO says he expects Musk deal to close but is ‘prepared for all scenarios’

Hours after Elon Musk said his Twitter buyout is temporarily on hold, Twitter’s CEO has said he still expects the deal to close, but “we need to be prepared for all scenarios.” In a series of tweets, Parag Agrawal didn’t directly address Musk’s earlier comments but he weighed in on yesterday’s leadership shakeup, which resulted in the firing of two senior Twitter executives.

The move had raised eyebrows not just because the two were popular longtime leaders at the company, but because many don’t expect Agrawal to keep the CEO job after the acquisition is finalized. (Musk has said he has no confidence in Twitter's current leadership and reports suggest Musk intends to take over the CEO role at least temporarily.)

“Changes impacting people are always hard,” Agrawal said. “And some have been asking why a ‘lame-duck’ CEO would make these changes if we’re getting acquired anyway. The short answer is very simple: While I expect the deal to close, we need to be prepared for all scenarios and always do what’s right for Twitter.”

Notably, Agrawal’s comments would seem to acknowledge the possibility that Musk’s buyout may not actually go through. The Tesla CEO, who has said ridding Twitter of bots is one of his top goals, stated earlier in the day that the deal was “temporarily on hold pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users.” He later added that he was “still committed to the acquisition.”

Meanwhile, Twitter is also trying to navigate widespread uncertainty among employees, many of whom are uneasy about Musk’s plans for the company. In addition to cutting its top revenue and product executives Thursday, the company is also pausing all new hiring and rescinding some job offers, in an effort to cut costs.

Agrawal said Friday that he would continue “making hard decisions as needed.” “I won’t use the deal as an excuse to avoid making important decisions for the health of the company, nor will any leader at Twitter,” he tweeted.

Twitter’s CEO fires top product exec as company cuts costs

There's a new shakeup happening at the top of Twitter. CEO Parag Agrawal has fired the company’s general manager of consumer products Kayvon Beykpour in order to "take the team in a different direction." Bruce Falck, the company’s general manager for revenue, is also leaving, the company confirmed. Beykpour, who had been with the company for seven years, was on paternity leave at the time. 

The shakeup comes alongside a companywide pause on hiring as Twitter tries to cut costs. A spokesperson said the company is “pausing most hiring” and “pulling back on non-labor costs.” It will likely fuel more uncertainty at Twitter, which has been reeling since the company accepted Elon Musk’s offer to buy the company. Agrawal has reportedly told employees the company's current execs don’t know what direction Musk will take the platform. Musk has said he has no confidence in Twitter’s current management, and that he has a new CEO in mind for when the deal closes.

Despite all that, Agrawal is making big changes of his own — most notably by firing Beykpour, a longtime product executive who is well liked inside and outside of Twitter. “The truth is that this isn’t how and when I imagined leaving Twitter, and this wasn’t my decision,” he wrote in a thread about his departure. “Parag asked me to leave after letting me know that he wants to take the team in a different direction.”

In a memo, Agrawal cited the company’s failure to hit goals for revenue and user growth, The New York Times reported. Musk has made clear he has even more aggressive goals for the platform. He recently stated that he intends to grow Twitter’s user base to nearly a billion users by 2028.

Twitter isn't the only major platform looking to cut costs. Meta has also said it intends to pull back on its hiring plans, and has ended some projects in its Reality Labs division. 

Meta is reportedly axing some Reality Labs projects

Facebook’s pivot to the metaverse continues to be messy. Meta’s Reality Labs division, home to its hardware efforts and other metaverse initiatives, will be cutting some of its projects, according to Reuters. It’s not clear which projects will be affected, but Meta CTO Andrew Bosworth reportedly told employees the company is no longer able to afford some of the work it had once planned, and some other projects will be “postponed.”

The news is the latest blow to Meta’s ambitious plan to re-orient the company around virtual reality and the metaverse rather than its social network. The company lost $10 billion on Reality Labs in 2021, and plans to hire fewer employees in 2022 than in previous years.

At the same time, the company is apparently still plugging away at Project Cambria, the “high-end” VR headset expected this fall. Meta CEO Mark Zuckerberg teased new details on “color passthrough technology” for the device that would “enable developers to build a whole new level of mixed reality experiences.” The company also just opened its first physical retail store outside of the headquarters for Reality Labs.

Meta withdraws Oversight Board request for help with Ukraine policies

Meta has withdrawn a request it made to its Oversight Board seeking guidance on shaping its content moderation policies amid Russia’s invasion of Ukraine. The company had originally asked the Oversight Board for a policy advisory opinion (PAO) in March, following controversy over its decision to “temporarily” relax some of its rules surrounding calls for violence.

Now, nearly two months later, Meta has taken the unusual step of taking back the request. The company cited unspecified security concerns as a reason, but didn’t elaborate. “This decision was not made lightly — the PAO was withdrawn due to ongoing safety and security concerns,” Meta wrote in a statement. It also linked to its blog post on its “ongoing” response to the war in Ukraine, which has not been updated since March 17th.

“While the Board understands these concerns, we believe the request raises important issues and are disappointed by the company’s decision to withdraw it,” the Oversight Board said in a statement. “The Board also notes the withdrawal of this request does not diminish Meta’s responsibility to carefully consider the ongoing content moderation issues which have arisen from this war, which the Board continues to follow.”

While it’s unclear exactly why Meta chose to withdraw its request, the move is nonetheless bizarre. Neither Meta nor the Oversight Board has ever publicly commented on what exactly the company’s original PAO request entailed. Meta’s policy chief Nick Clegg told employees the company would ask the board to review its updated guidance for content moderators, The Washington Post reported in March.

Regardless of the request’s contents, Meta is not obligated to implement any policy changes recommended by the Oversight Board, though a past policy advisory opinion did influence the company’s rules around doxxing.

Meta’s actions are also likely to raise renewed questions about just how much influence the board is capable of wielding. Throughout its brief existence, a central criticism of the organization has been that it is set up to take heat for the company’s more unpopular decisions while Meta remains free to ignore its recommendations.