The FDA may have unintentionally made ‘NyQuil Chicken’ go viral on TikTok

If you’ve been anywhere near social media, local news, or late-night talk shows in the last few days, you’ve probably heard something about “NyQuil Chicken,” a supposedly viral TikTok “challenge” that’s exactly what it sounds like: cooking chicken in a marinade of cold medicine.

News about the supposed trend is usually accompanied by vomit-inducing photos of raw chicken simmering in dark green syrup. It’s both disgusting and, as the FDA recently reminded the public, just as toxic as it looks. But it turns out NyQuil Chicken was neither new nor particularly viral, and the FDA’s bizarrely timed warning may have backfired, making the meme more popular than ever.

First, a bit of history: As reporter Ryan Broderick points out in his newsletter Garbage Day, NyQuil Chicken originated as a joke on 4chan in 2017. The meme briefly resurfaced in January, when it got some traction on TikTok before once again fading away.

Then, last week, the FDA — inexplicably — issued a press release warning about the dangers of cooking chicken in NyQuil. In a notice titled “A Recipe for Danger: Social Media Challenges Involving Medicines,” the FDA refers to it as a “recent” trend. But the notice cites no recent examples, and it’s unclear why the agency opted to push out a warning more than eight months after the meme first appeared on TikTok.

TikTok is blocking searches for the term. (Screenshot: TikTok)

Now, in what we can only hope will be a valuable lesson on unintended consequences, we know that it was likely the FDA’s warning about NyQuil chicken that pushed this “challenge” to new levels of virality, at least on TikTok. TikTok has now confirmed that on September 14th, the day before the FDA notice, there were only five searches for “NyQuil chicken” in the app. But by September 21st, that number skyrocketed “by more than 1,400 times,” according to BuzzFeed News, which first reported the TikTok search data.

TikTok, which has recently taken steps to limit the spread of both dangerous “challenges” and “alarmist warnings” about hoaxes, is now blocking searches for “NyQuil Chicken.” Searches now direct users to resources encouraging them to “stop and take a moment to think” before pursuing a potentially dangerous “challenge.”

As both BuzzFeed and Gizmodo note, there’s little evidence that people are actually cooking chicken in NyQuil, much less eating it. That’s a good thing because, as the FDA makes very clear, doing so is not only extremely gross but also highly toxic. But the whole thing is yet another example of why we should all be more skeptical of panic-inducing viral “challenges.”

Facebook violated Palestinians’ right to free expression, says report commissioned by Meta

Meta has finally released the findings of an outside report that examined how its content moderation policies affected Israelis and Palestinians amid an escalation of violence in the Gaza Strip last May. The report, from Business for Social Responsibility (BSR), found that Facebook and Instagram violated Palestinians’ right to free expression.

“Based on the data reviewed, examination of individual cases and related materials, and external stakeholder engagement, Meta’s actions in May 2021 appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” BSR writes in its report.

The report also notes that “an examination of individual cases” showed that some Israeli accounts were also erroneously banned or restricted during this period. But the report’s authors highlight several systemic issues they say disproportionately affected Palestinians.

According to the report, “Arabic content had greater over-enforcement,” and “proactive detection rates of potentially violating Arabic content were significantly higher than proactive detection rates of potentially violating Hebrew content.” The report also notes that Meta had an internal tool for detecting “hostile speech” in Arabic, but not in Hebrew, and that Meta’s systems and moderators had lower accuracy when assessing Palestinian Arabic.

As a result, many users’ accounts were hit with “false strikes,” and wrongly had posts removed by Facebook and Instagram. “These strikes remain in place for those users that did not appeal erroneous content removals,” the report notes.

Meta had commissioned the report following a recommendation from the Oversight Board last fall. In a response to the report, Meta says it will update some of its policies, including several aspects of its Dangerous Individuals and Organizations (DOI) policy. The company says it’s “started a policy development process to review our definitions of praise, support and representation in our DOI Policy,” and that it’s “working on ways to make user experiences of our DOI strikes simpler and more transparent.”

Meta also notes it has “begun experimentation on building a dialect-specific Arabic classifier” for written content, and that it has changed its internal process for managing keywords and “block lists” that affect content removals.

Notably, Meta says it’s “assessing the feasibility” of a recommendation that it notify users when it places “feature limiting and search limiting” on users’ accounts after they receive a strike. Instagram users have long complained that the app shadowbans or reduces the visibility of their account when they post about certain topics. These complaints increased last spring when users reported that they were barred from posting about Palestine, or that the reach of their posts was diminished. At the time, Meta blamed an unspecified “glitch.” BSR’s report notes that the company had also implemented emergency “break glass” measures that temporarily throttled all “repeatedly reshared content.”

Twitter is logging out some users following password reset ‘incident’

Twitter has disclosed an “incident” affecting the accounts of an unspecified number of users who opted to reset their passwords. According to the company, a “bug” introduced sometime in the last year prevented Twitter users from being logged out of their accounts on all of their devices after initiating a password reset.

“[I]f you proactively changed your password on one device, but still had an open session on another device, that session may not have been closed,” Twitter explains in a brief blog post. “Web sessions were not affected and were closed appropriately.”

Twitter says it is “proactively” logging some users out as a result of the bug. The company attributed the issue to “a change to the systems that power password resets” that occurred at some point in 2021. A Twitter spokesperson declined to elaborate on when this change was made or exactly how many users are affected. “I can share that for most people, this wouldn’t have led to any harm or account compromise,” the spokesperson said.

While Twitter states that “most people” wouldn’t have had their accounts compromised as a result, the news could be worrying for those who have used shared devices or dealt with a lost or stolen device in the last year.

Notably, Twitter’s disclosure of the incident comes as the company is reeling from allegations from its former head of security, who filed a whistleblower complaint accusing the company of “grossly negligent” security practices. Twitter has so far declined to address the claims in detail, citing its ongoing litigation with Elon Musk. Musk is using the whistleblower allegations in his legal effort to get out of his $44 billion deal to buy Twitter.

Meta is reportedly cutting staff and reorganizing teams

Meta has begun cutting staff and reorganizing teams in an effort to cut costs, according to a new report in The Wall Street Journal. The company apparently doesn’t want to frame the changes as layoffs, but is reportedly “quietly nudging out a significant number of staffers” as it prepares for more significant cuts.

It’s not clear how many Meta employees have been affected so far. According to the report, Meta has been allowing staffers to apply for new jobs within the company, but workers only have a 30-day window to do so. The result, according to The Journal, is that “workers with good reputations and strong performance reviews are being pushed out on a regular basis.”

Meta has been signaling for some time that it will reduce staff and cut projects as it deals with shrinking revenue amid what Mark Zuckerberg has described as an “economic downturn.” The CEO warned during the company’s most recent earnings call that Meta would slow hiring and would need to “get more done with fewer resources.”

Zuckerberg recently told employees the company is facing “serious times,” and managers have been asked to identify “low performers” to cut. The company has also axed some projects from its Reality Labs division, which lost $10 billion in 2021. Dozens of Meta contractors employed by an outside firm were also recently told their jobs had been eliminated.

TikTok adds new rules for politicians’ accounts ahead of the midterm elections

TikTok is adding new rules for accounts belonging to politicians, government officials, and political parties ahead of the midterm elections. The company says it will require these accounts to go through a “mandatory” verification process and will restrict them from accessing advertising and other revenue-generating features.

Up until now, verification for politicians and other officials was entirely optional. But that’s now changing, at least in the United States, as TikTok gears up for the midterm elections this fall. In a blog post, the company says the update is meant to help it more consistently enforce its rules, which bar political advertising of any kind.

Once these accounts are verified, TikTok will be able to block politicians and political parties from accessing the platform’s advertising tools and other revenue-generating features like tipping. The accounts will also be barred from payouts from the company’s creator fund and from in-app shopping features.

TikTok says it also plans to add further restrictions that will prevent politicians and political parties from using the platform to solicit campaign contributions or other donations, even on outside websites. That policy, which will take effect “in the coming weeks,” will bar videos that direct viewers to third-party fundraising sites. It also means that politicians will not be allowed to post videos asking for donations.

The new policies are the latest piece of TikTok’s strategy to prepare for the midterm elections. The company has already begun rolling out an in-app Elections Center to highlight voting resources and details about local races. But enforcing its ban on political ads has proved challenging for TikTok, which has had to contend with undisclosed branded content from creators. The new rules don’t address that issue specifically, but the added restrictions will make it more difficult for campaigns, candidates, and other officials to evade them.

YouTube will share ad revenue with Shorts creators

YouTube just made a major change to its Partner Program that will allow its short-form video creators to make a lot more money from its platform. The company announced that it will share ad revenue with creators on its TikTok rival, YouTube Shorts.

The changes, which go into effect “early next year,” could help YouTube draw creators away from TikTok, where stars have complained about low creator fund payouts. “This is the first time real revenue sharing is being offered for short-form video on any platform at scale,” YouTube Chief Product Officer Neal Mohan said during an event announcing the news.

With the new revenue sharing program, creators with at least 1,000 subscribers who get 10 million views on Shorts in a 90-day period can apply to join the Partner Program. As on TikTok, ads on Shorts appear between videos in the feed. (The company began experimenting with ads on Shorts in May.) Revenue from those ads will be pooled and split among creators, Mohan said. Creators will get a 45 percent cut of that ad revenue, regardless of whether they use music.

“Each creator is paid on their share of total Shorts views, and this revenue share remains the same, even if they use music,” he explained. The company also said it would start testing its tipping feature, called Super Thanks, in Shorts, “with a complete rollout expected next year.”

YouTube Shorts creators can join the Partner Program. (Image: YouTube)

Up until now, YouTube had a dedicated $100 million creator fund for Shorts. But creators have long complained that these types of funds are insufficient and don’t come close to what the most successful creators can make producing longer-form videos, where they get a share of the ad revenue.

For example, Jimmy Donaldson, the YouTuber known as Mr. Beast, shared earlier this year that he had made just $15,000 from TikTok despite more than a billion views in the app. Donaldson is widely credited as one of the top-earning creators on YouTube, and made $54 million on the platform in 2021. TikTok said in May that it was in the early stages of a revenue sharing program called TikTok Pulse.

YouTube also announced a new tier for the Partner Program that’s meant to make it easier for early-stage creators to start monetizing their content. The new tier, called “Fan Funding,” will have “lower requirements” for accessing features like Super Thanks, Super Chat, stickers, and channel memberships, Mohan said. YouTube said it would share more details about the requirements in 2023.

Finally, the company revealed Creator Music, a section of YouTube Studio where creators can purchase “affordable, high-quality music licenses that offer them full monetizing potential.” Those who buy the licenses will “keep the same revenue share they’d usually make on videos without any music.” Creator Music will also offer the option to use songs without paying up front; instead, the creator and the artist will share revenue from the video.

The change could solve another major headache for YouTubers, who have long complained about overzealous music labels’ copyright claims leading to takedowns and lost revenue. In a blog post, YouTube says it hopes the feature will help “build a bridge between the music industry and creators on our platform.”