Anti-trans hate is ‘widespread’ on Facebook, Instagram and Threads, report warns

Meta is failing to enforce its own rules against anti-trans hate speech on its platform, a new report from GLAAD warns. The LGBTQ advocacy group found that “extreme anti-trans hate content remains widespread across Instagram, Facebook, and Threads.”

The report documents dozens of examples of hate speech from Meta’s apps, which GLAAD says were reported to the company between June 2023 and March 2024. But though the posts appeared to be clear violations of the company’s policies, “Meta either replied that posts were not violative or simply did not take action on them,” GLAAD says.

The reported content included posts with anti-trans slurs, violent and dehumanizing language, and promotions for conversion therapy, all of which are barred under Meta’s rules. GLAAD notes that some of the posts it reported came from influential accounts with large audiences on Facebook and Instagram. The group also shared two examples of posts from Threads, Meta’s newest app, where the company has tried to tamp down “political” content and other “potentially sensitive” topics.

“The company’s ongoing failure to enforce their own policies against anti-LGBTQ, and especially anti-trans hate, is simply unacceptable,” GLAAD’s CEO and President Sarah Kate Ellis said in a statement.

Meta didn’t immediately respond to a request for comment. But GLAAD’s report isn’t the first time the company has faced criticism for its handling of content targeting the LGBTQ community. Last year the Oversight Board urged Meta to “improve the accuracy of its enforcement on hate speech towards the LGBTQIA+ community.”

This article originally appeared on Engadget at https://www.engadget.com/anti-trans-hate-is-widespread-on-facebook-instagram-and-threads-report-warns-215538151.html?src=rss

The FTC might sue TikTok over its handling of users’ privacy and security

TikTok, already fighting a proposed law that could lead to a ban of the app in the United States, may soon also find itself in the crosshairs of the Federal Trade Commission. The FTC is close to wrapping up a multiyear investigation into the company, which could result in a lawsuit or major fine, Politico reports.

The investigation is reportedly centered around the app’s privacy and security practices, including its handling of children’s user data. According to Politico, the FTC is looking into potential violations of the Children's Online Privacy Protection Act (COPPA), as well as “allegations that the company misled its users by stating falsely that individuals in China do not have access to U.S. user data.” TikTok could also be penalized for violating the terms of its 2019 settlement with regulators over data privacy.

While it’s not clear if the FTC’s investigation will result in a lawsuit or other action, the investigation is yet another source of pressure for the company as it tries to secure its future in its largest market. After a quick passage in the House, the Senate is considering a bill that would force TikTok’s parent company, ByteDance, to sell the app or face an outright ban in the US. The Biden Administration, which has also tried to pressure ByteDance to divest TikTok, is backing the measure, and US intelligence officials have briefed lawmakers on the alleged national security risks posed by the app.

TikTok didn’t immediately respond to a request for comment.

This article originally appeared on Engadget at https://www.engadget.com/the-ftc-might-sue-tiktok-over-its-handling-of-users-privacy-and-security-224911806.html?src=rss

The Oversight Board weighs in on Meta’s most-moderated word

The Oversight Board is urging Meta to change the way it moderates the word “shaheed,” an Arabic term that has led to more takedowns than any other word or phrase on the company’s platforms. Meta asked the group for help crafting new rules last year after attempts to revamp the policy internally stalled.

The Arabic word “shaheed” is often translated as “martyr,” though the board notes that this isn’t an exact definition and the word can have “multiple meanings.” But Meta’s current rules are based only on the “martyr” definition, which the company says implies praise. This has led to a “blanket ban” on the word when used in conjunction with people designated as “dangerous individuals” by the company.

However, this policy ignores the “linguistic complexity” of the word, which is “often used, even with reference to dangerous individuals, in reporting and neutral commentary, academic discussion, human rights debates and even more passive ways,” the Oversight Board says in its opinion. “There is strong reason to believe the multiple meanings of ‘shaheed’ result in the removal of a substantial amount of material not intended as praise of terrorists or their violent actions.”

In its recommendations to Meta, the Oversight Board says that the company should end its “blanket ban” on the word being used to reference “dangerous individuals,” and that posts should only be removed if there are other clear “signals of violence” or if the content breaks other policies. The board also wants Meta to better explain how it uses automated systems to enforce these rules.

If Meta adopts the Oversight Board’s recommendations, it could have a significant impact on the platform’s Arabic-speaking users. The board notes that the word, because it is so common, likely “accounts for more content removals under the Community Standards than any other single word or phrase,” across the company’s apps.

“Meta has been operating under the assumption that censorship can and will improve safety, but the evidence suggests that censorship can marginalize whole populations while not improving safety at all,” the board’s co-chair (and former Danish prime minister) Helle Thorning-Schmidt said in a statement. “The Board is especially concerned that Meta’s approach impacts journalism and civic discourse because media organizations and commentators might shy away from reporting on designated entities to avoid content removals.”

This is hardly the first time Meta has been criticized for moderation policies that disproportionately impact Arabic-speaking users. A 2022 report commissioned by the company found that Meta’s moderators were less accurate when assessing Palestinian Arabic, resulting in “false strikes” on users’ accounts. The company apologized last year after Instagram’s automated translations began inserting the word “terrorist” into the profiles of some Palestinian users.

The opinion is also yet another example of how long it can take for Meta’s Oversight Board to influence the social network’s policies. The company first asked the board to weigh in on the rules more than a year ago (the Oversight Board said it “paused” the publication of its opinion after the October 7 attacks in Israel to ensure its recommendations “held up” to the “extreme stress” of the conflict in Gaza). Meta will now have two months to respond to the recommendations, though actual changes to the company’s policies and practices could take several more weeks or months to implement.

“We want people to be able to use our platforms to share their views, and have a set of policies to help them do so safely,” a Meta spokesperson said in a statement. “We aim to apply these policies fairly but doing so at scale brings global challenges, which is why in February 2023 we sought the Oversight Board's guidance on how we treat the word ‘shaheed’ when referring to designated individuals or organizations. We will review the Board’s feedback and respond within 60 days.”

This article originally appeared on Engadget at https://www.engadget.com/the-oversight-board-weighs-in-on-metas-most-moderated-word-100003625.html?src=rss

Judge dismisses X’s lawsuit against anti-hate group

A judge has dismissed a lawsuit from X against the Center for Countering Digital Hate (CCDH), a nonprofit that researches hate speech on the Elon Musk-owned platform. In the decision, the judge said that the lawsuit was an attempt to “punish” the organization for criticizing the company.

X sued the CCDH last summer, accusing the group of “scraping” its platform as part of a “scare campaign” to hurt its advertising business. The group had published research claiming X was failing to act on reports of hate speech, and was in some cases boosting such content.

In a ruling, federal judge Charles Breyer said that “this case is about punishing” CCDH for publishing unflattering research. “It is clear to the Court that if X Corp. was indeed motivated to spend money in response to CCDH’s scraping in 2023, it was not because of the harm such scraping posed to the X platform, but because of the harm it posed to X Corp.’s image,” Breyer wrote. “X Corp.’s motivation in bringing this case is evident. X Corp. has brought this case in order to punish CCDH for CCDH publications that criticized X Corp.—and perhaps in order to dissuade others.”

X said it planned to appeal the decision.

In a statement, CCDH CEO Imran Ahmed said that the ruling “affirmed our fundamental right to research, to speak, to advocate, and to hold accountable social media companies for decisions they make behind closed doors.” He added that “it is now abundantly clear that we need federal transparency laws” that would require online platforms to make data available to independent researchers.

This article originally appeared on Engadget at https://www.engadget.com/judge-dismisses-xs-lawsuit-against-anti-hate-group-173048754.html?src=rss

TikTok turns to teenage ‘youth council’ as part of its latest safety push

Last summer, TikTok said it planned to form a “youth council” of teens to advise the company as part of a broader push to beef up safety features for the app’s youngest users. That group is now official, and its members have already started meeting with the company, including CEO Shou Chew, TikTok announced.

The announcement comes as TikTok is fighting a bill that would force parent company ByteDance to sell the app or face a ban in the United States. As part of that effort, the company has tried to mobilize its users, many of them teens, to oppose the measure. TikTok’s critics often cite youth safety as one of the most significant risks posed by the app.

It’s not clear if the newly formed youth council will do much to counter that perception. But the company says the group has already influenced an upcoming media literacy campaign in the US that will “focus on misinformation, AI-generated content, and more.” The council, made up of 15 teens from the US, UK, Brazil, Indonesia, Ireland, Kenya, Mexico, and Morocco, has also weighed in on the app’s “youth portal” feature, which provides in-app privacy and security resources.

According to TikTok, the council is meant to advise on the safety policies and issues that often impact teens. The group also collaborates with UK online safety organization Praesidio Safeguarding, which helped select the council’s teenage members, all of whom are paid, according to TikTok. The company notes that CEO Shou Chew attended the most recent meeting in February, when the youth council asked TikTok to share more details about how reporting and blocking work in the app.

While it’s not yet clear how much, if any, influence TikTok’s youth council will ultimately wield over the company’s policies, it underscores just how important teens are to the platform. TikTok is one of the most dominant apps among teens in the US, currently the company’s largest market. The company has also leaned on them to oppose the bill that could lead to a ban of the app, though those efforts may have backfired.

This article originally appeared on Engadget at https://www.engadget.com/tiktok-turns-to-teenage-youth-council-as-part-of-its-latest-safety-push-130005305.html?src=rss

Senators ask intelligence officials to declassify details about TikTok and ByteDance

As the Senate considers the bill that would force a sale or ban of TikTok, lawmakers have heard directly from intelligence officials about the alleged national security threat posed by the app. Now, two prominent senators are asking the office of the Director of National Intelligence to declassify and make public what they have shared.

“We are deeply troubled by the information and concerns raised by the intelligence community in recent classified briefings to Congress,” Democratic Senator Richard Blumenthal and Republican Senator Marsha Blackburn write. “It is critically important that the American people, especially TikTok users, understand the national security issues at stake.”

The exact nature of the intelligence community's concerns about the app has long been a source of debate. Lawmakers in the House received a similar briefing just ahead of their vote on the bill. But while the briefing seemed to bolster support for the measure, some members said they left unconvinced, with one lawmaker saying that “not a single thing that we heard … was unique to TikTok.”

According to Axios, some senators described their briefing as “shocking,” though the group isn’t exactly known for their particularly nuanced understanding of the tech industry. (Blumenthal, for example, once pressed Facebook executives on whether they would “commit to ending finsta.”) In its report, Axios says that one lawmaker “said they were told TikTok is able to spy on the microphone on users' devices, track keystrokes and determine what the users are doing on other apps.” That may sound alarming, but it’s also a description of the kinds of app permissions social media services have been requesting for more than a decade.

TikTok has long denied that its relationship with parent company ByteDance would enable Chinese government officials to interfere with its service or spy on Americans. And so far, there is no public evidence that TikTok has ever been used in this way. If US intelligence officials do have evidence that is more than hypothetical, it would be a major bombshell in the long-running debate surrounding the app.

This article originally appeared on Engadget at https://www.engadget.com/senators-ask-intelligence-officials-to-declassify-details-about-tiktok-and-bytedance-180655697.html?src=rss

Researchers ask Meta to keep CrowdTangle online until after 2024 elections

The Mozilla Foundation and dozens of other research and advocacy groups are pushing back on Meta’s decision to shut down its research tool, CrowdTangle, later this year. In an open letter, the groups call on Meta to keep CrowdTangle online until after the 2024 elections, saying that shutting it down will harm their ability to track election misinformation in a year when “approximately half the world’s population” is slated to vote.

The letter, published by the Mozilla Foundation and signed by 90 groups as well as the former CEO of CrowdTangle, comes one week after Meta confirmed it would shut down the tool in August 2024. “Meta’s decision will effectively prohibit the outside world, including election integrity experts, from seeing what’s happening on Facebook and Instagram — during the biggest election year on record,” the letter writers say.

“This means almost all outside efforts to identify and prevent political disinformation, incitements to violence, and online harassment of women and minorities will be silenced. It’s a direct threat to our ability to safeguard the integrity of elections.” The group asks Meta to keep CrowdTangle online until January 2025, and to “rapidly onboard” election researchers onto its latest tools.

CrowdTangle has long been a source of frustration for Meta. It allows researchers, journalists and other groups to track how content is spreading across Facebook and Instagram. It’s also often cited by journalists in unflattering stories about Facebook and Instagram. For example, Engadget relied on CrowdTangle in an investigation into why Facebook Gaming was overrun with spam and pirated content in 2022. CrowdTangle was also the source for “Facebook’s Top 10,” a (now defunct) Twitter bot that posted daily updates on the most-interacted-with Facebook posts containing links. The project, created by a New York Times reporter, regularly showed far-right and conservative pages over-performing, leading Facebook executives to argue the data wasn't an accurate representation of what was actually popular on the platform.

With CrowdTangle set to shut down, Meta is instead highlighting a new program called the Meta Content Library, which provides researchers with new tools to access publicly accessible data in a streamlined way. The company has said it’s more powerful than what CrowdTangle enabled, but it’s also much more strictly controlled. Researchers from nonprofits and academic institutions must apply, and be approved, in order to access it. And since the vast majority of newsrooms are for-profit entities, most journalists will be automatically ineligible for access (it’s not clear if Meta would allow reporters at nonprofit newsrooms to use the Content Library).

The other issue, according to Brandon Silverman, CrowdTangle’s former CEO, who left Meta in 2021, is that the Meta Content Library isn’t currently powerful enough to be a full CrowdTangle replacement. “There are some areas where the MCL has way more data than CrowdTangle ever had, including reach and comments in particular,” Silverman wrote in a post on Substack last week. “But there are also some huge gaps in the tool, both for academics and civil society, and simply arguing that it has more data isn’t a claim that regulators or the press should take seriously.”

In a statement on X, Meta spokesperson Andy Stone said that “academic and nonprofit institutions pursuing scientific or public interest research can apply for access” to the Meta Content Library, including nonprofit election experts. “The Meta Content Library is designed to contain more comprehensive data than CrowdTangle.”

This article originally appeared on Engadget at https://www.engadget.com/researchers-ask-meta-to-keep-crowdtangle-online-until-after-2024-elections-211527731.html?src=rss

The case against the TikTok ban bill

A year ago, I visited TikTok’s US headquarters to preview its new “transparency center,” a central piece of its multibillion-dollar effort to convince the US its meme factory isn’t a national security threat. That effort has failed. The company’s negotiations with the government stalled out, and it is now facing the most serious threat yet to its future in the United States.

Last Wednesday, the House of Representatives overwhelmingly approved a bill that, if passed into law, would force ByteDance to sell TikTok or face an outright ban in the US. That lawmakers view TikTok with suspicion is nothing new. Because TikTok’s parent company, ByteDance, is based in China, they believe the Chinese government could manipulate TikTok’s algorithms or access its users’ data via ByteDance employees. But what has been surprising about the Protecting Americans from Foreign Adversary Controlled Applications Act is that it managed to gather so much support from both sides of the aisle seemingly out of nowhere.

After a surprise introduction, the bipartisan bill cleared committee in two days with a unanimous 50-0 vote, and was approved by the full House in a 352-65 vote less than a week later. Of the dozens of bills attempting to regulate the tech industry in recent years, including at least two to ban TikTok, none have gained nearly as much momentum.

But the renewed support for banning or forcing a sale of TikTok doesn’t seem to be tied to any newly uncovered information about TikTok, ByteDance or the Chinese Communist Party. Instead, lawmakers have largely been rehashing the same concerns that have been raised about the app for years.

One issue often raised is data access. TikTok, like many of its social media peers, scoops up large amounts of data from its users. The practice has gotten the company into hot water in the past when many of those users were discovered to be minors. Many lawmakers cite its large cache of user data, which they claim could be obtained by Chinese government officials, as one of the most significant risks posed by TikTok.

“Our bipartisan legislation would protect American social media users by driving the divestment of foreign adversary-controlled apps to ensure that Americans are protected from the digital surveillance and influence operations of regimes that could weaponize their personal data against them,” Representative Raja Krishnamoorthi, one of the bill’s co-sponsors, said in a statement.

TikTok has repeatedly denied sharing any data with the Chinese government and says it would not comply if asked to do so. However, ByteDance has been caught mishandling TikTok user data in the past. In 2022, ByteDance fired four employees, including two based in China, for accessing the data of reporters who had written stories critical of the company. There’s no evidence those actions were directed by the Chinese government.

In fact, the Protecting Americans from Foreign Adversary Controlled Applications Act would do little to address the data access issue, experts say. Even if the app were banned or controlled by a different company, Americans’ personal information would remain readily available from the largely unregulated data broker industry.

Data brokers gain access to vast troves of Americans’ personal data via scores of apps, websites, credit card companies and other businesses. Currently, there are few restrictions on what data can be collected or who can buy it. Biden Administration officials have warned that China is already buying up this data, much of it more revealing than anything TikTok collects.

“The data that's been collected about you will almost certainly live longer than you will, and there's really nothing you can do to delete it or get rid of it,” Justin Cappos, an NYU computer science professor and member of the NYU Center for Cybersecurity, told Engadget. “If the US really wants to solve this, the way to do it isn't to blame a social media company in China and make them the face of the problem. It's really to pass the meaningful data privacy regulations and go after [data] collection and go after these data brokers.”

The House recently passed a bill that would bar data brokers from selling Americans’ personal information to “adversary” countries like China. But, if passed, the law wouldn’t address the sale of that data to other entities or the wholesale collection of it to begin with.

Digital rights and free speech advocates like the Electronic Frontier Foundation (EFF) have also raised the possibility that the US forcing a ban or sale of TikTok could give other countries cover to enact similar bans or restrictions on US-based social media platforms. In a letter to lawmakers opposing the measure, the EFF, American Civil Liberties Union and other groups argued that it would “set an alarming global precedent for excessive government control over social media platforms.”

David Greene, a senior staff attorney at the EFF, notes that the United States has forcefully criticized nations that have banned social media apps. “The State Department has been highly critical of countries that have shut down services,” Greene told Engadget, noting that the US condemned the Nigerian government for blocking Twitter in 2021. “Shutting down a whole service is essentially an anti-democratic thing.”

Intelligence officials held a classified briefing with members of Congress about TikTok shortly before the vote on the House floor. That’s led some pundits to believe that there must be new information about TikTok, but some lawmakers have suggested otherwise. “Not a single thing that we heard in today’s classified briefing was unique to TikTok,” Representative Sara Jacobs told the Associated Press. “It was things that happen on every single social media platform.” Likewise, the top Democrat on the House Intelligence Committee, Representative Jim Himes, said that TikTok is “largely a potential threat … if Congress were serious about dealing with this threat, we would start with a federal privacy bill.”

This article originally appeared on Engadget at https://www.engadget.com/the-case-against-the-tiktok-ban-bill-161517973.html?src=rss

House passes bill that would bar data brokers from selling Americans’ personal information to ‘adversary’ countries

The House of Representatives approved a measure targeting data brokers’ ability to sell Americans’ personal data to “adversary” countries like Russia, China, Iran and North Korea. The Protecting Americans’ Data from Foreign Adversaries Act passed with a unanimous 414-0 vote.

The bill, which was introduced alongside a measure that could force a ban or sale of TikTok, would prohibit data brokers from selling Americans’ “sensitive” data to people or entities in “adversary” countries. Much like a recent executive order from President Joe Biden targeting data brokers, the bill specifically covers geolocation, financial, health, and biometric data, as well as other private information like text logs and phone call history.

If passed — the bill will need Senate approval before landing on Biden's desk — it would represent a significant check on the relatively unregulated data broker industry. US officials have previously warned that China and other geopolitical rivals of the United States have already acquired vast troves of Americans’ information from brokers, and privacy advocates have long urged lawmakers to regulate the multibillion-dollar industry.

The bill is the second major piece of bipartisan legislation to come out of the House Energy and Commerce Committee this month. The committee previously introduced the “Protecting Americans from Foreign Adversary Controlled Applications Act,” which would require TikTok to divest itself from parent company ByteDance or face a ban in the US. In a statement, Representatives Frank Pallone and Cathy McMorris Rodgers said that the latest bill “builds” on their work to pass the measure targeting TikTok. “Today’s overwhelming vote sends a clear message that we will not allow our adversaries to undermine American national security and individual privacy by purchasing people’s personally identifiable sensitive information from data brokers,” they said.

This article originally appeared on Engadget at https://www.engadget.com/house-passes-bill-that-would-bar-data-brokers-from-selling-americans-personal-information-to-adversary-countries-004735748.html?src=rss

Here’s a video of the first human Neuralink patient controlling a computer with his thoughts

Earlier this year, Elon Musk announced that the first human patient had received a Neuralink brain implant as part of the company’s first clinical trial. Now, the company has shared a brief public demo of the brain-computer interface (BCI) in action.

The company briefly live streamed a demo on X with a 29-year-old man named Nolan Arbaugh, who said he was paralyzed from the neck down after a diving accident eight years ago. In the video, Arbaugh explains that after receiving the implant — he said the surgery was “super easy” — he had to learn how to differentiate “imagined movement versus attempted movement” in order to learn to control a cursor on a screen.

“A lot of what we started out with was attempting to move,” Arbaugh said. “I would attempt to move, say, my right hand left, right forward, back. And from there, I think it just became intuitive for me to start imagining the cursor moving.”

In the clip, which also features a Neuralink engineer, Arbaugh demonstrates the BCI by moving a cursor around the screen of a laptop, and pausing an on-screen music player. He said the implant has allowed him to play chess and Civilization VI. He noted that he has previously used other assistive devices like mouthsticks, but that the Neuralink implant has enabled longer gaming sessions, as well as online play. He said that he can get about eight hours of use before the implant needs to recharge (it’s not clear how charging works).

Arbaugh became the first human patient to receive the implant in January after Neuralink began recruiting patients last year. The company previously tested the BCI in animals, including chimps, and some of its animal testing practices have been the subject of federal investigations.

In the video, Arbaugh indicated his experience with the brain implant has so far been positive, despite some initial issues. “It's not perfect, I would say that we have run into some issues,” he said. “I don't want people to think that this is the end of the journey. There's a lot of work to be done, but it has already changed my life.”

This article originally appeared on Engadget at https://www.engadget.com/heres-a-video-of-the-first-human-neuralink-patient-controlling-a-computer-with-his-thoughts-235659486.html?src=rss