Women of color running for Congress are attacked disproportionately on X, report finds

Women of color running for Congress in 2024 have faced a disproportionate number of attacks on X compared with other candidates, according to a new report from the nonprofit Center for Democracy and Technology (CDT) and the University of Pittsburgh.

The report sought to “compare the levels of offensive speech and hate speech that different groups of Congressional candidates are targeted with based on race and gender, with a particular emphasis on women of color.” To do this, the report’s authors analyzed 800,000 tweets that covered a three-month period between May 20 and August 23 of this year. That dataset represented all posts mentioning a candidate running for Congress with an account on X.

The report’s authors found that more than 20 percent of posts directed at Black and Asian women candidates “contained offensive language about the candidate.” It also found that Black women in particular were targeted with hate speech more often compared with other candidates.

“On average, less than 1% of all tweets that mentioned a candidate contained hate speech,” the report says. “However, we found that African-American women candidates were more likely than any other candidate to be subject to this type of post (4%).” That roughly lines up with X’s recent transparency report — the company’s first since Elon Musk took over the company — which said that rule-breaking content accounts for less than 1 percent of all posts on its platform.

In a statement, an X spokesperson said the company had suspended more than 1 million accounts and removed more than 2 million posts in the first half of 2024 for breaking the company's rules. "While we encourage people to express themselves freely on X, abuse, harassment, and hateful conduct have no place on our platform and violate the X Rules," the spokesperson said. 

Notably, the CDT’s report analyzed both hate speech — which ostensibly violates X’s policies — and “offensive speech,” which the report defined as “words or phrases that demean, threaten, insult, or ridicule a candidate.” While the latter category may not be against X’s rules, the report notes that the volume of such attacks could still deter women of color from running for office. It recommends that X and other platforms take “specific measures” to counteract such effects.

“This should include clear policies that prohibit attacks against someone based on race or gender, greater transparency into how their systems address these types of attacks, better reporting tools and means for accountability, regular risk assessments with an emphasis on race and gender, and privacy preserving mechanisms for independent researchers to conduct studies using their data. The consequences of the status-quo where women of color candidates are targeted with significant attacks online at much higher rates than other candidates creates an immense barrier to creating a truly inclusive democracy.”

Update: October 2, 2024, 12:13 PM ET: This post was updated to include a statement from an X spokesperson. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/women-of-color-running-for-congress-are-attacked-disproportionately-on-x-report-finds-043206066.html?src=rss

Meta needs updated rules for sexually explicit deepfakes, Oversight Board says

Meta’s Oversight Board is urging the company to update its rules around sexually explicit deepfakes. The board made the recommendations as part of its decision in two cases involving AI-generated images of public figures.

The cases stem from two user appeals over AI-generated images of public figures, though the board declined to name the individuals. One post, which originated on Instagram, depicted a nude Indian woman. The post was reported to Meta but the report was automatically closed after 48 hours, as was a subsequent user appeal. The company eventually removed the post after attention from the Oversight Board, which nonetheless overturned Meta’s original decision to leave the image up.

The second post, which was shared to a Facebook group dedicated to AI art, showed “an AI-generated image of a nude woman with a man groping her breast.” Meta automatically removed the post because it had been added to an internal system that can identify images that have been previously reported to the company. The Oversight Board found that Meta was correct to have taken the post down.

In both cases, the Oversight Board said the AI deepfakes violated the company’s rules barring “derogatory sexualized photoshop” images. But in its recommendations to Meta, the Oversight Board said the current language used in these rules is outdated and may make it more difficult for users to report AI-made explicit images.

Instead, the board says Meta should update its policies to make clear that it prohibits non-consensual explicit images that are AI-made or manipulated. “Much of the non-consensual sexualized imagery spread online today is created with generative AI models that either automatically edit existing images or create entirely new ones,” the board writes. “Meta should ensure that its prohibition on derogatory sexualized content covers this broader array of editing techniques, in a way that is clear to both users and the company’s moderators.”

The board also called out Meta’s practice of automatically closing user appeals, which it said could have “significant human rights impacts” on users. However, the board said it didn’t have “sufficient information” about the practice to make a recommendation.

The spread of explicit AI images has become an increasingly prominent issue as “deepfake porn” has become a more widespread form of online harassment in recent years. The board’s decision comes one day after the US Senate unanimously passed a bill cracking down on explicit deepfakes. If passed into law, the measure would allow victims to sue the creators of such images for as much as $250,000.

The cases aren’t the first time the Oversight Board has pushed Meta to update its rules for AI-generated content. In another high-profile case, the board investigated a maliciously edited video of President Joe Biden. The case ultimately resulted in Meta revamping its policies around how AI-generated content is labeled.

This article originally appeared on Engadget at https://www.engadget.com/meta-needs-updated-rules-for-sexually-explicit-deepfakes-oversight-board-says-100005969.html?src=rss

Snap will pay $15 million to settle California lawsuit alleging sexual discrimination

The California Civil Rights Department has revealed that Snap Inc. has agreed to pay $15 million to settle the lawsuit it filed "over alleged discrimination, harassment, and retaliation against women at the company." California's civil rights agency started investigating the company behind Snapchat over three years ago due to claims that it discriminated and retaliated against female employees. The agency accused the company of failing to make sure that female employees were paid equally despite a period of rapid growth between 2015 and 2022. 

Women, especially those in engineering roles, were allegedly discouraged from applying for promotions and lost them to less qualified male colleagues when they did. The agency said that they also had to endure unwelcome sexual advances and faced retaliation when they spoke up. Female employees were given negative performance reviews, were denied opportunities and, ultimately, were terminated.

"In California, we’re proud of the work of our state’s innovators who are a driving force of our nation’s economy," CRD Director Kevin Kish said in a statement. "We're also proud of the strength of our state’s civil rights laws, which help ensure every worker is protected against discrimination and has an opportunity to thrive. This settlement with Snapchat demonstrates a shared commitment to a California where all workers have a fair chance at the American Dream. Women are entitled to equality in every job, in every workplace, and in every industry."

Snap denies that it has an issue with pay inequality and sexual discrimination. In a statement sent to Politico and Bloomberg, it says it only decided to settle due to the costs and impact of lengthy litigation. "We care deeply about our commitment to maintain a fair and inclusive environment at Snap, and do not believe we have any ongoing systemic pay equity, discrimination, harassment, or retaliation issues against women. While we disagreed with the California Civil Rights Department's claims and analyses, we took into consideration the cost and impact of lengthy litigation, and the scope of the CRD’s other settlements, and decided it is in the best interest of the company to resolve these claims and focus on the future," the company explains.

Under the settlement terms, which still have to be approved by a judge, $14.5 million of the total amount will go towards women who worked as employees at Snap Inc. in California between 2014 and 2024. The company will also be required to have a third-party monitor audit its sexual harassment, retaliation and discrimination compliance.

California's Civil Rights Department was the same agency that sued Activision Blizzard in 2021 and accused the company of fostering a "frat boy" culture that encouraged rampant misogyny and sexual harassment. The agency also found that women in the company were overlooked for promotions and were paid less than their male colleagues. It settled with the video game developer in late 2023 for $54 million, though it had to withdraw its claims that there was widespread sexual harassment at the company. 

This article originally appeared on Engadget at https://www.engadget.com/snap-will-pay-15-million-to-settle-california-lawsuit-alleging-sexual-discrimination-120019788.html?src=rss

X now treats the term cisgender as a slur

The increasingly discriminatory X (Twitter) now considers the term “cisgender” a slur. Owner Elon Musk posted last June, to the delight of his bigoted brigade of blue-check sycophants, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X made good on the regressive provocateur’s stance and reportedly began posting an official warning that the LGBTQ-inclusive terms could result in a ban from the platform. Not that you’d miss much.

TechCrunch reported on Tuesday that trying to publish a post using the terms “cisgender” or “cis” in the X mobile app will pop up a full-screen warning reading, “This post contains language that may be considered a slur by X and could be used in a harmful manner in violation of our rules.” It then gives you the choice of continuing to publish the post or conforming to the backward views of the worst of us and deleting it.

Of course, neither form of the term cisgender is a slur.

As the historically marginalized transgender community finally began finding at least a sliver of widespread and long overdue social acceptance in the 21st century, the term became more commonly used in the mainstream lexicon to describe people whose gender identity matches their sex at birth. Organizations including the American Psychological Association, the World Health Organization, the American Medical Association and the American Psychiatric Association recognize the term.

But some people have a hard time accepting and respecting that some humans are different from others. Those fantasizing (against all evidence and scientific consensus) that the heteronormative ideals they grew up with are absolute gospel sometimes take great offense at being asked to adjust their vocabulary to communicate respect for a community that has spent centuries forced to live in the shadows or risk their safety due to the widespread pathologization of their identities. 

Musk seems to consider those the good ol’ days.

This isn’t the billionaire’s first ride on the Transphobe Train. After his backward tweet last June (on the first day of Pride Month, no less), the edgelord’s platform ran a timeline takeover ad from a right-wing nonprofit, plugging a transphobic propaganda film. In case you’re wondering if the group may have anything of value to say, TechCrunch notes that the same organization also doubts climate change and downplays the dehumanizing atrocities of slavery.

X also reversed course on a policy, implemented long before Musk’s takeover, that banned the deadnaming or misgendering of transgender people.

This article originally appeared on Engadget at https://www.engadget.com/x-now-treats-the-term-cisgender-as-a-slur-211117779.html?src=rss

Meta’s Oversight Board will rule on AI-generated sexual images

Meta’s Oversight Board is once again taking on the social network’s rules for AI-generated content. The board has accepted two cases that deal with AI-made explicit images of public figures.

While Meta’s rules already prohibit nudity on Facebook and Instagram, the board said in a statement that it wants to address whether “Meta’s policies and its enforcement practices are effective at addressing explicit AI-generated imagery.” Sometimes referred to as “deepfake porn,” AI-generated images of female celebrities, politicians and other public figures have become an increasingly prominent form of online harassment and have drawn a wave of proposed regulation. With the two cases, the Oversight Board could push Meta to adopt new rules to address such harassment on its platform.

The Oversight Board said it’s not naming the two public figures at the center of each case in an effort to avoid further harassment, though it described the circumstances around each post.

One case involves an Instagram post showing an AI-generated image of a nude Indian woman that was posted by an account that “only shares AI-generated images of Indian women.” The post was reported to Meta but the report was closed after 48 hours because it wasn’t reviewed. The same user appealed that decision but the appeal was also closed and never reviewed. Meta eventually removed the post after the user appealed to the Oversight Board and the board agreed to take the case.

The second case involved a Facebook post in a group dedicated to AI art. The post in question showed “an AI-generated image of a nude woman with a man groping her breast.” The woman was meant to resemble “an American public figure” whose name was also in the caption of the post. The post was taken down automatically because it had been previously reported and Meta’s internal systems were able to match it to the prior post. The user appealed the decision to take it down but the appeal was “automatically closed.” The user then appealed to the Oversight Board, which agreed to consider the case.

In a statement, Oversight Board co-chair Helle Thorning-Schmidt said that the board took up the two cases from different countries in order to assess potential disparities in how Meta’s policies are enforced. “We know that Meta is quicker and more effective at moderating content in some markets and languages than others,” Thorning-Schmidt said. “By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way.”

The Oversight Board is asking for public comment for the next two weeks and will publish its decision sometime in the next few weeks, along with policy recommendations for Meta. A similar process involving a misleadingly edited video of Joe Biden recently resulted in Meta agreeing to label more AI-generated content on its platform.

This article originally appeared on Engadget at https://www.engadget.com/metas-oversight-board-will-rule-on-ai-generated-sexual-images-100047138.html?src=rss

The Morning After: Zuckerberg’s Vision Pro review, and robotaxis crashing twice into the same truck

Sometimes, timing ruins things. Take this week: instead of detailing the disgust I feel toward this 'meaty' rice, The Morning After sets its sights on Mark Zuckerberg, the multimillionaire who's decided to review technology now. Does he know that's my gig?

The Meta boss unfavorably compared Apple's new Vision Pro to his company's Meta Quest 3 headset, which is a delightfully hollow and petty reason to 'review' something. But hey, I had to watch it. And now, maybe you'll watch me? 

We also take a closer look at Waymo's disastrous December, when two of its robotaxis collided with a truck. The... same truck.

This week:

🥽🥽: Zuckerberg thinks the Quest 3 is a 'better product' than the Vision Pro

🤖🚙💥💥: Waymo robotaxis crash into the same pickup truck, twice

🚭🛫🚫: United Airlines grounds new Airbus fleet over no smoking sign law

Read this:

GLAAD, the world's largest LGBTQ media advocacy group, has published its first annual report on the video game industry. It found that nearly 20 percent of all players in the United States identify as LGBTQ, yet just 2 percent of games contain characters and storylines relevant to this community. And half of those might be Baldur's Gate 3 alone. (I half-joke.) The report notes that not only does representation matter to many LGBTQ players, but also that new generations of gamers are only becoming increasingly more open to queer content regardless of their sexual orientation. We break down the full report here.

Like email more than video? Subscribe right here for daily reports, direct to your inbox.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-zuckerbergs-vision-pro-review-and-robotaxis-crashing-twice-into-same-truck-150021958.html?src=rss