X walks back its misgendering policy after right-wing complaints

X has, once again, quietly changed its rules around deadnaming and misgendering without an explanation. With the latest change, it seems there will be no penalties for misgendering or deadnaming people on X after all, except in cases where it may be “required by local laws.”

The update, which was first spotted by Mashable, comes after X appeared to reinstate some aspects of Twitter’s former policy, which fell under its hateful conduct rules. Prior to Elon Musk’s takeover, Twitter had barred targeted deadnaming and misgendering. That section of the company’s rules then disappeared last April. Then, last week, Ars Technica noted that the policy was quietly updated to indicate that X would “reduce the visibility of posts that purposefully use different pronouns to address someone other than what that person uses for themselves, or that use a previous name that someone no longer goes by as part of their transition.”

While it wasn’t a full reversal of the earlier policy — under the company’s previous leadership, intentional misgendering was grounds for a suspension — it seemed that there once again would be penalties for this type of harassment. Now, that section of X’s rules is prefaced with “where required by local laws.”

As with so much of what happens at X, there is significant confusion about the policy, as the company’s rules seem to change based on the whims of Musk rather than a considered process. This was on display over the last few days as Musk fielded several complaints from right-wing personalities about last week’s change. On Thursday, Musk told one such account that the update “is just about repeated, targeted harassment of a particular person.” But by Saturday, Musk was offering a new explanation. “Turns out this was due to a court judgment in Brazil, which is being appealed, but should not apply outside of Brazil,” he said.

X didn’t respond to a request for comment about the policy or why it was changed twice in a matter of days. But Musk is known to be sympathetic to people who regularly engage in anti-trans harassment. One of his first moves after taking over the company was to reinstate a number of accounts banned for violating the company's previous hateful conduct policy. He has also repeatedly mocked people who specify their pronouns and publicly criticized X staff for attempting to apply the company’s “freedom of speech, not reach” policy to a transphobic documentary.

This article originally appeared on Engadget at https://www.engadget.com/x-walks-back-its-misgendering-policy-after-right-wing-complaints-202433498.html?src=rss

X reinstates policy against deadnaming and misgendering

X updated its abuse and harassment page in January, adding a new section that explains its rule against intentionally using the wrong pronouns for a person or referring to them by a name they no longer go by. As noticed by Ars Technica, the new section, entitled "Use of Prior Names and Pronouns," states that the service will "reduce the visibility of posts" that use pronouns for a person other than those they use for themselves or that use a previous name someone no longer goes by as part of their transition.

The social networking service formerly known as Twitter removed its longtime policy against deadnaming and misgendering transgender individuals just as quietly back in April 2023. GLAAD CEO Sarah Kate Ellis said at the time that X's decision was "the latest example of just how unsafe the company is for users and advertisers alike." It's worth noting that Elon Musk, the website's owner, has a history of liking and sharing anti-trans posts and talking points. 

Under the new policy, X will only act on a post if it hears from the target themselves, "given the complexity of determining whether such a violation has occurred." That puts the onus on the target, who might end up being blamed for not reporting if they choose to distance themselves from the abuse. Jenni Olson, GLAAD's senior director of social media safety, told Ars that the organization doesn't recommend self-reporting for social media platforms. Still, Olson said, policies that clearly prohibit deadnaming and misgendering trans people are better than vague ones that leave users guessing whether such behavior violates a platform's rules.

X reduces the visibility of posts by removing them from search results, home timelines, trends and notifications. These posts will also be downranked in the replies section and can only be discovered through the authors' profiles. Finally, they will not be displayed on the X website or app with ads adjacent to them, which could prevent a repeat of the ad revenue losses the company suffered last year. In late 2023, advertisers pulled their campaigns from the website just before the holidays after Media Matters published a report showing ads on the website right next to antisemitic content.

This article originally appeared on Engadget at https://www.engadget.com/x-reinstates-policy-against-deadnaming-and-misgendering-114608696.html?src=rss

Meta is killing the Facebook News tab in the US and Australia

In early April, the Facebook News tab will start disappearing for users in the US and Australia. Meta has announced that it's deprecating the dedicated tab found in the bookmarks section of its social network as part of its efforts to "align [its] investments to [its] products and services people value the most." The company already retired the News tab in the UK, France and Germany in early December 2023, explaining that it's funneling its resources into other things people want to see more of, such as short-form videos.

In its new post, Meta said the number of people using the News tab in the US and Australia has dropped by 80 percent over the past year. News apparently makes up less than three percent of what users see on Facebook, and it's just not a big part of their experience. "We know that people don’t come to Facebook for news and political content — they come to connect with people and discover new opportunities, passions and interests," the company wrote.

By pulling the News tab in Australia, the company will also stop paying publishers in the country for their content after their current deals end. A few years ago, Facebook blocked news links in the country in response to the then-proposed law that would require companies like Meta to pay media organizations for their content. The company unblocked news links just a few days later after it started striking deals with Australian media organizations.

According to The Age, the Australian Competition and Consumer Commission believes that Google and Meta inked deals with dozens of outlets, including Guardian Australia and News Corp Australia, worth about $200 million a year. Meta is responsible for around one-third, or $66 million, of that total, meaning its decision is bound to have a huge impact on the news business in the country. And there seems to be no room for negotiation: The company made it clear in its announcement that it's not going to enter new commercial deals for traditional news content in any of the regions where it has already removed the News tab.

Meta has not blocked news links in the aforementioned countries, however, and Facebook users can still access any news that's been posted on the social network. Publishers can also continue posting links to their stories on their official pages as usual.

This article originally appeared on Engadget at https://www.engadget.com/meta-is-killing-the-facebook-news-tab-in-the-us-and-australia-082750820.html?src=rss

Substack has direct messages now

Substack newsletter writers and readers can now send direct messages to each other. The company says this was a highly requested feature and it adds to the platform's slate of social networking tools.

You'll find DMs in the Chat tab on the app and website. You can start a private conversation from that tab, someone's profile page or by selecting the Share option on a note or post. When you get a DM, Substack will let you know in the app and by email.

By default, DMs from people you're connected to will land in your inbox and those from others will drop into a Requests folder. Writers can restrict incoming DM requests to paid or founding subscribers. Free subscribers who try to message you will then see a prompt to become a paid subscriber. Writers can include a "send a message" button on their posts if they wish.

If you've blocked or banned someone, they won't be able to send you a DM. You can also turn off DMs entirely by disabling message requests in your settings. If you receive a message that breaks Substack's rules, you can report it. 

Substack has added a number of social networking features over the last year or so, such as the X-like Notes function for short-form posts. It also last week updated a system that allows writers to recommend other scribes to readers.

The platform came under fire last month over its handling of pro-Nazi content. It removed five newsletters that promoted white nationalist and Nazi views. However, some prominent newsletter writers left Substack in protest over its approach to content moderation.

This article originally appeared on Engadget at https://www.engadget.com/substack-has-direct-messages-now-184154827.html?src=rss

Google CEO says Gemini image generation failures were ‘unacceptable’

Google CEO Sundar Pichai addressed the company’s recent issues with its AI-powered Gemini image generation tool after it started overcorrecting for diversity in historical images. He called the turn of events “unacceptable” and said the company is “working around the clock” on a fix, according to an internal employee memo published by Semafor.

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” Pichai wrote to staffers. “And we’ll review what happened and make sure we fix it at scale.”

Pichai remains optimistic regarding the future of the Gemini chatbot, formerly called Bard, noting that the team has already “seen substantial improvement on a wide range of prompts.” The image generation aspect of Gemini will remain paused until a fix is fully worked out.

This all started when Gemini users noticed the generator cranking out historically inaccurate images, like pictures of Nazis and America’s Founding Fathers as people of color. The issue quickly blew up on social media, with the word “woke” being thrown around a whole lot.

Prabhakar Raghavan, Google’s senior vice president for knowledge and information, did not lay the blame on wokeness, but rather on a series of tuning errors. Basically, the model was fine-tuned to allow for diverse groups of people in pictures, but it “failed to account for cases that should clearly not show a range.” This led to controversial images like people of color showing up as Vikings and Native American Catholic Popes.

Raghavan also said that the model became more cautious over time, occasionally refusing to answer certain prompts after wrongly interpreting them as sensitive. This accounts for reports that the model refused to generate images of white people.

It sounds like the company was trying to both please a global audience and ensure the model didn’t fall into some of the traps of rival products, like creating sexually explicit images or depictions of real people. Tuning these AI models is extremely delicate work and the software can easily be led to make ridiculous errors. It’s what they do. In any event, I’d prefer a historically inaccurate Catholic Pope over unexpected violent imagery any day of the week. Chalk this up as yet another reminder that AI still has a long way to go. 

As for Gemini, the company promises the image generator will return in the near future, but it still requires a suite of fixes and tests to make sure this never happens again, including “structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming and technical recommendations.”

This article originally appeared on Engadget at https://www.engadget.com/google-ceo-says-gemini-image-generation-failures-were-unacceptable-163748934.html?src=rss
