Malaysia lifts ban on Grok after taking X at its word

After being one of the first countries in the world to block Elon Musk’s Grok chatbot, Malaysia has now lifted its ban. Along with Indonesia, the country moved swiftly to temporarily halt access to X's frequently controversial AI chatbot earlier this month, after multiple reports emerged of it being used to generate deepfake sexualized images of people, including women and children.

At the time, the Malaysian Communications and Multimedia Commission (MCMC) said the restrictions would remain in place until X Corp and parent xAI could prove they had enforced the necessary safeguards against misuse of the above nature.

Malaysian authorities appear to be taking X at its word, after the MCMC released a statement confirming it was satisfied that Musk’s company had implemented the required safety measures. It added that authorities would continue to monitor the social media platform, and that any further user safety breaches or violations of Malaysian laws would be dealt with firmly.

At the time of writing, only Malaysia and Indonesia have hit Grok with official bans, though UK regulator Ofcom has opened a formal investigation into X under the country’s Online Safety Act in the wake of the non-consensual sexual deepfake scandal. X has since changed its image-editing policies, and on January 14 the company said Grok will no longer allow "the editing of images of real people in revealing clothing such as bikinis."

Earlier this week, the Center for Countering Digital Hate (CCDH), a UK-based non-profit, estimated that in the 11-day period between December 29 and January 9, Grok generated approximately 3 million sexualized images, around 23,000 of which were of children.


California AG sends cease and desist to xAI over Grok’s explicit deepfakes

California Attorney General Rob Bonta has sent a cease and desist letter to xAI, days after his office launched an official investigation into the company over reports that Grok was generating nonconsensual sexually explicit deepfakes of real people, including children.

If you’ll recall, xAI and Grok have been under fire for taking images of real people and editing them into revealing clothing like bikinis at random users’ requests.

Bonta’s office demanded that xAI immediately cease and desist from creating “digitized sexually explicit material” when the depicted individual did not consent to it or is a minor. It also demanded that xAI stop “facilitating or aiding and abetting the creation… or publication of digitized sexually explicit material” of nonconsenting individuals and persons under 18 years of age.

X changed its policies after the scandal broke, preventing the Grok account from editing images of real people into revealing clothing. xAI also moved Grok’s image-generating features behind a paywall and geoblocked paying users’ ability to edit images of real people into bikinis, but only in regions where it’s illegal.

In his announcement, Bonta said xAI developed a “spicy mode” for Grok to generate explicit content and used it as a marketing point. The California AG also said that Grok-generated sexual images are being used to harass both public figures and ordinary users. “Most alarmingly, news reports have described the use of Grok to alter images of children to depict them in minimal clothing and sexual situations,” Bonta’s announcement reads.

“The actions above violate California law, including California Civil Code section 1708.86, California Penal Code sections 311 et seq. and 647(j)(4), and California Business & Professions Code section 17200,” it said. The state’s Department of Justice now expects to hear from xAI on the steps it’s taking to address these issues within the next five days.


TikTok sued by former workers over alleged union-busting

You know things are messed up when a Big Tech company fights accusations of union-busting by insisting the firings were just AI-driven layoffs. That's where things stand after a group of fired TikTok moderators in the UK filed a legal claim with an employment tribunal. The Guardian reported on Friday that around 400 TikTok content moderators who were unionizing were laid off before Christmas.

The workers were sacked a week before a vote was scheduled to establish a collective bargaining unit. The moderators said they wanted better protection against the personal toll of processing traumatic content at a high speed. They accused TikTok of unfair dismissal and violating UK trade union laws.

"Content moderators have the most dangerous job on the internet," John Chadfield, the national officer for tech workers at the Communication Workers Union (CWU), said in a statement to The Guardian. "They are exposed to the child sex abuse material, executions, war and drug use. Their job is to make sure this content doesn't reach TikTok's 30 million monthly users. It is high pressure and low paid. They wanted input into their workflows and more say over how they kept the platform safe. They said they were being asked to do too much with too few resources."

TikTok denied that the firings were union-busting, calling the accusations "baseless." Instead, the company claimed the layoffs were part of a restructuring plan amid its adoption of AI for content moderation. The company said 91 percent of transgressive content is now removed automatically.

The company first announced a restructuring exercise in August, just as hundreds of moderators in TikTok's London offices were organizing for union recognition. At the time, John Chadfield, CWU's National Officer for Tech, said the workers had long been "sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives."

"That TikTok management have announced these cuts just as the company's workers are about to vote on having their union recognised stinks of union-busting and putting corporate greed over the safety of workers and the public,” Chadfield said.


X says Grok will no longer edit images of real people into bikinis

X says it is changing its policies around Grok’s image-editing abilities following a multi-week outcry over repeated accusations that the chatbot has generated sexualized images of children and nonconsensual nudity. In an update shared from the @Safety account on X, the company said it has “implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis.”

The new safeguards, according to X, will apply to all users regardless of whether they pay for Grok. xAI is also moving all of Grok’s image-generating features behind its subscriber paywall so that non-paying users will no longer be able to create images. And it will geoblock "the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X" in regions where it's illegal.
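Read together, X is describing three layered gates: a blanket block on editing real people into revealing clothing, a subscriber paywall on image generation, and region-specific blocks where such imagery is illegal. The sketch below is a purely hypothetical illustration of that ordering; the type, field and function names are our own assumptions, not anything xAI has published.

```python
# Hypothetical sketch of the layered safeguards X describes above.
# None of these names come from xAI; this only illustrates the stated policy logic.
from dataclasses import dataclass

@dataclass
class ImageRequest:
    edits_real_person: bool               # request edits an existing photo of a real person
    depicts_real_person_revealing: bool   # real person in bikinis, underwear or similar attire
    user_is_subscriber: bool              # paying Grok subscriber
    region_prohibits_such_images: bool    # local law bans this kind of image

def is_request_allowed(req: ImageRequest) -> bool:
    # 1. Applies to all users: no editing images of real people into revealing clothing.
    if req.edits_real_person and req.depicts_real_person_revealing:
        return False
    # 2. Image-generation features now sit behind the subscriber paywall.
    if not req.user_is_subscriber:
        return False
    # 3. Geoblock: no real people in revealing attire where local law prohibits it.
    if req.depicts_real_person_revealing and req.region_prohibits_such_images:
        return False
    return True

# Example: a free user asking to edit a real person's photo into a bikini is refused.
print(is_request_allowed(ImageRequest(True, True, False, False)))  # False
```

However such checks are actually implemented, the company says the first one applies regardless of whether a user pays.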

The company's statement comes hours after the state of California opened an investigation into xAI and Grok over its handling of AI-generated nudity and child exploitation material. A statement from California Attorney General Rob Bonta cited one analysis that found "more than half of the 20,000 images generated by xAI between Christmas and New Years depicted people in minimal clothing," including some that appeared to be children.

In its update, X said that it has "zero tolerance" for child exploitation and that it removes "high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity" from its platform. Earlier in the day, Elon Musk said he was "not aware of any naked underage images generated by Grok." He later added that when its NSFW setting is enabled, "Grok is supposed [sic] allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV." He added that "this will vary in other regions" based on local laws.  

Malaysia and Indonesia both recently moved to block Grok, citing safety concerns and X’s handling of sexually explicit AI-generated material. In the UK, where regulator Ofcom is investigating X over Grok’s output, officials have said they would back a similar block of the chatbot.



California is investigating Grok over AI-generated CSAM and nonconsensual deepfakes

California authorities have launched an investigation into xAI following weeks of reports that its Grok chatbot was generating sexualized images of children. "xAI appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the internet, including via the social media platform X," California Attorney General Rob Bonta's office said in a statement.

The statement cited a report that "more than half of the 20,000 images generated by xAI between Christmas and New Years depicted people in minimal clothing," including some that appeared to be children. "We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material,” Bonta said. “Today, my office formally announces an investigation into xAI to determine whether and how xAI violated the law.”

The investigation was announced as California Governor Gavin Newsom also called on Bonta to investigate xAI. "xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile," Newsom wrote.

California authorities aren't the first to investigate the company following widespread reports of AI-generated child sexual abuse material (CSAM) and non-consensual intimate images of women. UK regulator Ofcom has also opened an official inquiry, and European Union officials have said they are also looking into  the issue. Malaysia and Indonesia have moved to block Grok. 

Last week, xAI began imposing rate limits on Grok's image generation abilities, but has so far declined to pull the plug entirely. When asked to comment on the California investigation, xAI responded with an automated email that said "Legacy Media Lies." 

Earlier on Wednesday, Elon Musk said he was "not aware of any naked underage images generated by Grok." Notably, that statement does not directly refute Bonta's allegation that Grok is being used "to alter images of children to depict them in minimal clothing and sexual situations." Musk said that "the operating principle for Grok is to obey the laws" and that the company works to address cases of "adversarial hacking of Grok prompts."


He could just turn it off

Generative AI, we are repeatedly told, is a transformative and complicated technology. So complicated that its own creators are unable to explain why it acts the way it does, and so transformative that we'd be fools to stand in the way of progress. Even when progress resembles a machine for undressing strangers without their consent on an unprecedented scale, as has been the case of late with Elon Musk's Grok chatbot. 

UK Prime Minister Keir Starmer seems to have so fully bought into the grand lie of the AI bubble that he was willing to announce:

"I have been informed this morning that X is acting to ensure full compliance with UK law."

Not that it is currently in compliance. Nor did he offer a timeline by which it is expected to be. Just that he seems satisfied that someday, eventually, Musk's pet robot will stop generating child sexual abuse material.

This statement comes just under two days after Starmer was quoted as saying "If X cannot control Grok, we will." What could Elon possibly have said to earn this pathetic capitulation? AI is difficult? Solutions take time?

These are entirely cogent technical arguments until you remember: He could just turn it off. 

Elon Musk has the power to disable Grok, if not in whole (we should be so lucky) then at least its image generation capabilities. We know this intuitively, but also because he rate-limited Grok's image generation after this latest scandal: after a few requests, free users are now prompted to pay $8 per month to continue enlisting a wasteful technology to remove articles of clothing from women. Sweep it under the rug, make a couple bucks along the way.

Not only is it entirely possible for image generation to be turned off, it's the only responsible option. Software engineers regularly roll back updates or turn off features that work less than optimally; this one's still up and running despite likely running afoul of the law. 

That we have now gone the better part of a month aware this problem exists, and that the "feature" still remains, should tell Starmer and others all they need to know. Buddy, you're carrying water for a bozo who does not seem to care that one such victim was reportedly Ashley St Clair, the mother of one of his (many) children.

Some countries — namely Malaysia and Indonesia — chose to turn Grok off for their citizens by blocking the service. Indonesia's Communication and Digital Affairs Minister was quoted as saying, "The government sees nonconsensual sexual deepfakes as a serious violation of human rights." Imagine if everyone in the business of statecraft felt that way.

The UK (not to mention the US, but please, expect nothing from us, we're busy doing authoritarianism) has a lot more sway over X, and by extension Elon, than either of those countries. Musk does business in the UK, and is looking to do even more of it. Even if Musk were not perhaps the world's most well-known liar, Grok can still make these images, and that should speak for itself. Grok should be well out of second chances by now, and it's up to government leaders to say no more until they can independently verify it's no longer capable of harm.


UK regulator Ofcom opens a formal investigation into X over CSAM scandal

The UK’s media regulator has opened a formal investigation into X under the Online Safety Act. "There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people — which may amount to intimate image abuse or pornography — and sexualized images of children that may amount to child sexual abuse material (CSAM)," Ofcom said.

The investigation will focus on whether X "has complied with its duties to protect people in the UK from content that is illegal in the UK." That includes whether X is taking appropriate measures to prevent UK users from seeing "priority" illegal content, such as CSAM and non-consensual intimate images; if the platform is removing illegal content quickly after becoming aware of it; and whether X carried out an updated risk assessment before making "any significant changes" to the platform. The probe will also consider whether X assessed the risk that its platform poses to UK children and if it has "highly effective age assurance to protect UK children from seeing pornography."

The regulator said it contacted X on January 5 and received a response by its January 9 deadline. Ofcom is conducting an "expedited assessment of available evidence as a matter of urgency" and added that it has asked xAI for "urgent clarification" on the steps the company is taking to protect UK users.

"Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning," an Ofcom spokesperson said. "Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children. We’ll progress this investigation as a matter of the highest priority, while ensuring we follow due process. As the UK’s independent online safety enforcement agency, it’s important we make sure our investigations are legally robust and fairly decided."

If Ofcom deems that a company has broken the law, it can "require platforms to take specific steps to come into compliance or to remedy harm caused by the breach." The regulator can additionally impose fines of up to £18 million ($24.3 million) or 10 percent of "qualifying" worldwide revenue, whichever of the two figures is higher. It can also seek a court order to stop payment providers or advertisers from working with a platform, or to require internet service providers to block a site in the UK. The UK government has said it would back any action that Ofcom takes against X.
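For anyone keeping score, that penalty cap reduces to taking the larger of two figures. The snippet below is an illustrative simplification only, not Ofcom's actual enforcement methodology, and the revenue figure in the example is a made-up placeholder.

```python
# Illustrative sketch of the Online Safety Act penalty cap described above:
# the greater of £18 million or 10 percent of "qualifying" worldwide revenue.
# This is a simplification for illustration, not Ofcom's enforcement process.

def osa_penalty_cap(qualifying_worldwide_revenue_gbp: float) -> float:
    """Return the statutory maximum fine: whichever is higher of £18M or 10% of revenue."""
    return max(18_000_000.0, 0.10 * qualifying_worldwide_revenue_gbp)

# Hypothetical example: £2 billion in qualifying worldwide revenue caps the fine
# at £200 million, since 10 percent of revenue exceeds the £18 million floor.
print(f"£{osa_penalty_cap(2_000_000_000):,.0f}")  # £200,000,000
```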

Reports over the weekend suggested that the UK had held discussions with allies over a coordinated response to Grok-generated deepfakes. Regulators elsewhere, including in India and the European Union, are also investigating X.

Last week, the Grok account on X started telling users that its image generation and editing tools were being limited to paying subscribers. But as of Monday it was still possible for non-paying users to generate images through the Grok tab on the X website and app. 

Meanwhile, Malaysia and Indonesia became the first countries to block Grok, claiming that X’s chatbot does not have sufficient safeguards in place to prevent explicit AI-generated deepfakes of women and children from being created and disseminated on X. Indonesia temporarily blocked access to Grok on Saturday, as did Malaysia on Sunday, the Associated Press reports. 

"The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space," Indonesia’s Communication and Digital Affairs Minister Meutya Hafid said in a statement. Officials in the country said initial findings showed that Grok lacks effective controls to prevent users from creating and sharing sexually explicit deepfakes based on photos of Indonesian residents. The country's director general of digital space supervision, Alexander Sabar, said generating deepfakes can violate individuals' image and privacy rights when photos are shared or manipulated without consent, adding that they can lead to reputational, social and psychological harm.

The Malaysian Communications and Multimedia Commission cited "repeated misuse" of Grok to generate explicit and non-consensual deepfakes, some of which involved women and children. The regulator said Grok will remain blocked in the country until X Corp and parent xAI establish strong enough safeguards.


Paris court finds 10 people guilty of cyberbullying Brigitte Macron

A Paris court has found 10 people guilty of cyberbullying Brigitte Macron, the wife of French President Emmanuel Macron, the BBC reports. The judge found that the defendants made false claims about Macron's gender and sexuality, and "malicious remarks" about the 24-year age gap between Macron and her spouse.

Of the ten defendants, only one received a firm prison sentence of six months, according to Le Monde. Eight others received suspended sentences (effectively probation in France) of four to eight months, while the tenth received a fine and was required to attend a sensitivity training course. Three of the “instigators” found guilty will lose access to their social media accounts for six months.

Key to the case is the fringe belief that Brigitte Macron was born a man — proponents for some reason believe Macron is Jean-Michel Trogneux, her older brother — and transitioned to living as a woman at some point later in life. This style of "transvestigation" is an unfortunately common type of online conspiracy theory, a roundabout way to both spread hateful rhetoric about transgender people and bully cisgender people. The campaign against Macron has the added twist of her age: Brigitte Macron is 72, 24 years older than President Macron. The pair married in 2007, but their age difference has been an ongoing narrative throughout Emmanuel Macron's political career.

In July 2025, Macron also filed a defamation lawsuit in the US against Candace Owens, a right-wing podcaster and conspiracy theorist. Owens has made multiple attempts since 2024 to spread false claims about Macron's gender, and has said that she's willing to stake her "entire professional reputation" that she's right.

Correction 01/06/26 8:43: A previous version of this article incorrectly stated the length of the sentences. We regret the error.


The UK government will ‘look into’ Rockstar’s firing of union-organizing workers

Rockstar Games may have to answer for what appears to be union-busting behavior. UK Prime Minister Keir Starmer, under pressure from parliament, said the government will "look into" the firing of 31 employees in October.

The sacked workers were all part of a private trade union chat group on Discord. The company claimed the firings were "for gross misconduct" and accused the workers of sharing confidential information outside of the company.

But based on what we know, it's hard to see that characterization as anything but union-busting in search of legal cover. The Independent Workers' Union of Great Britain (IWGB) described the case as "the most blatant and ruthless act of union busting in the history of the games industry."

In November, IWGB issued legal claims against the Grand Theft Auto developer. The next day, over 200 staff at Rockstar North signed a letter condemning the firings and pressuring management to reinstate the workers. Earlier that month, the fired workers and their supporters protested outside Rockstar North's Edinburgh headquarters. Others picketed in Paris, London and New York.

Fired workers and supporters protesting outside Rockstar North's headquarters (Photo: IWGB)

“It’s clear to everyone close to this situation that this is a blatant, unapologetic act of vicious union busting,” one of the fired staffers said anonymously in a November statement. “Rockstar employs so many talented game developers, all of whom are crucial to making the games we put out.”

Edinburgh East and Musselburgh MP Chris Murray, who prompted Starmer's response, said in parliament that he recently met with Rockstar to discuss the case. "The meeting only entrenched my concerns about the process Rockstar used to dismiss so many of their staff members," he said. "I was not assured their process paid robust attention to UK employment law, I was not convinced that this course of action was necessary, and alarmingly, I did not leave informed on exactly what these 31 people had done to warrant their immediate dismissal."

Murray added that Rockstar initially refused entry to the MPs unless they signed a non-disclosure agreement. The company eventually relented on that front.

On Wednesday, Murray triggered Starmer's response in parliament. The MP asked the Prime Minister if he agreed that "all companies, regardless of profit size, must follow UK employment law and all workers have the right to join a union?"

Starmer replied that he found the case "deeply concerning." He added that "every worker has the right to join a trade union, and we're determined to strengthen workers' rights and ensure they don't face unfair consequences for being part of a union. Our ministers will look into the particular case the member raises and will keep him updated."


Russia blocks Roblox, citing ‘LGBT propaganda’ as a reason

Russia has blocked the popular gaming platform Roblox, according to a report by Reuters. The country's communications watchdog Roskomnadzor accused the developers of distributing extremist materials and "LGBT propaganda." The agency went on to say that Roblox is "rife with inappropriate content that can negatively impact the spiritual and moral development of children."

This is just the latest move the country has taken against what it calls the "international LGBT movement." It recently pressured the language-learning app Duolingo into deleting references to what the country calls "non-traditional sexual relations."

Russian courts regularly issue fines to organizations that violate its "LGBT propaganda" law, which criminalizes the promotion of same-sex relationships. President Vladimir Putin has called the protection of gay and transgender rights a move "towards open satanism."

Roblox doesn't have an "LGBT propaganda" problem because there's no such thing, but the platform does have plenty of issues that Russia doesn't seem all that concerned about. It's a noted haven for child predators, which has caused other countries like Iraq and Turkey to ban the platform. To its credit, the company has begun cracking down on user-generated content and added new age-based restrictions.

Roblox is still one of the more popular entertainment platforms in the world. It averaged over 151 million daily active users in the third quarter of this year alone.
