Sam Bankman-Fried wants a re-trial for fraud charges

Former cryptocurrency poster boy Sam Bankman-Fried is trying to get another chance in court. He has filed a request for a new trial, claiming that new witness testimony could alter the case made against him by prosecutors, according to Bloomberg. His odds of getting the re-trial, where he'd be representing himself, seem pretty slim. This is a separate motion from a formal appeal of his previous conviction.

Bankman-Fried is one of many cryptocurrency leaders who have been prosecuted for fraud. After being jailed for witness tampering, he was found guilty of seven charges of fraud and conspiracy in 2023. Bankman-Fried was sentenced to 25 years in prison for his actions as CEO and co-founder of crypto exchange FTX.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/sam-bankman-fried-wants-a-re-trial-for-fraud-charges-185910093.html?src=rss

X’s Paris HQ raided by French prosecutors

Paris prosecutors announced that a search was underway at offices belonging to Elon Musk’s X platform as part of an ongoing investigation first launched in January 2025. The raid is being conducted by Paris and national cybercrime units, with support from Interpol, according to a post from Paris prosecutors on X. Officials from X have yet to comment on the matter.

At the same time, Paris prosecutors issued summonses to Elon Musk and Linda Yaccarino for “voluntary interviews” on April 20, 2026 in Paris. The prosecutors also announced they would no longer use X and would only communicate on LinkedIn and Instagram going forward.

The searches are part of an investigation that has been ongoing for nearly a year over the functioning of X’s algorithms that are “likely to have distorted the operation of an automated data processing system,” investigators said at the time. Those changes reportedly gave greater prominence to certain political content (especially from Musk) without user knowledge — something that could be a crime under French laws.

An investigation was officially launched in July, with Paris prosecutors adding an additional charge: “Fraudulent extraction of data from an automated data processing system by an organized group.” More recently, it also includes “complicity in the possession of images of minors representing a pedo-pornographic character,” due to images created by Grok between December 25, 2025 and January 1, 2026.

In July, X said in a statement that the probe “egregiously undermines X’s fundamental right to due process and threatens our users' rights to privacy and free speech. [French officials have] accused X of manipulating its algorithm for 'foreign interference' purposes, an allegation which is completely false.”

Update, Feb 3 2026, 4:00pm ET: X posted a lengthy statement on its Global Government Affairs account, calling the allegations “baseless” and stating the company “categorically denies any wrongdoing.” The company went on to describe the raid as “an abusive act of law enforcement theater designed to achieve illegitimate political objectives rather than advance legitimate law enforcement goals.”

This article originally appeared on Engadget at https://www.engadget.com/social-media/xs-paris-hq-raided-by-french-prosecutors-110411170.html?src=rss

Blizzard’s quality assurance workers finally have a union contract

Almost three years after starting the bargaining process with Microsoft, quality assurance workers at two Blizzard locations have ratified a union contract. The agreement covers 60 workers at Blizzard Albany and Blizzard Austin.

The agreement includes guaranteed pay increases across the three years of the contract, assurances that workers will be given fair credits and recognition on games that ship, discrimination-free disability accommodations, restrictions on crunch (i.e. mandatory overtime) and "protection to immigrant workers from unfair discipline and loss of seniority while streamlining legal verification." Stronger rules around the use of AI are included in the contract as well.

“At a time when layoffs are hitting our industry hard, today is another big step in building a better future for video game workers at every level,” Blizzard Albany quality analyst Brock Davis said in a statement. “For quality assurance testers, this contract provides us wages to live on, increased job security benefits and guardrails around artificial intelligence in the workplace.”

As with other unions in Microsoft's game divisions, the Blizzard QA workers organized with the Communications Workers of America. This marks the third union agreement at Microsoft after ZeniMax and Raven Software workers ratified contracts last summer. Several other Blizzard divisions have unionized within the last year, including the cinematics team, Overwatch developers and a unit that works on Diablo.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/blizzards-quality-assurance-workers-finally-have-a-union-contract-162614979.html?src=rss

Indonesia is lifting its ban on Grok, but with some conditions

Grok is once again available in Indonesia, after the country lifted its ban on the AI chatbot that was seen generating millions of sexualized deepfakes, thousands of which included children. The country's Ministry of Communication and Digital Affairs released a statement earlier today, which said X is allowed to resume service in Indonesia but will be subject to monitoring for any future violations.

According to the Indonesian government agency, X provided a letter that detailed several implemented measures that prevent the misuse of its Grok chatbot. Alexander Sabar, the ministry’s director general of digital space supervision, said in the statement that the agency will test the new measures on an ongoing basis and will ban Grok again if it's found spreading illegal content or violating the country's laws regarding children.

The issue dates back to earlier this year, when Indonesia, along with Malaysia and the Philippines, banned the AI chatbot after it was found producing sexually explicit deepfake images of women and children without their consent in response to user requests. Later that month, the Philippines lifted its ban on Grok, followed by Malaysia doing the same just a couple of days after. Similar to Indonesia, Malaysian authorities said they will continue to monitor Grok and threatened more enforcement actions if the AI chatbot repeats its past offenses. Beyond the bans, Grok is also facing investigations from California's attorney general and the UK's media regulator concerning the same issue.

This article originally appeared on Engadget at https://www.engadget.com/ai/indonesia-is-lifting-its-ban-on-grok-but-with-some-conditions-175305634.html?src=rss

Malaysia lifts ban on Grok after taking X at its word

After being one of the first countries in the world to block Elon Musk’s Grok chatbot, Malaysia has now lifted its ban. Along with Indonesia, the country moved swiftly to temporarily halt access to X's frequently controversial AI chatbot earlier this month, after multiple reports emerged of it being used to generate deepfake sexualized images of people, including women and children.

At the time, the Malaysian Communications and Multimedia Commission (MCMC) said the restrictions would remain in place until X Corp and parent xAI could prove they had enforced the necessary safeguards against misuse of the above nature.

Malaysian authorities appear to be taking X at its word, after the MCMC released a statement confirming it was satisfied that Musk’s company has implemented the required safety measures. It added that the authorities will continue to monitor the social media platform, and that any further user safety breaches or violations of Malaysian laws would be dealt with firmly.

At the time of writing, only Malaysia and Indonesia have hit Grok with official bans, though UK regulator Ofcom opened a formal investigation into X under the country’s Online Safety Act, in the wake of the non-consensual sexual deepfake scandal. X has since changed its image-editing policies, and on January 14 the company said Grok will no longer allow "the editing of images of real people in revealing clothing such as bikinis."

Earlier this week, the UK-based non-profit Center for Countering Digital Hate (CCDH) estimated that in the 11-day period between December 29 and January 9, Grok generated approximately 3 million sexualized images, around 23,000 of which were of children.

This article originally appeared on Engadget at https://www.engadget.com/ai/malaysia-lifts-ban-on-grok-after-taking-x-at-its-word-144457468.html?src=rss

California AG sends cease and desist to xAI over Grok’s explicit deepfakes

California Attorney General Rob Bonta has sent a cease and desist letter to xAI, days after his office launched an official investigation into the company over reports that Grok was generating nonconsensual sexually explicit deepfakes.

If you’ll recall, xAI and Grok have been under fire for taking images of real individuals and putting them in revealing clothing like bikinis upon random users’ requests.

Bonta’s office demands that xAI immediately cease and desist from creating “digitized sexually explicit material” when the depicted individual didn’t consent to it or if the individual is a minor. It also demanded that xAI stop “facilitating or aiding and abetting the creation… or publication of digitized sexually explicit material” of nonconsenting individuals and persons under 18 years of age.

X changed its policies after the issue broke out and prevented the Grok account from being able to edit images of real people into revealing clothing. xAI also moved Grok’s image-generating features behind a paywall and geoblocked paying users’ ability to edit images of real people into bikinis, but only in regions where it’s illegal.

In his announcement, Bonta said xAI developed a “spicy mode” for Grok to generate explicit content and used it as a marketing point. The California AG also said that Grok-generated sexual images are being used to harass both public figures and ordinary users. “Most alarmingly, news reports have described the use of Grok to alter images of children to depict them in minimal clothing and sexual situations,” Bonta’s announcement reads.

“The actions above violate California law, including California Civil Code section 1708.86, California Penal Code sections 311 et seq. and 647(j)(4), and California Business & Professions Code section 17200,” it said. The state’s Department of Justice now expects to hear from xAI on the steps it’s taking to address these issues within the next five days.

This article originally appeared on Engadget at https://www.engadget.com/ai/california-ag-sends-cease-and-desist-to-xai-over-groks-explicit-deepfakes-140000574.html?src=rss

TikTok sued by former workers over alleged union-busting

You know things are messed up when a Big Tech company fights accusations of union-busting by insisting the firings were merely AI-driven layoffs. That's where things stand after a group of fired TikTok moderators in the UK filed a legal claim with an employment tribunal. The Guardian reported on Friday that around 400 TikTok content moderators who were unionizing were laid off before Christmas.

The workers were sacked a week before a vote was scheduled to establish a collective bargaining unit. The moderators said they wanted better protection against the personal toll of processing traumatic content at a high speed. They accused TikTok of unfair dismissal and violating UK trade union laws.

"Content moderators have the most dangerous job on the internet," John Chadfield, the national officer for tech workers at the Communication Workers Union (CWU), said in a statement to The Guardian. "They are exposed to the child sex abuse material, executions, war and drug use. Their job is to make sure this content doesn't reach TikTok's 30 million monthly users. It is high pressure and low paid. They wanted input into their workflows and more say over how they kept the platform safe. They said they were being asked to do too much with too few resources."

TikTok denied that the firings were union-busting, calling the accusations "baseless." Instead, the company claimed the layoffs were part of a restructuring plan amid its adoption of AI for content moderation. The company said 91 percent of transgressive content is now removed automatically.

The company first announced a restructuring exercise in August, just as hundreds of moderators in TikTok's London offices were organizing for union recognition. At the time, John Chadfield, CWU's National Officer for Tech, said the workers had long been "sounding the alarm over the real-world costs of cutting human moderation teams in favour of hastily developed, immature AI alternatives."

"That TikTok management have announced these cuts just as the company's workers are about to vote on having their union recognised stinks of union-busting and putting corporate greed over the safety of workers and the public,” Chadfield said.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/tiktok-sued-by-former-workers-over-alleged-union-busting-170446921.html?src=rss

X says Grok will no longer edit images of real people into bikinis

X says it is changing its policies around Grok’s image-editing abilities following a multi-week outcry over the chatbot repeatedly being accused of generating sexualized images of children and nonconsensual nudity. In an update shared from the @Safety account on X, the company said it has “implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis.”

The new safeguards, according to X, will apply to all users regardless of whether they pay for Grok. xAI is also moving all of Grok’s image-generating features behind its subscriber paywall so that non-paying users will no longer be able to create images. And it will geoblock "the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X" in regions where it's illegal.

The company's statement comes hours after the state of California opened an investigation into xAI and Grok over its handling of AI-generated nudity and child exploitation material. A statement from California Attorney General Rob Bonta cited one analysis that found "more than half of the 20,000 images generated by xAI between Christmas and New Years depicted people in minimal clothing," including some that appeared to be children.

In its update, X said that it has "zero tolerance" for child exploitation and that it removes "high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity" from its platform. Earlier in the day, Elon Musk said he was "not aware of any naked underage images generated by Grok." He later added that when its NSFW setting is enabled, "Grok is supposed [sic] allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV." He added that "this will vary in other regions" based on local laws.  

Malaysia and Indonesia both recently moved to block Grok citing safety concerns and its handling of sexually explicit AI-generated material. In the UK, where regulator Ofcom is also investigating xAI and Grok, officials have also said they would back a similar block of the chatbot. 


This article originally appeared on Engadget at https://www.engadget.com/ai/x-says-grok-will-no-longer-edit-images-of-real-people-into-bikinis-231430257.html?src=rss

California is investigating Grok over AI-generated CSAM and nonconsensual deepfakes

California authorities have launched an investigation into xAI following weeks of reports that the chatbot was generating sexualized images of children. "xAI appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the internet, including via the social media platform X," California Attorney General Rob Bonta's office said in a statement.

The statement cited a report that "more than half of the 20,000 images generated by xAI between Christmas and New Years depicted people in minimal clothing," including some that appeared to be children. "We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material,” Bonta said. “Today, my office formally announces an investigation into xAI to determine whether and how xAI violated the law.”

The investigation was announced as California Governor Gavin Newsom also called on Bonta to investigate xAI. "xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile," Newsom wrote.

California authorities aren't the first to investigate the company following widespread reports of AI-generated child sexual abuse material (CSAM) and non-consensual intimate images of women. UK regulator Ofcom has also opened an official inquiry, and European Union officials have said they are also looking into the issue. Malaysia and Indonesia have moved to block Grok.

Last week, xAI began imposing rate limits on Grok's image generation abilities, but has so far declined to pull the plug entirely. When asked to comment on the California investigation, xAI responded with an automated email that said "Legacy Media Lies." 

Earlier on Wednesday, Elon Musk said he was "not aware of any naked underage images generated by Grok." Notably, that statement does not directly refute Bonta's allegation that Grok is being used "to alter images of children to depict them in minimal clothing and sexual situations." Musk said that "the operating principle for Grok is to obey the laws" and that the company works to address cases of "adversarial hacking of Grok prompts."

This article originally appeared on Engadget at https://www.engadget.com/ai/california-is-investigating-grok-over-ai-generated-csam-and-nonconsensual-deepfakes-202029635.html?src=rss

He could just turn it off

Generative AI, we are repeatedly told, is a transformative and complicated technology. So complicated that its own creators are unable to explain why it acts the way it does, and so transformative that we'd be fools to stand in the way of progress. Even when progress resembles a machine for undressing strangers without their consent on an unprecedented scale, as has been the case of late with Elon Musk's Grok chatbot. 

UK Prime Minister Keir Starmer seems to have so fully bought into the grand lie of the AI bubble that he was willing to announce:

"I have been informed this morning that X is acting to ensure full compliance with UK law."

Not that it currently is in compliance. Nor did he offer a timeline for when it is expected to comply. Just that he seems satisfied that someday, eventually, Musk's pet robot will stop generating child sexual abuse material.

This statement comes just under two days after Starmer was quoted as saying "If X cannot control Grok, we will." What could Elon possibly have said to earn this pathetic capitulation? AI is difficult? Solutions take time?

These are entirely cogent technical arguments until you remember: He could just turn it off. 

Elon Musk has the power to disable Grok, if not in whole (we should be so lucky) then at least its image generation capabilities. We know this intuitively, but also because he rate-limited Grok's image generation after this latest scandal: after a few requests, free users are now prompted to pay $8 per month to continue enlisting a wasteful technology to remove articles of clothing from women. Sweep it under the rug, make a couple bucks along the way.

Not only is it entirely possible for image generation to be turned off, it's the only responsible option. Software engineers regularly roll back updates or turn off features that work less than optimally; this one's still up and running despite likely running afoul of the law. 

That we have now gone the better part of a month aware this problem exists, and that the "feature" still remains, should tell Starmer and others all they need to know. Buddy, you're carrying water for a bozo who does not seem to care that one such victim was reportedly Ashley St Clair, the mother of one of his (many) children.

Some countries — namely Malaysia and Indonesia — chose to turn Grok off for their citizens by blocking the service. Indonesia's Communication and Digital Affairs Minister was quoted as saying “The government sees nonconsensual sexual deepfakes as a serious violation of human rights." Imagine if everyone in the business of statecraft felt that way. 

The UK (not to mention the US, but please, expect nothing from us, we're busy doing authoritarianism) has a lot more sway over X, and by extension Elon, than either of those countries. Musk does business in the UK, and is looking to do even more. Even if Musk were not perhaps the world's most well-known liar, Grok can still make images, and that should speak for itself. Grok should be well out of second chances by now, and it's up to government leaders to say no more until they can independently verify it's no longer capable of harm.

This article originally appeared on Engadget at https://www.engadget.com/he-could-just-turn-it-off-180209551.html?src=rss