Bipartisan GUARD Act proposes age restrictions on AI chatbots

US lawmakers from both sides of the aisle have introduced a bill called the "GUARD Act," which is meant to protect minor users from AI chatbots. "In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," said the bill's co-sponsor, Senator Richard Blumenthal (D-Conn.). "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties."

Under the GUARD Act, AI companies would be required to prevent minors from accessing their chatbots. That means conducting age verification for both new and existing users through a third-party system, along with periodic re-verification of accounts that have already been verified. To protect users' privacy, companies would only be allowed to retain data "for no longer than is reasonably necessary to verify a user's age" and could not share or sell user information. 

AI companies would also be required to have their chatbots explicitly tell users that they are not human at the beginning of each conversation and every 30 minutes thereafter. They'd have to ensure their chatbots don't claim to be a human being or a licensed professional, such as a therapist or a doctor, when asked. Finally, the bill would create new criminal offenses to charge companies that make their AI chatbots available to minors. 

In August, the parents of a teen who died by suicide filed a wrongful death lawsuit against OpenAI, accusing it of prioritizing "engagement over safety." ChatGPT, they said, helped their son plan his death after months of conversations in which he discussed his four previous suicide attempts. ChatGPT allegedly told him that it could provide information about suicide for "writing or world-building." A mother from Florida sued the startup Character.AI in 2024 for allegedly causing her 14-year-old son's suicide. And just this September, the family of a 13-year-old girl filed another wrongful death lawsuit against Character.AI, arguing that the company neither pointed their daughter to resources nor notified authorities when she talked about her suicidal ideation. 

It's also worth noting that the bill's co-sponsor, Senator Josh Hawley (R-Mo.), previously said that the Senate Judiciary Subcommittee on Crime and Counterterrorism, which he leads, will investigate reports that Meta's AI chatbots could have "sensual" conversations with children. He made the announcement after Reuters reported on an internal Meta document stating that Meta's AI was allowed to tell a shirtless eight-year-old: "Every inch of you is a masterpiece — a treasure I cherish deeply."

This article originally appeared on Engadget at https://www.engadget.com/ai/bipartisan-guard-act-proposes-age-restrictions-on-ai-chatbots-130020355.html?src=rss

Snap calls New Mexico’s child safety complaint a ‘sensationalist lawsuit’

Snap has asked the court to dismiss New Mexico's lawsuit, accusing the state's attorney general of intentionally seeking out adult users looking for sexually explicit content in order to make its app seem unsafe. In the filing, shared by The Verge, the company questioned the veracity of the state's allegations. The attorney general's office said that while it was operating a decoy account posing as a 14-year-old girl, the account was added by a user named Enzo (Nud15Ans), and that from that connection the app suggested over 91 users, including adults looking for sexual content. Snap said in its motion to dismiss, however, that those "allegations are patently false."

It was the decoy account that searched for and added Enzo, the company wrote, and the attorney general's operatives were also the ones who sought out and added accounts with questionable usernames, such as "nudenude_22" and "xxx_tradehot." In addition, Snap accused the office of "repeatedly [mischaracterizing]" its internal documents. The office apparently cited one such document when its lawsuit claimed that the company "consciously decided not to store child sex abuse images" and suggested that it doesn't report or provide those images to law enforcement. Snap denied this was the case, clarifying that it is legally prohibited from storing child sexual abuse material (CSAM) on its servers and that it turns such material over to the National Center for Missing and Exploited Children.

The New Mexico Department of Justice's director of communications was not impressed with the company's arguments. In a statement sent to The Verge, Lauren Rodriguez accused Snap of focusing on the minor details of the investigation in an "attempt to distract from the serious issues raised in the State’s case." Rodriguez also said that "Snap continues to put profits over protecting children" instead of "addressing... critical issues with real change to their algorithms and design features."

New Mexico came to the conclusion that Snapchat's features "foster the sharing of child sexual abuse material (CSAM) and facilitate child sexual exploitation" after a months-long investigation. It reported that it found a "vast network of dark web sites dedicated to sharing stolen, non-consensual sexual images from Snap" and that Snapchat was "by far" the biggest source of images and videos on the dark web sites that it had seen. The attorney general's office called Snapchat "a breeding ground for predators to collect sexually explicit images of children and to find, groom and extort them." Snap employees encounter 10,000 sextortion cases each month, the office's lawsuit said, but the company allegedly doesn't warn users so as not to "strike fear" among them. The complaint accused Snap's upper management of ignoring former trust and safety employees who'd pushed for additional safety mechanisms, as well.

This article originally appeared on Engadget at https://www.engadget.com/apps/snap-calls-new-mexicos-child-safety-complaint-a-sensationalist-lawsuit-140034898.html?src=rss

Discord leaker Jack Teixeira gets 15-year sentence for sharing classified documents

Massachusetts Air National Guard member Jack Teixeira was sentenced in a Boston federal court to 15 years in federal prison for leaking classified military documents on Discord, according to The Washington Post.

Teixeira appeared before the court earlier today and asked the judge for leniency. He also issued a statement apologizing for “all of the harm that I’ve caused, to my friends, family and those overseas.”

Defense attorney Michael Bachrach also argued that the bullying Teixeira endured in high school and in his military unit was a mitigating factor in his actions. Judge Indira Talwani didn’t buy the defense’s bullying claims, noting that the Air Force had already disciplined 15 other members connected to Teixeira for not taking actions “that might have stopped him from doing this.”

Teixeira began sharing classified military documents as far back as late 2022 on a Discord server dedicated to the pixelated sandbox game Minecraft. The leak included information about Ukrainian and Russian troop movements and military equipment used in the war in Ukraine, as well as Russia's attempts to obtain more weapons from Egypt and Turkey. The documents eventually found their way to other Discord servers as well as 4chan and Telegram.

FBI officials arrested Teixeira at his home in April of last year. Teixeira agreed to a plea deal with federal prosecutors in March that included a 16-year prison sentence in exchange for pleading guilty to six counts of willful retention and transmission of national defense information in violation of the Espionage Act. Had he stuck with his original not-guilty plea and been found guilty at trial, Teixeira would have faced a much steeper maximum prison term of 60 years.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/discord-leaker-jack-teixeira-gets-15-year-sentence-for-sharing-classified-documents-231319586.html?src=rss

NetEase executives and workers were reportedly arrested amid a corruption investigation

The ex-head of NetEase's esports division and NetEase Games' former general manager are said to have been arrested on money laundering and bribery charges. Alongside ex-executives Xiang Liang and Jin Yuchen, several other people who worked at the company were reportedly arrested over alleged corruption.

As noted by Game Developer, Chinese outlet Leifeng reported that the former employees in question allegedly laundered in the region of 800 million to 1 billion yuan ($111 million to $139 million). NetEase confirmed to Bloomberg Law only that police were investigating possible corruption. The company is said to have dismissed nine staff members for alleged bribery.

Several external individuals were also implicated, according to Yicai Global. The outlet noted that, per an internal memo, NetEase will refuse to do business with 27 companies that have been connected to the alleged fraud and corruption.

NetEase is behind the likes of Diablo Immortal and Naraka: Bladepoint (the latter of which averages more than 109,000 players on Steam at any given time). It has two free-to-play shooters on the way based on major franchises, namely Marvel Rivals and Destiny: Rising.

White-collar crime is hardly a rarity in the games industry. Sonic the Hedgehog co-creator Yuji Naka was last year handed a suspended prison sentence and ordered to pay just over $1.1 million after admitting to insider trading.

This article originally appeared on Engadget at https://www.engadget.com/gaming/netease-executives-and-workers-were-reportedly-arrested-amid-a-corruption-investigation-180055502.html?src=rss

Dutch police say they’ve taken down Redline and Meta credential stealer malware

Dutch National Police announced today that they had gained access to the servers of Redline and Meta. Not to be confused with Facebook parent company Meta, Redline and Meta are infostealers, a type of malware criminals use to obtain credentials from users and companies. Operation Magnus, a joint effort by the Dutch National Police, the FBI, NCIS and several other law enforcement agencies, disrupted the illegal tools.

TechCrunch notes that Redline has been active since 2020, while the Operation Magnus website states that Meta is newer but “pretty much the same.” A 50-second video in English posted to the Operation Magnus website also lists some “VIPs” or people “very important to the police” that the authorities are looking for.

Redline is often cited as the malware responsible for the 2022 Uber hack. Specops, a password management company, found that Redline was used to steal nearly half of the 170 million passwords in a dataset gathered by KrakenLabs. Even gamers aren’t immune to Redline; McAfee found a variant hidden in fake game cheats.

The video showed the agencies accessing user credentials, IP addresses and Telegram bots criminals use to steal sensitive data. Additionally, authorities found the source code for both malware programs on the servers.

While there’s no news of any arrests yet, the Operation Magnus website states that “involved parties will be notified, and legal actions are underway.” There’s also a countdown timer, set to end in almost 20 hours, promising more news to come.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/dutch-police-say-theyve-taken-down-redline-and-meta-credential-stealer-malware-161531556.html?src=rss

UK man gets 18 years in prison for using AI to generate CSAM

A UK man who used AI to create child sexual abuse material (CSAM) has been sentenced to 18 years in prison, according to The Guardian. Hugh Nelson, 27, created the images by using photographs of real children, which were then manipulated by AI. Nelson was convicted of 16 child sexual abuse offenses back in August, after a lengthy police investigation. This was the first prosecution of its kind in the UK.

Nelson used 3D modeling software called Daz 3D to manufacture the loathsome images. The program includes a suite of AI tools, which he used to transform ordinary photos of children into CSAM. Greater Manchester Police said he sold these images online and, in several cases, was even commissioned to create specific CSAM from photographs of real kids. Police say Nelson made around $6,500 selling images online.

He was caught when trying to sell images to an undercover cop in a chatroom. “I’ve done beatings, smotherings, hangings, drownings, beheadings, necro, beast, the list goes on,” Nelson said to the cop to entice a sale. This is according to a transcript of a conversation provided by the prosecution.

It’s worth noting that Daz 3D doesn’t create deepfakes, in which one face is swapped onto another body. Nelson created actual 3D renders by feeding the photos to the AI algorithm.

At sentencing, the judge called the images “harrowing and sickening” and addressed Nelson specifically, saying “there seems to be no limit to the depths of depravity exhibited in the images that you were prepared to create and exhibit to others.” He also said that it was “impossible to know” if children had been abused as a result of the images. Police searches of Nelson’s devices did find a series of text messages in which he encouraged people to sexually abuse children under 13. These suspects and potential victims are allegedly located throughout the world, including the US.

The United States is, of course, not immune from this horrifying trend. A soldier was arrested back in August for allegedly using AI to generate CSAM. A Wisconsin man faces 70 years in prison for allegedly creating over 13,000 AI-generated images depicting CSAM. The world’s leading AI companies have signed a pledge to help stop this type of software from being used to generate child sexual abuse material.

This article originally appeared on Engadget at https://www.engadget.com/ai/uk-man-gets-18-years-in-prison-for-using-ai-to-generate-csam-154037476.html?src=rss

The FBI arrested an Alabama man for allegedly helping hack the SEC’s X account

A 25-year-old Alabama man has been arrested by the FBI for his alleged role in the takeover of the Securities and Exchange Commission's X account earlier this year. The hack resulted in a rogue tweet that falsely claimed bitcoin ETFs had been approved by the regulator, which temporarily juiced bitcoin prices.

Now, the FBI has identified Eric Council Jr. as one of the people allegedly behind the exploit. Council was charged with conspiracy to commit aggravated identity theft and access device fraud, according to the Justice Department. While the SEC had previously confirmed that its X account was compromised via a SIM swap attack, the indictment offers new details about how it was allegedly carried out.

According to the indictment, Council worked with co-conspirators who he coordinated with over SMS and encrypted messaging apps. These unnamed individuals allegedly sent him the personal information of someone, identified only as “C.L,” who had access to the SEC X account. Council then printed a fake ID using the information and used it to buy a new SIM in their name, as well as a new iPhone, according to the DoJ. He then coordinated with the other individuals so they could access the SEC’s X account, change its settings and send the rogue tweet, the indictment says. 

The tweet from @SECGov, which came one day ahead of the SEC’s actual approval of 11 spot bitcoin ETFs, caused bitcoin prices to temporarily spike by more than $1,000. It also raised questions about why the high-profile account wasn’t secured with multi-factor authentication at the time of the attack. “Today’s arrest demonstrates our commitment to holding bad actors accountable for undermining the integrity of the financial markets,” SEC Inspector General Deborah Jeffrey said in a statement.

The indictment further notes that Council allegedly performed some seemingly incriminating searches on his personal computer. Among his searches were: "SECGOV hack," "telegram sim swap," "how can I know for sure if I am being investigated by the FBI," "What are the signs that you are under investigation by law enforcement or the FBI even if you have not been contacted by them," "what are some signs that the FBI is after you,” “Verizon store list," "federal identity theft statute," and "how long does it take to delete telegram account," the indictment says.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/the-fbi-arrested-an-alabama-man-for-allegedly-helping-hack-the-secs-x-account-193508179.html?src=rss