How to watch the CEOs of Meta, TikTok, Discord, Snap and X testify about child safety

The CEOs of five social media companies are headed to Washington to testify in a Senate Judiciary Committee hearing about child safety. The hearing will feature Meta CEO Mark Zuckerberg, Snap CEO Evan Spiegel, TikTok CEO Shou Chew, Discord CEO Jason Citron and X CEO Linda Yaccarino.

The group will face off with lawmakers over their records on child exploitation and their efforts to protect teens using their services. The hearing will be livestreamed beginning at 10AM ET on Wednesday, January 31.

Though there have been previous hearings dedicated to teen safety, Wednesday’s event will be the first time Congress has heard directly from Spiegel, Yaccarino and Citron. It’s also only the second appearance for TikTok’s Chew, who was grilled by lawmakers about the app’s safety record and ties to China last year.

Zuckerberg, of course, is well-practiced at these hearings by now. But he will likely face particular pressure from lawmakers following a number of allegations about Meta’s safety practices that have come out in recent months as the result of a lawsuit from 41 state attorneys general. Court documents from the suit allege that Meta turned a blind eye to children under 13 using its services, that it did little to stop adults from sexually harassing teens on Facebook and that Zuckerberg personally intervened to stop an effort to ban plastic surgery filters on Instagram.

As with previous hearings with tech CEOs, it’s unclear what meaningful policy changes might come from their testimony. Lawmakers have proposed a number of bills dealing with online safety and child exploitation, though none have been passed into law. However, there is growing bipartisan support for measures that would shield teens from algorithms and data gathering and implement parental consent requirements.

This article originally appeared on Engadget at https://www.engadget.com/how-to-watch-the-ceos-of-meta-tiktok-discord-snap-and-x-testify-about-child-safety-214210385.html?src=rss

X plans to hire 100 content moderators to fill new Trust and Safety center in Austin

X’s head of business operations Joe Benarroch said the company plans to open a new office in Austin, Texas, for a team that will be dedicated to content moderation, Bloomberg reports. The “Trust and Safety center of excellence,” for which the company is planning to hire 100 full-time employees, will primarily focus on stopping the spread of child sexual exploitation (CSE) materials.

X CEO Linda Yaccarino is set to testify before Congress on Wednesday in a hearing about CSE, and the platform at the end of last week published a blog post about its efforts to curb such materials, saying it’s “determined to make X inhospitable for actors who seek to exploit minors.”

According to Bloomberg, Benarroch said, “X does not have a line of business focused on children, but it’s important that we make these investments to keep stopping offenders from using our platform for any distribution or engagement with CSE content.” The team will also address other content issues, like hate speech and “violent posts,” according to Bloomberg. Elon Musk spent much of his first year at X taking steps to turn the platform into a bastion of “free speech,” gutting the content moderation teams that Twitter had put in place before his takeover.

This article originally appeared on Engadget at https://www.engadget.com/x-plans-to-hire-100-content-moderators-to-fill-new-trust-and-safety-center-in-austin-173111536.html?src=rss

ElevenLabs reportedly banned the account that deepfaked Biden’s voice with its AI tools

ElevenLabs, an AI startup that offers voice cloning services, has banned the user who created an audio deepfake of Joe Biden used in an attempt to disrupt the elections, according to Bloomberg. The audio impersonating the president was used in a robocall that went out to some voters in New Hampshire last week, telling them not to vote in their state's primary. It initially wasn't clear what technology was used to copy Biden's voice, but a thorough analysis by security company Pindrop showed that the perpetrators used ElevenLabs' tools.

The security firm removed the background noise and cleaned the robocall's audio before comparing it to samples from more than 120 voice synthesis technologies used to generate deepfakes. Pindrop CEO Vijay Balasubramaniyan told Wired that it "came back well north of 99 percent that it was ElevenLabs." Bloomberg says the company was notified of Pindrop's findings and is still investigating, but it has already identified and suspended the account that made the fake audio. ElevenLabs told the news organization that it can't comment on the issue itself, but that it's "dedicated to preventing the misuse of audio AI tools and [that it takes] any incidents of misuse extremely seriously."
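The attribution approach described above, matching a suspect recording against samples from many known voice synthesis systems and scoring how closely each matches, can be illustrated with a minimal sketch. This is not Pindrop's actual method (which is proprietary); the feature vectors, engine names and `attribute_audio` function here are hypothetical stand-ins assuming each synthesis system leaves a characteristic "fingerprint" that can be reduced to a numeric vector:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def attribute_audio(sample_features, reference_features):
    """Return the candidate synthesis system whose reference vector best
    matches the sample's feature vector, plus the similarity score."""
    best_system, best_score = None, -1.0
    for system, features in reference_features.items():
        score = cosine_similarity(sample_features, features)
        if score > best_score:
            best_system, best_score = system, score
    return best_system, best_score

# Hypothetical fingerprints for two synthesis engines.
references = {
    "engine_a": [1.0, 0.0, 0.0],
    "engine_b": [0.0, 1.0, 0.0],
}
```

A real pipeline would first denoise the audio (as Pindrop reportedly did) and extract features with a trained model; the "well north of 99 percent" figure quoted above corresponds to a very high match score against one candidate system.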

The deepfaked Biden robocall shows how technologies that can mimic somebody else's likeness and voice could be used to manipulate voters in this year's US presidential election. "This is kind of just the tip of the iceberg in what could be done with respect to voter suppression or attacks on election workers," Kathleen Carley, a professor at Carnegie Mellon University, told The Hill. "It was almost a harbinger of what all kinds of things we should be expecting over the next few months."

It only took the internet a few days after ElevenLabs launched the beta version of its platform to start using it to create audio clips that sound like celebrities reading or saying something questionable. The startup allows customers to use its technology to clone voices for "artistic and political speech contributing to public debates." Its safety page does warn users that they "cannot clone a voice for abusive purposes such as fraud, discrimination, hate speech or for any form of online abuse without infringing the law." But clearly, it needs to put more safeguards in place to prevent bad actors from using its tools to influence voters and manipulate elections around the world. 

This article originally appeared on Engadget at https://www.engadget.com/elevenlabs-reportedly-banned-the-account-that-deepfaked-bidens-voice-with-its-ai-tools-083355975.html?src=rss

23andMe’s data hack went unnoticed for months

In late 2023, genetic testing company 23andMe admitted that its customer data was leaked online. A company representative told us back then that the bad actors were able to access the DNA Relatives profile information of roughly 5.5 million customers and the Family Tree profile information of 1.4 million DNA Relatives participants. Now, the company has revealed more details about the incident in a legal filing, where it said that the hackers started breaking into customer accounts in late April 2023. The intrusions continued for months, until September 2023, before the company finally discovered the breach.

23andMe's filing contains the letters it sent customers who were affected by the incident. In the letters, the company explained that the attackers used a technique called credential stuffing, which entailed using previously compromised login credentials to access customer accounts through its website. The company didn't notice anything wrong until after a user posted a sample of the stolen data on the 23andMe subreddit in October. As TechCrunch notes, hackers had already advertised the stolen data on a hacking forum months earlier, in August, but 23andMe didn't catch wind of that post. The stolen information included customer names, birth dates, ancestry and health-related data.
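Credential stuffing works because people reuse passwords: credentials leaked from one service are replayed wholesale against another, and every account whose owner reused the same password falls. The toy sketch below illustrates that mechanic; the emails, passwords and `stuffing_hits` function are all hypothetical, and real attacks run automated login attempts at scale rather than comparing dictionaries:

```python
# Credentials exposed in an unrelated, earlier breach (email -> password).
breached_credentials = {
    "alice@example.com": "hunter2",
    "bob@example.com": "correcthorse",
}

# Accounts on the targeted service.
target_accounts = {
    "alice@example.com": "hunter2",      # reused password: vulnerable
    "bob@example.com": "uniquepass123",  # unique password: safe
}

def stuffing_hits(breached, accounts):
    """Return the accounts an attacker could take over by replaying
    breached credentials, i.e. wherever the password was reused."""
    return [email for email, password in breached.items()
            if accounts.get(email) == password]
```

This is also why the standard defenses are unique passwords, two-factor authentication and server-side checks of login attempts against known-breach corpuses: none of the stolen 23andMe credentials had to come from 23andMe itself.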

23andMe advised affected users to change their passwords after disclosing the data breach. But before sending out letters to customers, the company changed the language in its terms of service in a way that reportedly made it harder for people affected by the incident to join forces and take legal action against the company.

This article originally appeared on Engadget at https://www.engadget.com/23andmes-data-hack-went-unnoticed-for-months-081332978.html?src=rss