The Justice Department sues TikTok for breaking child privacy laws

The US Department of Justice is suing TikTok for violating a child privacy law and breaching a 2019 agreement with the Federal Trade Commission that settled previous privacy violations. The lawsuit stems from an earlier FTC investigation into the company; the commission referred its privacy case to the DoJ earlier this year.

The FTC had been looking into whether TikTok had violated the terms of an earlier privacy settlement with Musical.ly, which was acquired by ByteDance prior to the launch of TikTok. According to the FTC, the investigation found that TikTok had “flagrantly” violated both the 2019 settlement and the Children's Online Privacy Protection Act (COPPA).

In a statement, the Justice Department also cited TikTok’s collection of personal information about children on its platform and its failure to comply with the requests for the information to be deleted.

From 2019 to the present, TikTok knowingly permitted children to create regular TikTok accounts and to create, view, and share short-form videos and messages with adults and others on the regular TikTok platform. The defendants collected and retained a wide variety of personal information from these children without notifying or obtaining consent from their parents. Even for accounts that were created in “Kids Mode” (a pared-back version of TikTok intended for children under 13), the defendants unlawfully collected and retained children’s email addresses and other types of personal information. Further, when parents discovered their children’s accounts and asked the defendants to delete the accounts and information in them, the defendants frequently failed to honor those requests. The defendants also had deficient and ineffectual internal policies and processes for identifying and deleting TikTok accounts created by children.

In a statement, TikTok said it took issue with the allegations, saying it had previously addressed some of the conduct described by the Justice Department. “We disagree with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed,” the company said. “We are proud of our efforts to protect children, and we will continue to update and improve the platform. To that end, we offer age-appropriate experiences with stringent safeguards, proactively remove suspected underage users, and have voluntarily launched features such as default screentime limits, Family Pairing, and additional privacy protections for minors.”

The lawsuit comes at a particularly inconvenient time for TikTok, which is set to face off with the Justice Department in federal court next month over a law that aims to force ByteDance to sell the app or face a ban in the United States.

This article originally appeared on Engadget at https://www.engadget.com/the-justice-department-sues-tiktok-for-breaking-child-privacy-laws-190456433.html?src=rss

Turkey has blocked Instagram amid a dispute over Hamas-related content

Instagram is blocked in Turkey amid a dispute over Hamas-related content on the platform. The app has been inaccessible in the country since Friday morning. Netblocks, an organization that tracks internet and social media outages, confirmed that Instagram had been restricted in the country.

Turkish regulators didn’t specify why the block was in place but, as Bloomberg reports, the crackdown on Instagram appears to be related to its handling of Hamas-related posts on the platform. On Friday, Turkey’s head of communications, Fahrettin Altun, said in a post on X that Instagram “is actively preventing people from posting messages of condolences” for Ismail Haniyeh, the Hamas leader who was killed earlier this week.

Meta hasn’t publicly commented on the block.

It’s not the first time Turkish authorities have blocked a major social media service. Twitter was briefly blocked in the country last year following a devastating earthquake that killed thousands of people. YouTube and Twitter were also blocked in 2014.

This article originally appeared on Engadget at https://www.engadget.com/turkey-has-blocked-instagram-amid-a-dispute-over-hamas-related-content-175934777.html?src=rss

Meta’s Threads has 200 million users

The Threads app has passed the 200 million user mark, according to Meta exec Adam Mosseri. The milestone comes one day after Mark Zuckerberg said that the service was “about” to hit 200 million users during the company’s latest earnings call.

While Threads is still relatively tiny compared to Meta’s other apps, it has grown at a much faster clip. Zuckerberg previously announced 175 million users last month as Threads marked its one-year anniversary, and the Meta CEO has repeatedly speculated that it could be the company’s next one-billion-user app.

“We've been building this company for 20 years, and there just are not that many opportunities that come around to grow a billion-person app,” Zuckerberg said. “Obviously, there's a ton of work between now and there.”

Continuing to grow the app’s user base will be key to Meta’s ability to eventually monetize Threads, which currently has no ads or business model. “All these new products, we ship them, and then there's a multi-year time horizon between scaling them and then scaling them into not just consumer experiences but very large businesses,” Zuckerberg said.

While Threads has so far been able to capitalize on the chaos and controversy surrounding X, Meta is still grappling with how to position an app that’s widely viewed as an X alternative. Mosseri and Zuckerberg have said they don’t want the app to promote political content to users who don’t explicitly ask for it. This policy has even raised questions among some Meta employees, The Information recently reported.

Threads’ “for you” algorithm is also widely viewed as slow to keep up with breaking news and current events. Mosseri recently acknowledged the issue. “We’re definitely not fast enough yet, and we’re actively working to get better there,” he wrote in a post on Threads.

This article originally appeared on Engadget at https://www.engadget.com/metas-threads-has-200-million-users-211656147.html?src=rss


Season 2 of ‘Squid Game’ arrives on Netflix December 26

Netflix has finally set a date for the next season of Squid Game, almost three years after the Korean drama became a massive hit in the US. Season 2 is set to hit Netflix on December 26, with a final, third season coming sometime in 2025, the streamer announced.

While the initial teaser for Season 2 doesn’t reveal much about what to expect in the next installment, Netflix shared a few more details about the plot in a letter from Hwang Dong-hyuk, the series’ director and writer.

Seong Gi-hun who vowed revenge at the end of Season 1 returns and joins the game again. Will he succeed in getting his revenge? Front Man doesn’t seem to be an easy opponent this time either. The fierce clash between their two worlds will continue into the series finale with Season 3, which will be brought to you next year.

I am thrilled to see the seed that was planted in creating a new Squid Game grow and bear fruit through the end of this story.

We’ll do our best to make sure we bring you yet another thrill ride. I hope you’re excited for what’s to come. Thank you, always, and see you soon, everyone.

Despite the long wait since the initial season, Netflix has done a lot to capitalize on the success of Squid Game. The series inspired a spinoff reality show, called Squid Game: The Challenge, which has also been greenlit for a second season. The company also treated fans to an IRL Squid Game pop-up in Los Angeles.

Additionally, Netflix announced plans for a Squid Game multiplayer game that will debut alongside Season 2 of the show. Details of the game are unclear, but the company has said that players will “compete with friends in games they’ll recognize from the series.”

This article originally appeared on Engadget at https://www.engadget.com/season-2-of-squid-game-arrives-on-netflix-december-26-000010045.html?src=rss

Google dismisses Elon Musk’s claim that autocomplete engaged in election interference

Google has responded to allegations that it “censored” searches about Donald Trump after Elon Musk baselessly claimed the company had imposed a “search ban” on the former president. The issues, Google explained, were due to bugs in its autocomplete feature. But Musk’s tweet, which was viewed more than 118 million times, nonetheless forced the search giant to publicly explain one of its most basic features.

“Over the past few days, some people on X have posted claims that Search is ‘censoring’ or ‘banning’ particular terms,” Google wrote in a series of posts on X. “That’s not happening.”

Though Google didn’t name Musk specifically, over the weekend the X owner said that “Google has a search ban on President Donald Trump.” The claim appeared to be based on a single screenshot of a search that showed Google suggested “president donald duck” and “president donald regan” when “president donald” was typed into the search box.

The same day, Donald Trump Jr. shared a similar image that showed no autocomplete results relating to Donald Trump for the search “assassination attempt on.” Both Trump Jr. and Musk accused the company of “election interference.”

In its posts Tuesday, Google explained that people are free to search for whatever they want regardless of what appears in its autocomplete suggestions. It added that “built-in protections related to political violence” had prevented autocomplete from suggesting Trump-related searches and that “those systems were out of date.”

Likewise, the company said that the strange suggestions for “president donald” were due to a ”bug that spanned the political spectrum.” It also affected searches related to former President Barack Obama and other figures.

Finally, the company explained that articles about Kamala Harris appearing in search results for Donald Trump are not evidence of a shadowy conspiracy: the two, both of whom are actively campaigning for president, are often mentioned in the same news stories. That may sound like something that should be painfully obvious to anyone who has ever used the internet, but Musk’s post on X has fueled days of conspiracy theories about Google’s intentions.

Musk’s post, which questioned whether the search giant was interfering in the election, was particularly ironic considering that the X owner came under fire the same weekend for sharing a manipulated video of Kamala Harris without a label, a violation of his company’s own policies.

While Google’s statements didn’t cite Musk’s post directly, the company pointed out that X’s search feature has also experienced issues in the past. “Many platforms, including the one we’re posting on now, will show strange or incomplete predictions at various times,” the company said.

This article originally appeared on Engadget at https://www.engadget.com/google-dismisses-elon-musks-claim-that-autocomplete-engaged-in-election-interference-214834630.html?src=rss

The Senate just passed two landmark bills aimed at protecting minors online

The Senate has passed two major online safety bills amid years of debate over social media’s impact on teen mental health. The Kids Online Safety Act (KOSA) and the Children and Teens' Online Privacy Protection Act, also known as COPPA 2.0, passed the Senate in a vote of 91-3.

The bills will next head to the House, though it’s unclear if the measures will have enough support to pass. If passed into law, the bills would be the most significant pieces of legislation regulating tech companies in years.

KOSA requires social media companies like Meta to offer controls to disable algorithmic feeds and other “addictive” features for children under the age of 16. It also requires companies to provide parental supervision features and to safeguard minors from content that promotes eating disorders, self-harm, sexual exploitation and other harms.

One of the most controversial provisions in the bill creates what’s known as a “duty of care.” This means platforms are required to prevent or mitigate certain harmful effects of their products, like “addictive” features or algorithms that promote dangerous content. The Federal Trade Commission would be in charge of enforcing the standard.

The bill was originally introduced in 2022 but stalled amid pushback from digital rights and other advocacy groups who said the legislation would force platforms to spy on teens. A revised version, meant to address some of those concerns, was introduced last year, though the ACLU, EFF and other free speech groups still oppose the bill. In a statement last week, the ACLU said that KOSA would encourage social media companies “to censor protected speech” and “incentivize the removal of anonymous browsing on wide swaths of the internet.”

COPPA 2.0, on the other hand, has been less controversial among privacy advocates. An expansion of the 1998 Children's Online Privacy Protection Act, it aims to update the decades-old law to better reflect the modern internet and social media landscape. If passed, the law would prohibit companies from targeting advertising to children and from collecting personal data on teens between 13 and 16 without consent. It would also require companies to offer an “eraser button” that deletes children’s and teens’ personal information from a platform when “technologically feasible.”

The vote underscores how online safety has become a rare source of bipartisan agreement in the Senate, which has hosted numerous hearings on teen safety issues in recent years. The CEOs of Meta, Snap, Discord, X and TikTok testified at one such hearing earlier this year, during which South Carolina Senator Lindsey Graham accused the executives of having “blood on their hands” for numerous safety lapses.

This article originally appeared on Engadget at https://www.engadget.com/the-senate-just-passed-two-landmark-bills-aimed-at-protecting-minors-online-170935128.html?src=rss

Mark Zuckerberg says ‘f*ck that’ to closed platforms

In his two decades running the company now known as Meta, Mark Zuckerberg has gone through many transformations. More recently, he’s been showing off a seemingly less filtered version of himself. But during a livestreamed conversation with NVIDIA CEO Jensen Huang, the Meta CEO seemed to veer a little more off script than he intended.

The conversation began normally enough, with the two billionaire executives congratulating each other on their AI dominance. Zuckerberg made sure to talk up the company’s recent AI Studio announcement before settling into his usual talking points, which recently have included pointed criticism of Apple.

Zuckerberg then launched into a lengthy rant about his frustrations with “closed” ecosystems like Apple’s App Store. None of that is particularly new, as the Meta founder has been feuding with Apple for years. But then Zuckerberg, who is usually quite controlled in his public appearances, revealed just how frustrated he is, telling Huang that his reaction to being told “no” is “fuck that.”

“I mean, this is sort of selfish, but, you know, after building this company for a while, one of my things for the next 10 or 15 years is like, I just want to make sure that we can build the fundamental technology that we're going to be building social experiences on, because there just have been too many things that I've tried to build and then have just been told ‘nah you can't really build that by the platform provider,’ that at some level I'm just like, ‘nah, fuck that,’” Zuckerberg said.

“There goes our broadcast opportunity,” Huang said. “Sorry,” Zuckerberg said. “Get me talking about closed platforms, and I get angry.”

This article originally appeared on Engadget at https://www.engadget.com/mark-zuckerberg-says-fck-that-to-closed-platforms-235700788.html?src=rss

Instagram creators can now make AI doppelgangers to chat with their followers

The next time you DM a creator on Instagram, you might get a reply from their AI. Meta is starting to roll out its AI Studio, a set of tools that will allow Instagram creators to make an AI persona that can answer questions and chat with their followers and fans on their behalf.

The company first introduced AI Studio at its Connect event last fall but it only recently began to test creator-made AIs with a handful of prominent Instagrammers. Now, Meta is making the tools available to more US-based creators and giving the rest of its users the chance to experiment with specialized AI “characters.”

According to Meta, the new creator AIs are meant to address a long-running issue for Instagram users with large followings: it can be nearly impossible for the service’s most popular users to keep up with the flood of messages they receive every day. Now, though, they’ll be able to make an AI that functions as “an extension of themselves,” says Connor Hayes, who is VP of Product for AI Studio at Meta.

“These creators can actually use the comments that they've made, the captions that they've made, the transcripts of the Reels that they've posted, as well as any custom instructions or links that they want to provide … so that the AI can answer on their behalf,” Hayes tells Engadget.

Mark Zuckerberg has suggested he has big ambitions for such chatbots. In a recent interview with Bloomberg he said he expects there will eventually be “hundreds of millions” of creator-made AIs on Meta’s apps. However, it’s unclear if Instagram users will be as interested in engaging with AI versions of their favorite creators. Meta previously experimented with AI chatbots that took on the personalities of celebrities like Snoop Dogg and Kendall Jenner, but those “characters” proved to be largely underwhelming. Those chatbots have now been phased out, The Information reported.

“One thing that ended up being somewhat confusing for people was, ‘am I talking to the celebrity that is embodying this AI, or am I talking to an AI and they're playing the character,’” Meta’s Hayes says about the celebrity-branded chatbots. “We think that going in this direction where the public figures can represent themselves, or an AI that's an extension of themselves, will be a lot clearer.”

Anyone can create an AI

AI Studio isn’t just for creators, though. Meta will also allow any user to create custom AI “characters” that can chat about specific topics, make memes or offer advice. Like the creator-focused characters, these chatbots will be powered by Meta’s new Llama 3.1 model. Users can share their chatbot creations and track how many people are using them, though they won’t be able to view other users’ interactions with them.

The new chatbots are the latest way Meta has pushed its users to spend more time with its AI as it crams Meta AI into more and more places in its apps. But Meta AI has also at times struggled to relay accurate information. In a blog post, Meta notes that it has “policies and protections in place to keep people safe and help ensure AIs are used responsibly.”

Screenshots provided by the company show that chats with the new AI characters will also have a familiar disclaimer: “Some messages generated by AI may be inaccurate or inappropriate.”

Update July 30, 2024, 4:35 PM PT: This story was updated with additional information about Meta's celebrity-branded chatbots.

This article originally appeared on Engadget at https://www.engadget.com/instagram-creators-can-now-make-ai-doppelgangers-to-chat-with-their-followers-220052768.html?src=rss

Meta needs updated rules for sexually explicit deepfakes, Oversight Board says

Meta’s Oversight Board is urging the company to update its rules around sexually explicit deepfakes. The board made the recommendations as part of its decision in two cases involving AI-generated images of public figures.

The cases stem from two user appeals over AI-generated images of public figures, though the board declined to name the individuals. One post, which originated on Instagram, depicted a nude Indian woman. The post was reported to Meta but the report was automatically closed after 48 hours, as was a subsequent user appeal. The company eventually removed the post after attention from the Oversight Board, which nonetheless overturned Meta’s original decision to leave the image up.

The second post, which was shared to a Facebook group dedicated to AI art, showed “an AI-generated image of a nude woman with a man groping her breast.” Meta automatically removed the post because it had been added to an internal system that can identify images that have been previously reported to the company. The Oversight Board found that Meta was correct to have taken the post down.

In both cases, the Oversight Board said the AI deepfakes violated the company’s rules barring “derogatory sexualized photoshop” images. But in its recommendations to Meta, the Oversight Board said the current language used in these rules is outdated and may make it more difficult for users to report AI-made explicit images.

Instead, the board says Meta should update its policies to make clear that it prohibits non-consensual explicit images that are AI-made or manipulated. “Much of the non-consensual sexualized imagery spread online today is created with generative AI models that either automatically edit existing images or create entirely new ones,” the board writes. “Meta should ensure that its prohibition on derogatory sexualized content covers this broader array of editing techniques, in a way that is clear to both users and the company’s moderators.”

The board also called out Meta’s practice of automatically closing user appeals, which it said could have “significant human rights impacts” on users. However, the board said it didn’t have “sufficient information” about the practice to make a recommendation.

The spread of explicit AI images has become an increasingly prominent issue as “deepfake porn” has become a more widespread form of online harassment in recent years. The board’s decision comes one day after the US Senate unanimously passed a bill cracking down on explicit deepfakes. If passed into law, the measure would allow victims to sue the creators of such images for as much as $250,000.

The cases aren’t the first time the Oversight Board has pushed Meta to update its rules for AI-generated content. In another high-profile case, the board investigated a maliciously edited video of President Joe Biden. The case ultimately resulted in Meta revamping its policies around how AI-generated content is labeled.

This article originally appeared on Engadget at https://www.engadget.com/meta-needs-updated-rules-for-sexually-explicit-deepfakes-oversight-board-says-100005969.html?src=rss