Spotify’s new Taste Profile feature lets users fine-tune their algorithm’s recommendations

You're responsible for your own Spotify algorithm now. On stage at SXSW, Spotify's co-CEO, Gustav Söderström, announced the Taste Profile feature, which allows users to personally customize exactly what they want to listen to, whether it's music, audiobooks or podcasts. This AI-powered feature is still in beta, and it will be available to Premium users in New Zealand in the coming weeks.

Based on a short video demo, Spotify's Taste Profile feature will show you a summary of your listening habits and offer a "Tell us more" prompt at the bottom. With the new prompt, users can tell the AI what they want to see more of, or flag a genre that keeps popping up in their algorithm that they'd rather get rid of. Spotify said that the Taste Profile will take more ambiguous prompts into consideration, too, like if you're training for a marathon and want upbeat music or want to listen to news podcasts during your commute to work. Spotify added that Taste Profile is an optional feature, and unwilling users can "leave it and enjoy Spotify as usual."

With Taste Profile, Spotify is continuing its momentum of offering AI features, like the Prompted Playlist feature that was made available last month. Unlike the existing AI Playlist feature, Prompted Playlist lets you put in specific requests to generate a playlist, like only including songs from a specific TV show. Like Taste Profile, the Prompted Playlist feature saw beta testing in New Zealand first, before expanding to US and Canadian users a month later.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/spotifys-new-taste-profile-feature-lets-users-fine-tune-their-algorithms-recommendations-191104626.html?src=rss

Digg shuts down for a ‘hard reset’ because it was flooded with bots

Digg has shut down, for now, just a few months after its open beta launched. Justin Mezzell, the company’s CEO, has explained on the home page that it noticed hours after the beta launched that it was already being targeted by SEO spammers. “The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts,” he wrote. Apparently, the Digg team wasn’t ready for the scale and the speed at which bots found and started flooding the website.

Mezzell said Digg banned thousands of accounts and deployed both internal tools and external solutions, but they weren't enough. He admitted that the votes and the comments on the website couldn't be trusted due to the volume of bot activity it got. While Digg has decided to significantly downsize its team, a small number of staff members have stayed to rebuild it completely. He said it wasn't enough to simply present Digg as an alternative to current social networks and community-based websites. "What comes next needs to be genuinely different," he added.

The CEO didn’t explain how Digg will reinvent itself, but he did announce that its founder, Kevin Rose, is joining the company full time. Rose bought back Digg last year in partnership with Reddit co-founder Alexis Ohanian. Back then, they said they had “a fresh vision to restore the spirit of discovery and genuine community that made the early web a fun and exciting place to be.” Based on what happened to Digg, that’s now harder to achieve with the state of the internet today.

This article originally appeared on Engadget at https://www.engadget.com/social-media/digg-shuts-down-for-a-hard-reset-because-it-was-flooded-with-bots-153848094.html?src=rss

Meta is bringing more international news to its AI

Meta AI should soon be better at surfacing international news content thanks to a set of new deals with publishers. The company announced new agreements with international outlets and offered additional details on its recent deal with News Corp. 

The latest deals bring French newspaper Le Figaro, Spanish media company Prisa and German newspaper Süddeutsche Zeitung into the fold. Together, along with News Corp, which runs a number of outlets in the UK, these sources should give Meta AI better access to timely info about world events. Meta didn't disclose terms of the deals — The Wall Street Journal previously reported the News Corp arrangement was worth up to $50 million a year — but it said that it intends to link out to the relevant news sources.

"These integrations will also facilitate easier access to information by linking out to articles, allowing you to visit these partners’ websites for more details while providing value to partners, enabling them to reach new audiences," Meta wrote in an update. The company has a long and sometimes fraught history with publishers as its priorities have shifted over the years. In the past, Meta has struck deals to pay publishers to produce live video and "instant articles" only to change course as news content has become less of a priority for Facebook.

Now, with Meta struggling to compete with its AI rivals, it seems the social media company is once again interested in news content. As the company notes in its blog post, Meta AI isn't always great at surfacing accurate and timely info. I noted this in 2024 when the company's assistant was repeatedly unable to accurately answer seemingly simple questions like "who is the Speaker of the House of Representatives?"

By striking a bunch of deals with publishers, the company should be better equipped to handle these kinds of queries (and hopefully more complex ones). How much benefit publishers will see from these arrangements, however, is an open question. While Meta says it will link out to the relevant news sources, there are lots of outside data points that raise serious questions about the effect AI search tools are having on web traffic.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-bringing-more-international-news-to-its-ai-213323713.html?src=rss

Grammarly has disabled its tool offering generative-AI feedback credited to real writers

Superhuman has taken its writing assistant Grammarly on quite the merry-go-round ride regarding its approach to AI tools. In August, the company launched a feature called Expert Review, which offered AI-generated feedback on your writing that appeared to come from a famous writer or academic of note. These recreations were based on "publicly available information from third-party LLMs," which sounds a lot like web crawlers of dubious legality were involved. 

The suggested experts would be based on the subject matter and could be anyone from great scientific minds to bestselling fiction authors to your friendly neighborhood tech bloggers. Living or dead, these writers' names appeared on Grammarly without their permission or knowledge. "References to experts in this product are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities," the company hedged in a disclaimer on the service. 

As one might imagine, once people took notice, a large number of the living contingent of those writers were none too pleased. In fact, there's a proposed class action suit already underway against Superhuman. The company initially attempted to address the complaints by allowing writers to opt out of the platform. Which I'm sure was a big relief to the deceased contingent, and to those living writers who aren't closely following AI news and might still not know they were being cited by the tool. 

Today, Superhuman CEO Shishir Mehrotra wrote in a LinkedIn post that the company will disable Expert Review while it reassesses the feature. "The agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans," he said. Yes, Carl Sagan must be bemoaning the lack of deep relationships with his fans from the afterlife.

Update, March 11, 2026, 5:34PM ET: Updated to note pending class action lawsuit filed against Superhuman over this feature.

This article originally appeared on Engadget at https://www.engadget.com/ai/grammarly-has-disabled-its-tool-offering-generative-ai-feedback-credited-to-real-writers-201614257.html?src=rss

Google starts rolling out Gemini in Chrome to users in Canada, India and New Zealand

At the start of the year, Google brought a host of new Gemini-powered features, including built-in Nano Banana image generation, to Chrome. After debuting in the United States, those features are now making their way to Chrome users in Canada, India and New Zealand, with support for 50 additional languages in tow. Among the new languages Gemini in Chrome can now converse in are French, Gujarati, Hindi and Spanish.

To try out Gemini in Chrome, tap the sparkle icon at the top right of the interface. This will open the sidebar interface Google introduced in January. From there, you can chat with the company's Gemini chatbot without the need to switch tabs. From the sidebar, you can also access Google's in-house image generator. Additionally, Gemini in Chrome offers integrations with Gmail, Maps, Calendar, YouTube and other Google apps. If you live outside Canada, India or New Zealand, Google says it will make Gemini in Chrome available in more countries and languages throughout the rest of 2026. Oh, and if you don't want to use Gemini in Chrome, you can right-click the sparkle icon and select unpin to never see it again.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-starts-rolling-out-gemini-in-chrome-to-users-in-canada-india-and-new-zealand-023000528.html?src=rss

Meta is buying Moltbook, the ridiculous social network populated by AI bots

Meta is snapping up Moltbook, a Reddit-like social network for AI agents that has been around since January and remains completely ridiculous. The company hasn't disclosed the terms of the deal.

Moltbook and its creators Matt Schlicht and Ben Parr will be joining Meta Superintelligence Labs (MSL) when the deal closes. That's expected to happen in the coming days, according to Axios.

“The Moltbook team joining MSL opens up new ways for AI agents to work for people and businesses," a Meta spokesperson told TechCrunch. "Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space, and we look forward to working together to bring innovative, secure agentic experiences to everyone.”

It seems current Moltbook users will be able to continue interacting with the platform for the time being. Moltbook was built on the back of OpenClaw, a tool that enables people to whip up AI agents that can interact with dozens of different apps. (OpenAI hired the creator of OpenClaw last month.)

Schlicht used OpenClaw to create a bot named “Clawd Clawderberg” and asked it to create a social network for AI agents. And that's how Moltbook came to be.

For what it's worth, Clawd Clawderberg is a play on "Mark Zuckerberg" and Moltbook is a clear riff on "Facebook," so it’s somewhat fitting that Schlicht vibe-coded his way to a job at Meta. It also emerged that it was relatively easy for humans to pose as AI agents and post on Moltbook. Again, all of this is deeply, deeply absurd.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-is-buying-moltbook-the-ridiculous-social-network-populated-by-ai-bots-152732453.html?src=rss

COPPA 2.0 passes the Senate again, unanimously this time

Today the US Senate unanimously passed proposed legislation known as COPPA 2.0. This measure, fully named the Children and Teens’ Online Privacy Protection Act, aims to create new protections for younger users online, such as blocking platforms from collecting their personal data without consent. 

COPPA 2.0 is a modernized take on the Children’s Online Privacy Protection Act of 1998, attempting to address recent changes in common online activities, like targeted advertising, that could prove harmful to minors. Lawmakers have made several attempts to get this bipartisan bill through. While it has made varying amounts of headway in the Senate, none of the COPPA 2.0 bills to date have gotten past the House of Representatives. Industry groups such as NetChoice have previously opposed COPPA 2.0 and other measures around minors' online activity such as KOSA, the Kids Online Safety Act. NetChoice members include Google, YouTube, Meta, Reddit, Discord, TikTok and X. Google specifically has since changed its stance to support COPPA 2.0, however.

"This bill expands the current law protecting our kids online to ensure companies cannot collect personal information from anyone under the age of 17," Senate Democratic Leader Chuck Schumer (D-NY) said in a statement about the latest result. "This is a big step forward for protecting our kids. We hope the House can join us. They haven’t thus far."

However, there has been a bigger push both domestically and internationally toward restrictions on when and how younger people engage online. Several states — Utah, California and Washington to name a few — have enacted laws requiring some level of age verification, either to access mature content online or to use social media apps at all. Many of these efforts have raised concerns about privacy regarding where and how people's personal information is stored and protected. COPPA 2.0 might wind up benefitting from the privacy debates since it emphasizes giving teens and parents ways to protect themselves from having their data used against them rather than asking adults to give up data in order to use the internet as usual.

Update, March 6 2026, 11:38AM ET: Article updated with additional context on Google.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/coppa-20-passes-the-senate-again-unanimously-this-time-215044656.html?src=rss

Mark Zuckerberg downplays Meta’s own research in New Mexico child safety trial

Jurors in a New Mexico child safety trial heard testimony from Meta CEO Mark Zuckerberg today. In the pre-recorded deposition, Zuckerberg was repeatedly asked about the company's understanding of social media addiction and other issues that had been studied by its researchers. 

During the deposition, which was recorded last March, Zuckerberg was asked about numerous findings from researchers at Meta who studied how the company's apps affect users and teens. The CEO downplayed the significance of many of these documents.

Early in the testimony, which was viewed by Engadget on Courtroom View Network, Zuckerberg was questioned about a document on the effect of feedback on Facebook users. The document stated that "contributors on Facebook are likely to learn to associate the act of posting with feedback" which will "lead contributors to seek rewards by visiting the site more often.” Zuckerberg said he wasn’t “sure if that's actually how it works in practice, but I agree that you're summarizing what they appear to be saying.”

Later, the CEO was questioned about a document that graphed the proportion of 11- and 12-year-olds who were monthly active users on Instagram. The chart indicated that at the time, around 20 percent of 11-year-olds were monthly users of the service. "I agree that the graph says that, I am not familiar with what methodology we were using to estimate this," Zuckerberg said. "I assume that if we had direct knowledge that any given person was under the age of 13, that we would have them removed from our services."

New Mexico's attorney general sued the company in 2023 for alleged lapses in child safety, including facilitating predators' access to minors and building features it knew were addictive. In court, Meta's lawyers and executives have disputed the idea that social media should be considered an "addiction." In public statements, the company has said that lawsuits have relied on "cherry-picked quotes and snippets of conversations taken out of context" and that it "has consistently put teen safety ahead of growth for over a decade."

As with his recent testimony in a separate trial over social media addiction in Los Angeles, Zuckerberg repeatedly rejected the "characterization" of questions that were posed to him. And he said that Meta's goal was to make its apps "useful" rather than to increase the amount of time people spend with them. 

Zuckerberg was also questioned about a document written by a company researcher that stated "there is increasing scientific evidence, particularly in the US, … that the average net effect of Facebook on people's well being is slightly negative." The CEO said that "my understanding is that the general consensus view is not that."

It's not the first time a Meta executive has tried to downplay the significance of internal research. The company used a similar strategy in 2021 after former employee turned whistleblower Frances Haugen disclosed documents showing that Facebook's researchers had found that Instagram made some teen girls feel worse about themselves.

Zuckerberg's testimony was played one day after jurors heard recorded testimony from Instagram chief Adam Mosseri. The exec was also asked about Haugen's disclosures and Meta's response to them. Some of those disclosures were based on "problematic research," he said. "Most research is surveys. We run hundreds of surveys every month."

This article originally appeared on Engadget at https://www.engadget.com/social-media/mark-zuckerberg-downplays-metas-own-research-in-new-mexico-child-safety-trial-222924340.html?src=rss

Instagram will alert parents if teens repeatedly search for suicide or self-harm content

Instagram is adding a new alert for the parents of teen users of its social media platform. The network will notify parents if their child repeatedly searches for terms about suicide or self-harm in a short time frame. From that notification, the parent will optionally be able to access resources for having conversations with their teen about these topics. These alerts will begin rolling out for parental supervision users in the US, UK, Australia and Canada next week, with more regions to be added in the future.

"We chose a threshold that requires a few searches within a short period of time, while still erring on the side of caution," Instagram's blog post explains. "While that means we may sometimes notify parents when there may not be real cause for concern, we feel — and experts agree — that this is the right starting point, and we’ll continue to monitor and listen to feedback to make sure we’re in the right place." 

The platform reiterated that search results for terms connected to suicide and self-harm are blocked for teen users, and content about those topics is not shown to them under its current policies. Instagram also noted that a similar parental alert feature is in the works for its AI tools, but news on that isn't expected until later this year.

This article originally appeared on Engadget at https://www.engadget.com/social-media/instagram-will-alert-parents-if-teens-repeatedly-search-for-suicide-or-self-harm-content-120000156.html?src=rss