Meta says the future of Facebook is young adults (again)

When you think of the 20-year-old social network that is Facebook, its popularity among “young adults” is probably not what comes to mind. Naturally, Meta wants to change that and the company is once again telling the world it intends to reorient its platform in order to appeal to that demographic.

In an update from Tom Alison, who heads up the Facebook app for Meta, he says that the service is shifting to reflect an “increased focus on young adults” compared with other users. “Facebook is still for everyone, but in order to build for the next generation of social media consumers, we’ve made significant changes with young adults in mind,” he wrote.

If any of this sounds familiar, it’s because Meta executives have been trying to win over “young adults” for years in an effort to better compete with TikTok. Mark Zuckerberg said almost three years ago that he wanted to make young adults the company’s “North Star.” And Alison and Zuckerberg have both been talking about the Facebook app’s pivot to a discovery-focused feed rather than one based on users’ connections.

That shift is now well underway. Alison said that the company’s AI advancements have already improved recommendations for Reels and feed, and that “advanced recommendations technology will power more products” over the next year. He added that private sharing among users is also on the rise, with more users sharing video (though no word on the once-rumored plan to bring messaging back into the main app).

Notably, Alison’s note makes no mention of the “metaverse,” which Zuckerberg also once saw as a central part of the company’s future. Instead, he says that “leaning into new product capabilities enabled by AI” is a significant goal, along with luring younger users. That’s also not surprising, given that Meta and Zuckerberg have recently tried to rebrand some of the company’s metaverse ambitions as AI advancements.

But it’s also not clear how successful Meta will be in its efforts to win over young adults. Though Alison says Facebook has seen “five quarters of healthy growth in young adult app usage in the US and Canada,” with 40 million young adult daily active users, that’s still a relatively small share of the 205 million daily US Facebook users the company reported in February, the last time it broke out user numbers for the app.

This article originally appeared on Engadget at https://www.engadget.com/meta-says-the-future-of-facebook-is-young-adults-again-203500866.html?src=rss

Google is putting more restrictions on AI Overviews after it told people to put glue on pizza

Liz Reid, the Head of Google Search, has admitted that the company's search engine has returned some "odd, inaccurate or unhelpful AI Overviews" after they rolled out to everyone in the US. The executive published an explanation for Google's more peculiar AI-generated responses in a blog post, where she also announced that the company has implemented safeguards that will help the new feature return more accurate and less meme-worthy results.

Reid defended Google and pointed out that some of the more egregious AI Overview responses going around, such as claims that it's safe to leave dogs in cars, are fake. The viral screenshot showing the answer to "How many rocks should I eat?" is real, but she said that Google came up with an answer because a website published satirical content on the topic. "Prior to these screenshots going viral, practically no one asked Google that question," she explained, so the company's AI linked to that website.

The Google VP also confirmed that AI Overview told people to use glue to get cheese to stick to pizza based on content taken from a forum. She said forums typically provide "authentic, first-hand information," but they could also lead to "less-than-helpful advice." The executive didn't mention the other viral AI Overview answers going around, but as The Washington Post reports, the technology also told users that Barack Obama was Muslim and that people should drink plenty of urine to help them pass a kidney stone. 

Reid said the company tested the feature extensively before launch, but "there’s nothing quite like having millions of people using the feature with many novel searches." By examining examples of its AI's responses over the past couple of weeks, Google was able to identify patterns where the technology didn't get things right. It has since put protections in place based on those observations, starting by tweaking its AI to better detect humor and satirical content. It has also updated its systems to limit the use of user-generated replies in Overviews, such as social media and forum posts, which could give people misleading or even harmful advice. The company has additionally "added triggering restrictions for queries where AI Overviews were not proving to be as helpful" and has stopped showing AI-generated responses for certain health topics.

This article originally appeared on Engadget at https://www.engadget.com/google-is-putting-more-restrictions-on-ai-overviews-after-it-told-people-to-put-glue-on-pizza-011316780.html?src=rss

Instagram makes its status update feature more interactive

Instagram launched Notes in December 2022 as a way for people to share statuses (not so dissimilar to Facebook) on the platform. Now, the Meta-owned app is taking inspiration from its sister site for more features, with the addition of Note Prompts. 

Instagram first experimented with Note Prompts earlier this year. The feature lets users share questions such as "What should I eat?" or "Who is going to be in X city this weekend?" Friends can then respond with tips, suggestions and random thoughts on the subject. It feels very Facebook circa 2012, as does another new feature, Mentions, which lets users tag a friend directly in their Notes. The example Instagram gives, "Hanging with @user later," would be straight out of the early 2010s if it just tacked on "Text! :)" Instagram also announced Note Likes, which work like likes everywhere else on Instagram: all users need to do is double-tap a note or tap the heart.

Notes, which only arrived on Instagram in the past couple of years, mirror Stories in many ways: they last only 24 hours and come with controls over who can see them (such as just mutual followers). Notes are visible in a user's inbox and on profiles.

This article originally appeared on Engadget at https://www.engadget.com/instagram-makes-its-status-update-feature-more-interactive-160057778.html?src=rss

Meta caught an Israeli marketing firm running hundreds of fake Facebook accounts

Meta caught an Israeli marketing firm using fake Facebook accounts to run an influence campaign on its platform, the company said in its latest report on coordinated inauthentic behavior. The scheme targeted people in the US and Canada and posted about the Israel-Hamas war.

In all, Meta’s researchers uncovered 510 Facebook accounts, 11 pages, 32 Instagram accounts and one group that were tied to the effort, including fake and previously hacked accounts. The accounts posed as “Jewish students, African Americans and ‘concerned’ citizens” and shared posts that praised Israel’s military actions and criticized the United Nations Relief and Works Agency (UNRWA) and college protests. They also shared Islamophobic comments in Canada, claiming that “radical Islam poses a threat to liberal values in Canada.”

Meta’s researchers said the campaign was linked to STOIC, a “political marketing and business intelligence firm” based in Israel, though they didn’t speculate on the motives behind it. STOIC was also active on X and YouTube and ran websites “focused on the Israel-Hamas war and Middle Eastern politics.”

According to Meta, the campaign was discovered before it could build up a large audience and many of the fake accounts were disabled by the company’s automated systems. The accounts reached about 500 followers on Facebook and about 2,000 on Instagram.

The report also notes that the people behind the accounts appeared to use generative AI tools to write many of their comments on the pages of politicians, media organizations and other public figures. “These comments generally linked to the operations’ websites, but they were often met with critical responses from authentic users calling them propaganda,” Meta’s policy director for threat disruption, David Agranovich, said during a briefing with reporters. “So far, we have not seen novel Gen AI driven tactics that would impede our ability to disrupt the adversarial networks behind them.”

This article originally appeared on Engadget at https://www.engadget.com/meta-caught-an-israeli-marketing-firm-running-hundreds-of-fake-facebook-accounts-150021954.html?src=rss

The Internet Archive has been fending off DDoS attacks for days

If you couldn't access the Internet Archive and its Wayback Machine over the past few days, that's because the website has been under attack. The nonprofit organization announced in a blog post that it's currently in its "third day of warding off an intermittent DDoS cyber-attack." Over the Memorial Day weekend, the organization posted on Twitter/X that most of its services were unavailable due to bad actors pummeling its website with "tens of thousands of fake information requests per second." On Tuesday morning, it warned that it's "continuing to experience service disruptions" because the attackers haven't stopped targeting it.

The website's data doesn't seem to be affected, though, and users could still look up archived pages whenever the site was reachable. "Thankfully the collections are safe, but we are sorry that the denial-of-service attack has knocked us offline intermittently during these last three days," Brewster Kahle, the founder of the Internet Archive, said in a statement. "With the support from others and the hard work of staff we are hardening our defenses to provide more reliable access to our library. What is new is this attack has been sustained, impactful, targeted, adaptive, and importantly, mean."

The Internet Archive has yet to identify the source of the attacks, but it noted that libraries and similar institutions are being targeted more frequently these days. One institution it mentioned was the British Library, whose online information system was held for ransom by a hacker group last year. It also pointed out that it's being sued by the US book publishing and recording industries, which accuse it of copyright infringement.

This article originally appeared on Engadget at https://www.engadget.com/the-internet-archive-has-been-fending-off-ddos-attacks-for-days-035950028.html?src=rss

OpenAI’s new safety team is led by board members, including CEO Sam Altman

OpenAI has created a new Safety and Security Committee less than two weeks after the company dissolved the team tasked with protecting humanity from AI’s existential threats. This latest iteration of the group responsible for OpenAI’s safety guardrails will include two board members and CEO Sam Altman, raising questions about whether the move is little more than self-policing theatre amid a breakneck race for profit and dominance alongside partner Microsoft.

The Safety and Security Committee, formed by OpenAI’s board, will be led by board members Bret Taylor (Chair), Nicole Seligman, Adam D’Angelo and Sam Altman (CEO). The new team follows the high-profile resignations of co-founder Ilya Sutskever and Jan Leike, which raised more than a few eyebrows. Their former “Superalignment” team was created only last July.

Following his resignation, Leike wrote in an X (Twitter) thread on May 17 that, although he believed in the company’s core mission, he left because the two sides (product and safety) “reached a breaking point.” Leike added that he was “concerned we aren’t on a trajectory” to adequately address safety-related issues as AI grows more intelligent. He posted that the Superalignment team had recently been “sailing against the wind” within the company and that “safety culture and processes have taken a backseat to shiny products.”

A cynical take would be that a company focused primarily on “shiny products” — while trying to fend off the PR blow of high-profile safety departures — might create a new safety team led by the same people speeding toward those shiny products.

Former OpenAI head of alignment Jan Leike (Jan Leike / X)

The safety departures earlier this month weren’t the only concerning news from the company recently. It also launched (and quickly pulled) a new voice model that sounded remarkably like two-time Oscar nominee Scarlett Johansson. The Jojo Rabbit actor then revealed that OpenAI CEO Sam Altman had sought her consent to use her voice to train an AI model, but that she had refused.

In a statement to Engadget, Johansson’s team said she was shocked that OpenAI would cast a voice talent that “sounded so eerily similar” to her after pursuing her authorization. The statement added that Johansson’s “closest friends and news outlets could not tell the difference.”

OpenAI also backtracked on nondisparagement agreements it had required from departing executives, changing its tune to say it wouldn’t enforce them. Before that, the company forced exiting employees to choose between being able to speak against the company and keeping the vested equity they earned. 

The Safety and Security Committee plans to “evaluate and further develop” the company’s processes and safeguards over the next 90 days. After that, the group will share its recommendations with the entire board. After the whole leadership team reviews its conclusions, it will “publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

In its blog post announcing the new Safety and Security Committee, OpenAI confirmed that the company is currently training its next model, which will succeed GPT-4. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company wrote.

This article originally appeared on Engadget at https://www.engadget.com/openais-new-safety-team-is-led-by-board-members-including-ceo-sam-altman-164927745.html?src=rss
