Google is putting more restrictions on AI Overviews after it told people to put glue on pizza

Liz Reid, the Head of Google Search, has admitted that the company's search engine returned some "odd, inaccurate or unhelpful AI Overviews" after the feature rolled out to everyone in the US. The executive published an explanation for Google's more peculiar AI-generated responses in a blog post, in which she also announced that the company has implemented safeguards that should help the new feature return more accurate and less meme-worthy results. 

Reid defended Google and pointed out that some of the more egregious AI Overview responses going around, such as claims that it's safe to leave dogs in cars, are fake. The viral screenshot showing the answer to "How many rocks should I eat?" is real, but she said that Google came up with an answer because a website had published satirical content tackling the topic. "Prior to these screenshots going viral, practically no one asked Google that question," she explained, so the company's AI linked to that website.

The Google VP also confirmed that AI Overview told people to use glue to get cheese to stick to pizza based on content taken from a forum. She said forums typically provide "authentic, first-hand information," but they could also lead to "less-than-helpful advice." The executive didn't mention the other viral AI Overview answers going around, but as The Washington Post reports, the technology also told users that Barack Obama was Muslim and that people should drink plenty of urine to help them pass a kidney stone. 

Reid said the company tested the feature extensively before launch, but "there’s nothing quite like having millions of people using the feature with many novel searches." By examining its AI's responses over the past couple of weeks, Google was able to identify patterns where the technology got things wrong, and it has put protections in place based on those observations. It started by tweaking its AI to better detect humor and satirical content. It has also updated its systems to limit the use of user-generated replies in Overviews, such as social media and forum posts, which could give people misleading or even harmful advice. In addition, it has "added triggering restrictions for queries where AI Overviews were not proving to be as helpful" and has stopped showing AI-generated replies for certain health topics. 
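To make those safeguards concrete, here is a minimal sketch of what a gate along these lines could look like. Everything in it is invented for illustration, including the blocklist, the per-source `is_user_generated` flag and the `satire_score` from an assumed upstream classifier; it is a toy model, not Google's actual pipeline.

```python
# A hypothetical sketch of safeguard checks along the lines Reid describes.
# Every name, flag and threshold here is invented for illustration; this is
# not Google's actual pipeline.

HEALTH_TOPICS = {"kidney stone", "dosage", "vaccine"}  # assumed blocklist

def should_show_overview(query: str, sources: list[dict]) -> bool:
    """Decide whether an AI Overview may be shown for this query."""
    # 1. Stop showing AI-generated replies for certain health topics.
    if any(topic in query.lower() for topic in HEALTH_TOPICS):
        return False
    # 2. Limit reliance on user-generated replies (forum and social posts),
    #    the kind of source behind the glue-on-pizza answer.
    ugc = [s for s in sources if s["is_user_generated"]]
    if len(ugc) > len(sources) / 2:
        return False
    # 3. Skip answers built on sources an upstream classifier flags as
    #    humor or satire (modeled here as a precomputed score).
    if any(s["satire_score"] > 0.8 for s in sources):
        return False
    return True

# The rock-eating query: backed by a satirical article, so it is suppressed.
sources = [
    {"is_user_generated": False, "satire_score": 0.95},
    {"is_user_generated": True, "satire_score": 0.10},
]
print(should_show_overview("How many rocks should I eat?", sources))  # False
```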

This article originally appeared on Engadget at https://www.engadget.com/google-is-putting-more-restrictions-on-ai-overviews-after-it-told-people-to-put-glue-on-pizza-011316780.html?src=rss

Instagram makes its status update feature more interactive

Instagram launched Notes in December 2022 as a way for people to share statuses (not so dissimilar to Facebook) on the platform. Now, the Meta-owned app is taking inspiration from its sister site for more features, with the addition of Note Prompts. 

Instagram first experimented with Note Prompts earlier this year, and the feature allows users to share questions such as "What should I eat?" or "Who is going to be in X city this weekend?" Friends can then respond with tips, suggestions and random thoughts on the subject. It feels very Facebook circa 2012, as does another new feature, Mentions, which lets users tag a friend directly in their Notes. The example Instagram gives, "Hanging with @user later," would be right out of the early 2010s if it just added "Text! :)" Instagram also announced Note Likes, which work similarly to likes everywhere else on Instagram — all users need to do is double-tap a note or tap the heart. 

Notes have only emerged on Instagram in the past couple of years. They mirror Stories in many ways, lasting only 24 hours and offering controls over who can see them (such as just mutual followers). Notes are visible in a user's inbox and on profiles. 

This article originally appeared on Engadget at https://www.engadget.com/instagram-makes-its-status-update-feature-more-interactive-160057778.html?src=rss

Meta caught an Israeli marketing firm running hundreds of fake Facebook accounts

Meta caught an Israeli marketing firm using fake Facebook accounts to run an influence campaign on its platform, the company said in its latest report on coordinated inauthentic behavior. The scheme targeted people in the US and Canada and posted about the Israel-Hamas war.

In all, Meta’s researchers uncovered 510 Facebook accounts, 11 pages, 32 Instagram accounts and one group that were tied to the effort, including fake and previously hacked accounts. The accounts posed as “Jewish students, African Americans and ‘concerned’ citizens” and shared posts that praised Israel’s military actions and criticized the United Nations Relief and Works Agency (UNRWA) and college protests. They also shared Islamophobic comments in Canada, saying that “radical Islam poses a threat to liberal values in Canada.”

Meta’s researchers said the campaign was linked to STOIC, a “political marketing and business intelligence firm” based in Israel, though they didn’t speculate on the motives behind it. STOIC was also active on X and YouTube and ran websites “focused on the Israel-Hamas war and Middle Eastern politics.”

According to Meta, the campaign was discovered before it could build up a large audience and many of the fake accounts were disabled by the company’s automated systems. The accounts reached about 500 followers on Facebook and about 2,000 on Instagram.

The report also notes that the people behind the accounts seemed to use generative AI tools to write many of their comments on the pages of politicians, media organizations and other public figures. “These comments generally linked to the operations’ websites, but they were often met with critical responses from authentic users calling them propaganda,” Meta’s policy director for threat disruption, David Agranovich, said during a briefing with reporters. “So far, we have not seen novel Gen AI-driven tactics that would impede our ability to disrupt the adversarial networks behind them.”

This article originally appeared on Engadget at https://www.engadget.com/meta-caught-an-israeli-marketing-firm-running-hundreds-of-fake-facebook-accounts-150021954.html?src=rss

The Internet Archive has been fending off DDoS attacks for days

If you couldn't access the Internet Archive and its Wayback Machine over the past few days, that's because the website has been under attack. The nonprofit organization announced in a blog post that it's currently in its "third day of warding off an intermittent DDoS cyber-attack." Over the Memorial Day weekend, the organization posted on Twitter/X that most of its services weren't available due to bad actors pummeling its website with "tens of thousands of fake information requests per second." On Tuesday morning, it warned that it's "continuing to experience service disruptions" because the attackers haven't stopped targeting it. 

The website's data doesn't seem to be affected, though, and you can still look up archived pages whenever the site is accessible. "Thankfully the collections are safe, but we are sorry that the denial-of-service attack has knocked us offline intermittently during these last three days," Brewster Kahle, the founder of the Internet Archive, said in a statement. "With the support from others and the hard work of staff we are hardening our defenses to provide more reliable access to our library. What is new is this attack has been sustained, impactful, targeted, adaptive, and importantly, mean."
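For readers curious how sites typically absorb floods like the "tens of thousands of fake information requests per second" described above, a token-bucket rate limiter is one standard mitigation. The sketch below is a generic illustration of that technique, not a description of the Internet Archive's actual defenses.

```python
# A generic token-bucket rate limiter, one common way sites throttle floods
# of fake requests. Illustrative only; not the Internet Archive's setup.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens (requests) refilled per second
        self.capacity = capacity  # maximum burst size allowed
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a request fits the client's budget, else drop it."""
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: allow a client 10 requests/second with bursts of up to 20.
bucket = TokenBucket(rate=10, capacity=20)
served = sum(bucket.allow() for _ in range(100))
print(f"served {served} of 100 burst requests")  # roughly 20
```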

The Internet Archive has yet to identify the source of the attacks, but it noted that libraries and similar institutions are being targeted more frequently these days. One of the institutions it mentioned was the British Library, whose online information system was held hostage for ransom by a hacker group last year. The organization also pointed out that it's being sued by the US book publishing and US recording industries, which accuse it of copyright infringement.

This article originally appeared on Engadget at https://www.engadget.com/the-internet-archive-has-been-fending-off-ddos-attacks-for-days-035950028.html?src=rss

OpenAI’s new safety team is led by board members, including CEO Sam Altman

OpenAI has created a new Safety and Security Committee less than two weeks after the company dissolved the team tasked with protecting humanity from AI’s existential threats. This latest iteration of the group responsible for OpenAI’s safety guardrails will include three board members and CEO Sam Altman, raising questions about whether the move is little more than self-policing theater amid a breakneck race for profit and dominance alongside partner Microsoft.

The Safety and Security Committee, formed by OpenAI’s board, will be led by board members Bret Taylor (Chair), Nicole Seligman, Adam D’Angelo and Sam Altman (CEO). The new team follows co-founder Ilya Sutskever’s and Jan Leike’s high-profile resignations, which raised more than a few eyebrows. Their former “Superalignment Team” was only created last July.

Following his resignation, Leike wrote in an X (Twitter) thread on May 17 that, although he believed in the company’s core mission, he left because the two sides (product and safety) “reached a breaking point.” Leike added that he was “concerned we aren’t on a trajectory” to adequately address safety-related issues as AI grows more intelligent. He posted that the Superalignment team had recently been “sailing against the wind” within the company and that “safety culture and processes have taken a backseat to shiny products.”

A cynical take would be that a company focused primarily on “shiny products” — while trying to fend off the PR blow of high-profile safety departures — might create a new safety team led by the same people speeding toward those shiny products.

[Image: Former OpenAI head of alignment Jan Leike (Jan Leike / X)]

The safety departures earlier this month weren’t the only concerning news from the company recently. It also launched (and quickly pulled) a new voice model that sounded remarkably like two-time Oscar nominee Scarlett Johansson. The Jojo Rabbit actor then revealed that OpenAI CEO Sam Altman had pursued her consent to use her voice to train an AI model, but that she had refused.

In a statement to Engadget, Johansson’s team said she was shocked that OpenAI would cast a voice talent that “sounded so eerily similar” to her after pursuing her authorization. The statement added that Johansson’s “closest friends and news outlets could not tell the difference.”

OpenAI also backtracked on nondisparagement agreements it had required from departing executives, changing its tune to say it wouldn’t enforce them. Before that, the company forced exiting employees to choose between being able to speak against the company and keeping the vested equity they earned. 

The Safety and Security Committee plans to “evaluate and further develop” the company’s processes and safeguards over the next 90 days. After that, the group will share its recommendations with the entire board. After the whole leadership team reviews its conclusions, it will “publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”

In its blog post announcing the new Safety and Security Committee, OpenAI confirmed that the company is currently training its next model, which will succeed GPT-4. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the company wrote.

This article originally appeared on Engadget at https://www.engadget.com/openais-new-safety-team-is-led-by-board-members-including-ceo-sam-altman-164927745.html?src=rss

You can now hum to find a song on YouTube Music for Android

YouTube Music for Android is finally releasing a long-awaited tool that lets people hum a song to search for it, in addition to singing the tune or playing the melody on an instrument, according to reporting by 9to5Google. The software has been in the testing phase since March.

All you have to do is tap the magnifying glass in the top-right corner and look for the waveform icon next to the microphone icon. Tap the waveform icon and start humming or singing. A fullscreen results page should quickly bring up the cover art, song name, artist, album, release year and other important data about the song. The software builds upon the Pixel’s Now Playing feature, which uses AI to “match the sound to the original recording.”
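As a rough illustration of how this kind of matching can work in principle, the toy sketch below reduces a melody to its pitch contour (the up/down shape of the tune), which lets a query match even if you hum in a different key or octave. This is a simplified model with an invented catalog and pitch values, not YouTube Music's actual implementation.

```python
# A toy model of hum-to-search: reduce melodies to pitch contours and pick
# the closest match. Catalog and pitch values are invented for the example.

def contour(pitches: list[float]) -> list[int]:
    """Map a pitch sequence to -1/0/+1 steps, so matching is key-invariant
    (humming in the wrong key or octave still works)."""
    return [(a < b) - (a > b) for a, b in zip(pitches, pitches[1:])]

def similarity(hum: list[float], ref: list[float]) -> float:
    """Fraction of contour steps the hummed clip shares with a reference."""
    h, r = contour(hum), contour(ref)
    n = min(len(h), len(r))
    return sum(x == y for x, y in zip(h, r)) / n if n else 0.0

# Hypothetical reference "database": song title -> note pitches in Hz.
catalog = {
    "Song A": [262, 294, 330, 294, 262],  # up, up, down, down
    "Song B": [330, 330, 262, 262, 294],  # flat, down, flat, up
}
hum = [130, 150, 170, 150, 130]  # same shape as Song A, an octave lower
best = max(catalog, key=lambda title: similarity(hum, catalog[title]))
print(best)  # Song A
```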

The tool comes in a server-side update with version 7.02 of YouTube Music for Android. There doesn’t appear to be any availability information for the iOS release yet, though the feature is most likely headed that way in the near future.

This type of feature isn’t exactly new, even if it’s new to YouTube Music. Google Search rolled out a similar tool back in 2020 and the regular YouTube app began offering something like this last year. Online music streaming platform Deezer also has a “hum to search” tool, released back in 2022.

This article originally appeared on Engadget at https://www.engadget.com/you-can-now-hum-to-find-a-song-on-youtube-music-for-android-190037510.html?src=rss

Sam Altman is ‘embarrassed’ that OpenAI threatened to revoke equity if exiting employees wouldn’t sign an NDA

OpenAI reportedly made exiting employees choose between keeping their vested equity and being able to speak out against the company. According to Vox, which viewed the document in question, employees could “lose all vested equity they earned during their time at the company, which is likely worth millions of dollars” if they didn’t sign a nondisclosure and non-disparagement agreement, thanks to a provision in the off-boarding papers. OpenAI CEO Sam Altman confirmed in a tweet on Saturday evening that such a provision did exist, but said “we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement).”

An OpenAI spokesperson echoed this in a statement to Vox, and Altman said the company “was already in the process of fixing the standard exit paperwork over the past month or so.” But as Vox notes in its report, at least one former OpenAI employee has spoken publicly about sacrificing equity by declining to sign an NDA upon leaving. Daniel Kokotajlo recently posted on an online forum that this decision led to the loss of equity likely amounting to “about 85 percent of my family's net worth at least.”

In Altman’s response, the CEO apologized and said he was “embarrassed” after finding out about the provision, which he claims he was previously unaware of. “[T]here was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication,” he wrote on X. “this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have [sic].” In addition to acknowledging that the company is changing the exit paperwork, Altman went on to say, “[I]f any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too.”

All of this comes after two more high-profile resignations from OpenAI this week. OpenAI co-founder and Chief Scientist Ilya Sutskever announced on Wednesday that he was leaving the company, and was followed soon after by Jan Leike, who was a team leader on OpenAI’s now-dissolved “Superalignment” AI safety team.

This article originally appeared on Engadget at https://www.engadget.com/sam-altman-is-embarrassed-that-openai-threatened-to-revoke-equity-if-exiting-employees-wouldnt-sign-an-nda-184000462.html?src=rss