Apple has reportedly resumed talks with OpenAI to build a chatbot for the iPhone

Apple has resumed conversations with OpenAI, the maker of ChatGPT, to power some AI features coming to iOS 18, according to a new report in Bloomberg. Apple is also building its own large language models to power some iOS 18 features, but its talks with OpenAI are centered around a “chatbot/search component,” according to Bloomberg reporter Mark Gurman. 

Apple is also reportedly in talks with Google to license Gemini, Google’s own AI-powered chatbot, for iOS 18. Bloomberg reports that those talks are ongoing, and things could go either way because Apple hasn’t made a final decision on which company’s technology to use. It’s conceivable, Gurman says, that Apple could ultimately license AI tech from both companies or from neither.

So far, Apple has been notably quiet about its AI efforts even as the rest of Silicon Valley has descended into an AI arms race. But it has dropped enough hints to indicate that it’s cooking up something. When the company announced its earnings in February, CEO Tim Cook said that Apple is continuing to work on and invest in artificial intelligence and is “excited to share the details of our ongoing work in that space later this year.” Apple claimed that the brand new M3 MacBook Air it launched last month was the “world’s best consumer laptop for AI,” and it will reportedly start releasing AI-centric laptops and desktops later this year. And earlier this week, Apple released a handful of open-source large language models that are designed to run locally on devices rather than in the cloud.

It’s still unclear what Apple’s AI features in iPhones and other devices will look like. Generative AI is still notoriously unreliable and prone to making up answers. Recent AI-powered gadgets like the Humane Ai Pin have launched to disastrous reviews, while others like the Rabbit R1 have yet to prove themselves valuable.

We’ll find out more at WWDC on June 10.

This article originally appeared on Engadget at https://www.engadget.com/apple-has-reportedly-resumed-talks-with-openai-to-build-a-chatbot-for-the-iphone-002302644.html?src=rss

The world’s leading AI companies pledge to protect the safety of children online

Leading artificial intelligence companies including OpenAI, Microsoft, Google, Meta and others have jointly pledged to prevent their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by child-safety group Thorn and All Tech Is Human, a non-profit focused on responsible tech.

The pledges from AI companies, Thorn said, “set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse as a future with generative AI unfolds.” The goal of the initiative is to prevent the creation of sexually explicit material involving children and take it off social media platforms and search engines. More than 104 million files of suspected child sexual abuse material were reported in the US in 2023 alone, Thorn says. In the absence of collective action, generative AI is poised to make this problem worse and overwhelm law enforcement agencies that are already struggling to identify genuine victims.

On Tuesday, Thorn and All Tech Is Human released a new paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse” that outlines strategies and lays out recommendations for companies that build AI tools, search engines, social media platforms, hosting companies and developers to take steps to prevent generative AI from being used to harm children.

One of the recommendations, for instance, asks companies to choose the data sets used to train AI models carefully and to avoid not only those containing instances of CSAM but also those containing adult sexual content, because of generative AI’s propensity to combine the two concepts. Thorn is also asking social media platforms and search engines to remove links to websites and apps that let people “nudify” images of children, thus creating new AI-generated child sexual abuse material online. A flood of AI-generated CSAM, according to the paper, will make identifying genuine victims of child sexual abuse more difficult by adding to the “haystack problem” — a reference to the amount of content that law enforcement agencies must currently sift through.

“This project was intended to make abundantly clear that you don’t need to throw up your hands,” Thorn’s vice president of data science Rebecca Portnoff told the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.”

Some companies, Portnoff said, had already agreed to separate images, video and audio that involved children from data sets containing adult content to prevent their models from combining the two. Others also add watermarks to identify AI-generated content, but the method isn’t foolproof — watermarks and metadata can be easily removed.

This article originally appeared on Engadget at https://www.engadget.com/the-worlds-leading-ai-companies-pledge-to-protect-the-safety-of-children-online-213558797.html?src=rss

Adobe Photoshop’s latest beta makes AI-generated images from simple text prompts

Nearly a year after adding generative AI-powered editing capabilities to Photoshop, Adobe is souping up its flagship product with even more AI. On Tuesday, the company announced that Photoshop is getting the ability to generate images from simple text prompts directly within the app. There are also new features that let the AI draw inspiration from reference images to create new ones and generate backgrounds more easily. Adobe thinks the tools will make Photoshop easier to use for both professionals and casual enthusiasts who may have found the app’s learning curve steep.

“A big, blank canvas can sometimes be the biggest barrier,” Erin Boyce, Photoshop’s senior marketing director, told Engadget in an interview. “This really speeds up time to creation. The idea of getting something from your mind to the canvas has never been easier.” The new feature is simply called “Generate Image” and will be available as an option in Photoshop right alongside the traditional option that lets you import images into the app.

An existing AI-powered feature called Generative Fill that previously let you add, extend or remove specific parts of an image has been upgraded too. It now allows users to add AI-generated images to an existing image that blend in seamlessly with the original. In a demo shown to Engadget, an Adobe executive was able to circle a picture of an empty salad dish, for instance, and ask Photoshop to fill it with a picture of AI-generated tomatoes. She was also able to generate variations of the tomatoes and choose one of them to be part of the final image. In another example, the executive replaced an acoustic guitar held by an AI-generated bear with multiple versions of electric guitars just by using text prompts, and without resorting to Photoshop’s complex tools or brushes.

Adobe's new AI feature in Photoshop lets users easily replace parts of an image with a simple text prompt.
Adobe

These updates are powered by Firefly Image 3, the latest version of Adobe’s family of generative AI models that the company also unveiled today. Adobe said Firefly 3 produces images of a higher quality than previous models, provides more variations, and understands your prompts better. The company claims that more than 7 billion images have been generated so far using Firefly.

Adobe is far from the only company stuffing generative AI features into its products. Over the last year, companies big and small have revamped their products and services with AI. Both Google and Microsoft, for instance, have upgraded their cash cows, Search and Office respectively, with AI features. More recently, Meta has started putting its own AI chatbot into Facebook, Messenger, WhatsApp, and Instagram. But while it’s still unclear how those bets will pan out, Adobe’s updates to Photoshop seem more materially useful for creators. The company said Photoshop’s new AI features had driven a 30 percent increase in Photoshop subscriptions.

Meanwhile, generative AI has been in the crosshairs of artists, authors, and other creative professionals, who say that the foundational models that power the tech were trained on copyrighted media without consent or compensation. Generative AI companies are currently battling lawsuits from dozens of artists and authors. Adobe says that Firefly was trained on licensed media from Adobe Stock, since it was designed to create content for commercial use, unlike competitors like Midjourney whose models are trained in part by illegally scraping images off the internet. But a recent report from Bloomberg showed that Firefly, too, was trained in part on AI-generated images from those same rivals, including Midjourney (an Adobe spokesperson told Bloomberg that less than 5 percent of images in its training data came from other AI rivals).

To address concerns about the use of generative AI to create disinformation, Adobe said that all images created in Photoshop using generative AI tools will automatically include tamper-proof “Content Credentials” in the file’s metadata, which act like digital “nutrition labels” indicating that an image was generated with AI. However, it’s still not a perfect defense against image misuse, as there are several ways to sidestep metadata and watermarks.
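To illustrate why metadata-based labels are a weak defense on their own: much file-level metadata is simply stored alongside the image data and survives only as long as nothing rewrites the file. As a simplified sketch (this is not how C2PA Content Credentials are specifically embedded, and the function names here are hypothetical), the following Python snippet shows how the textual metadata chunks of a PNG can be dropped while leaving the image itself untouched:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"
TEXT_CHUNKS = (b"tEXt", b"zTXt", b"iTXt")  # ancillary chunks that carry textual metadata

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: 4-byte length, 4-byte type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_text_chunks(png: bytes) -> bytes:
    """Return a copy of a PNG with textual metadata chunks removed."""
    assert png.startswith(PNG_SIG), "not a PNG file"
    out = bytearray(PNG_SIG)
    pos = len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length  # length field + type + data + CRC
        if ctype not in TEXT_CHUNKS:
            out += png[pos:end]  # keep critical chunks (IHDR, IDAT, IEND, ...)
        pos = end
    return bytes(out)

# Build a minimal 1x1 grayscale PNG carrying a provenance-style comment.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit grayscale
idat = zlib.compress(b"\x00\x00")  # one filter byte + one pixel
demo = (PNG_SIG
        + _chunk(b"IHDR", ihdr)
        + _chunk(b"tEXt", b"Comment\x00generated with AI")
        + _chunk(b"IDAT", idat)
        + _chunk(b"IEND", b""))

cleaned = strip_text_chunks(demo)
assert b"generated with AI" in demo
assert b"generated with AI" not in cleaned  # the label is gone
assert b"IDAT" in cleaned                   # the image data is intact
```

Content Credentials are more robust than a bare text chunk — they are cryptographically signed manifests — but as the article notes, a signed label that can simply be stripped from the file still leaves the unlabeled copy free to circulate.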

The new features will be available in beta in Photoshop starting today and will roll out to everyone later this year. Meanwhile, you can play with Firefly 3 on Adobe’s website for free. 

This article originally appeared on Engadget at https://www.engadget.com/adobe-photoshops-latest-beta-makes-ai-generated-images-from-simple-text-prompts-090056096.html?src=rss

Mozilla urges WhatsApp to combat misinformation ahead of global elections

In 2024, four billion people — about half the world’s population — in 64 countries including large democracies like the US and India, will head to the polls. Social media companies like Meta, YouTube and TikTok, have promised to protect the integrity of those elections, at least as far as discourse and factual claims being made on their platforms are concerned. Missing from the conversation, however, is closed messaging app WhatsApp, which now rivals public social media platforms in both scope and reach. That absence has researchers from non-profit Mozilla worried.

“Almost 90% of the safety interventions pledged by Meta ahead of these elections are focused on Facebook and Instagram,” Odanga Madung, a senior researcher at Mozilla focused on elections and platform integrity, told Engadget. “Why has Meta not publicly committed to a public road map of exactly how it’s going to protect elections within [WhatsApp]?”

Over the last ten years, WhatsApp, which Meta (then Facebook) bought for $19 billion in 2014, has become the default way for most of the world outside the US to communicate. In 2020, WhatsApp announced that it had more than two billion users around the world — a scale that dwarfs every other social or messaging app except Facebook itself.

Despite that scale, Meta’s focus has mostly been on Facebook when it comes to election-related safety measures. Mozilla’s analysis found that while Facebook has made 95 election-related policy announcements since 2016, the year the social network came under scrutiny for helping spread fake news and foster extreme political sentiments, WhatsApp has made only 14. By comparison, Google and YouTube have made 35 and 27 announcements respectively, while X and TikTok have made 34 and 21. “From what we can tell from its public announcements, Meta’s election efforts seem to overwhelmingly prioritize Facebook,” wrote Madung in the report.

Mozilla is now calling on Meta to make major changes to how WhatsApp functions on polling days and in the months before and after a country’s elections. They include adding disinformation labels to viral content (“Highly forwarded: please verify” instead of the current “forwarded many times”), restricting broadcast and Communities features that let people blast messages to hundreds of people at once, and nudging people to “pause and reflect” before they forward anything. More than 16,000 people have signed Mozilla’s pledge asking WhatsApp to slow the spread of political disinformation, a company spokesperson told Engadget.

WhatsApp first started adding friction to its service after dozens of people were killed in India, the company’s largest market, in a series of lynchings sparked by misinformation that went viral on the platform. This included limiting the number of people and groups that users could forward a piece of content to, and distinguishing forwarded messages with “forwarded” labels. The label was meant to curb misinformation; the idea was that people might treat forwarded content with greater skepticism.

“Someone in Kenya or Nigeria or India using WhatsApp for the first time is not going to think about the meaning of the ‘forwarded’ label in the context of misinformation,” Madung said. “In fact, it might have the opposite effect — that something has been highly forwarded, so it must be credible. For many communities, social proof is an important factor in establishing the credibility of something.”

The idea of asking people to pause and reflect came from a feature Twitter once implemented, in which the app prompted people to actually read an article before retweeting it if they hadn’t opened it first. Twitter said the prompt led to a 40% increase in people opening articles before retweeting them.

And asking WhatsApp to temporarily disable its broadcast and Communities features arose from concerns over their potential to blast messages, forwarded or otherwise, to thousands of people at once. “They’re trying to turn this into the next big social media platform,” Madung said. “But without the consideration for the rollout of safety features.”

“WhatsApp is one of the only technology companies to intentionally constrain sharing by introducing forwarding limits and labeling messages that have been forwarded many times,” a WhatsApp spokesperson told Engadget. “We’ve built new tools to empower users to seek accurate information while protecting them from unwanted contact, which we detail on our website.”

Mozilla’s demands came out of research around platforms and elections that the company did in Brazil, India and Liberia. The first two are among WhatsApp’s largest markets, while most of the population of Liberia lives in rural areas with low internet penetration, making traditional online fact-checking nearly impossible. Across all three countries, Mozilla found political parties using WhatsApp’s broadcast feature heavily to “micro-target” voters with propaganda and, in some cases, hate speech.

WhatsApp’s encrypted nature also makes it impossible for researchers to monitor what is circulating within the platform’s ecosystem — a limitation that isn’t stopping some of them from trying. In 2022, two Rutgers professors, Kiran Garimella and Simon Chandrachud, visited the offices of political parties in India and managed to convince officials to add them to 500 WhatsApp groups that they ran. The data they gathered formed the basis of an award-winning paper called “What circulates on Partisan WhatsApp in India?” Although the findings were surprising — Garimella and Chandrachud found that misinformation and hate speech did not, in fact, make up a majority of the content of these groups — the authors clarified that their sample size was small, and they may have deliberately been excluded from groups where hate speech and political misinformation flowed freely.

“Encryption is a red herring to prevent accountability on the platform,” Madung said. “In an electoral context, the problems are not necessarily with the content purely. It’s about the fact that a small group of people can end up significantly influencing groups of people with ease. These apps have removed the friction of the transmission of information through society.”

This article originally appeared on Engadget at https://www.engadget.com/mozilla-urges-whatsapp-to-combat-misinformation-ahead-of-global-elections-200002024.html?src=rss

Netflix is done telling us how many people use Netflix

Netflix will stop disclosing the number of people who sign up for its service, as well as the revenue it generates from each subscriber, starting next year, the company announced on Thursday. It will focus instead on highlighting revenue growth and the amount of time spent on its platform.

“In our early days, when we had little revenue or profit, membership growth was a strong indicator of our future potential,” the company said in a letter to shareholders. “But now we’re generating very substantial profit and free cash flow.”

Netflix revealed that the service added 9.33 million subscribers over the last few months, bringing the total number of paying households worldwide to nearly 270 million. Despite its decision to stop reporting user numbers each quarter, Netflix said that the company will “announce major subscriber milestones as we cross them,” which means we’ll probably hear about it when it crosses 300 million.

Netflix estimates that more than half a billion people around the world watch TV shows and movies through its service, an audience it is now figuring out how to squeeze even more money out of through new pricing tiers, a crackdown on password-sharing, and showing ads. Over the last few years, it has also steadily added games like the Grand Theft Auto trilogy, Hades, Dead Cells, Braid, and more, to its catalog.

Subscriber metrics are an important signal to Wall Street because they show how quickly a company is growing. But Netflix’s move to stop reporting these is something that we’ve seen from other companies before. In February, Meta announced that it would no longer break out the number of daily and monthly Facebook users each quarter but only reveal how many people collectively used Facebook, WhatsApp, Messenger, and Instagram. In 2018, Apple, too, stopped reporting the number of iPhones, iPads, and Macs it sold each quarter, choosing to focus, instead, on how much money it made in each category.

This article originally appeared on Engadget at https://www.engadget.com/netflix-is-done-telling-us-how-many-people-use-netflix-215149971.html?src=rss

Meta is stuffing its AI chatbot into your Instagram DMs

On Friday, people around the web noticed a new addition to their Instagram: Meta AI, the company’s general-purpose, AI-powered chatbot that can answer questions, write poetry and generate images with a simple text prompt. The move isn’t surprising. Meta revealed Meta AI in September 2023 and has spent the last few months adding the chatbot to products like Facebook Messenger and WhatsApp, so adding it to Instagram seems like a no-brainer. 

“Our generative AI-powered experiences are under development in various phases, and we’re testing a range of them publicly in a limited capacity,” a Meta spokesperson told Engadget, which suggests that not everyone has the feature available yet. TechCrunch, which first noted the news, said that Meta AI was showing up in Instagram’s search bar. But for some of us at Engadget, the feature actually showed up in the search bar in Instagram’s Direct Messaging inbox. 

Tapping it let me start a conversation with Meta AI just as I would DM a friend on Instagram. I was able to ask the chatbot to give me definitions of words, suggest headlines for some stories I’m working on, and generate images of dogs on skateboards. I was also able to ask Meta AI to recommend Reels with cats in them, which it was able to do easily.

But when my colleague Aaron Souppouris asked Meta AI in WhatsApp to recommend Reels, it showed him some Reels in that app too — suggesting that the bot in Instagram isn’t really doing anything specific to Instagram. Instead, Meta is simply shoehorning the same chatbot into every app it owns.

If you tap a hamburger menu within the bot, Meta AI will also show you a long list of possible actions you can ask the bot to take.

Meta AI will present a list of actions you can ask the bot to take.
Aaron Souppouris

Why you would want a chatbot in Instagram to suggest tips for dealing with credit card debt, debate cardio versus weights, or offer hacks for traveling with points, I do not know. But the point is that if you want to, you can.

This article originally appeared on Engadget at https://www.engadget.com/meta-is-stuffing-its-ai-chatbot-into-your-instagram-dms-231855991.html?src=rss

Google’s new AI video generator is more HR than Hollywood

For most of us, creating documents, spreadsheets and slide decks is an inescapable part of work life in 2024. What's not is creating videos. That’s something Google would like to change. On Tuesday, the company announced Google Vids, a video creation app for work that the company says can make everyone a “great storyteller” using the power of AI.

Vids uses Gemini, Google’s latest AI model, to quickly create videos for the workplace. Type in a prompt, feed in some documents, pictures, and videos, and sit back and relax as Vids generates an entire storyboard, script, music and voiceover. "As a storytelling medium, video has become ubiquitous for its immediacy and ability to ‘cut through the noise,’ but it can be daunting to know where to start," said Aparna Pappu, a Google vice president, in a blog post announcing the app. "Vids is your video, writing, production and editing assistant, all in one."

In a promotional video, Google uses Vids to create a video recapping moments from its Cloud Next conference in Las Vegas, the annual event during which it showed off the app. Based on a simple prompt telling it to create a recap video and attaching a document full of information about the event, Vids generates a narrative outline that can be edited. It then lets the user select a template for the video — you can choose from options like research proposal, new employee intro, team milestone, and quarterly business update — and then crunches for a few moments before spitting out a first draft, complete with a storyboard, stock media, music, transitions, and animation. It even generates a script and a voiceover, although you can also record your own. And you can manually choose photos from Google Drive or Google Photos and drop them seamlessly into the video.

It all looks pretty slick, but it’s important to remember what Vids is not: a replacement for AI-powered video generation tools like OpenAI’s upcoming Sora or Runway’s Gen-2, which create videos from scratch from text prompts. Instead, Google Vids uses AI to understand your prompt, generate a script and a voiceover, and stitch together stock images, videos, music, transitions, and animations to create what is, effectively, a souped-up slide deck. And because Vids is part of Google Workspace, you can collaborate on it in real time, just like Google Docs, Sheets, and Slides.

Who asked for this? My guess is HR departments and chiefs of staff, who frequently need to create onboarding videos for new employees, announce company milestones, or put together training materials for teams. But if and when Google makes Vids available beyond Workspace, which is typically used by businesses, I can also see people using it outside of work, easily creating videos for a birthday party or a vacation from their own photos and videos.

Vids will be available in June and is first coming to Workspace Labs, which means you’ll need to opt in to test it. It’s not clear yet when it will be available more broadly.

This article originally appeared on Engadget at https://www.engadget.com/googles-new-ai-video-generator-is-more-hr-than-hollywood-120034992.html?src=rss