Instagram’s status update feature is coming to user profiles

Instagram’s status update feature, Notes, will soon be more prominent in the app. Up until now, Notes have only been visible from Instagram’s inbox, but the brief updates will soon also be visible directly on users’ profiles.

The change should increase the visibility of the feature and give people a new place to interact with their friends’ updates. (Instagram added reply functionality to Notes back in December.) The app is also experimenting with “prompts” for Notes, which will allow users to share questions for their friends to answer in their updates, much like the collaborative “add yours” templates for Stories.

Notes are similar to Stories in that the updates only stick around for 24 hours, though they are only visible to mutual followers, so they aren’t meant to be as widely shared as a typical grid or Stories post. The latest updates are another sign of how Meta has used the feature, first introduced in 2022, to encourage users to post more often for smaller, more curated groups of friends.

Separately, the app is also adding a new “cutouts” feature, which allows users to make stickers out of objects in their photos, much like the iOS sticker feature. On Instagram, these stickers can be shared in Stories or in a Reel. Cutouts can also be made from other users’ public posts, effectively giving people a new way to remix content from others. (Instagram’s help page notes that users can disable this feature if they prefer that their content not be reused.)

This article originally appeared on Engadget at https://www.engadget.com/instagrams-status-update-feature-is-coming-to-user-profiles-182621692.html?src=rss

Arturia stuffed almost all of its software emulations into this new keyboard

Arturia just released a new standalone synthesizer called the AstroLab. This 61-key stage keyboard is basically the company’s Analog Lab software in hardware form, which makes it perfect for live performances. The synth boasts ten dedicated sound engines and access to 35 virtual instruments, including the vast majority of the emulations found with the iconic V Collection. It also costs $2,000.

You could recreate this on the cheap by just buying some software instruments and a MIDI controller, but this is a stage keyboard. In other words, it has been designed with live performance in mind. The casing is durable and built to withstand the rigors of touring and there’s plenty of nifty sound design tools that should come in handy when gigging.

There are 12 insert FX options, with four control knobs, and the ability to loop any sound for up to 32 bars. The instrument even captures MIDI, so players can easily switch to another instrument and keep playing the same part. The multitimbral feature allows players to set a split point along the keyboard, making it easy to pull up two instruments at the same time. This is a big deal when playing live, as you never know how long a keyboard will take to load a preset.

If you want to get people dancing to the sound of a robot voice singing “around the world” over and over until 5 AM, AstroLab keyboards ship with a vocoder and a port to plug in a microphone. Of course, the synthesizer features the usual accouterments like mod wheels, an arpeggiator and various chord scale options. Finally, there’s an affiliated mobile app, AstroLab Connect, that lets users organize their presets and download new sounds from the store. The keyboard is available now through Arturia and various retailers.

This article originally appeared on Engadget at https://www.engadget.com/arturia-stuffed-almost-all-of-its-software-emulations-into-this-new-keyboard-190542557.html?src=rss

You can now lie down while using a Meta Quest 3 headset

Meta is rolling out the latest update for Meta Quest and, as always, there are some handy features. From now on, whenever you're livestreaming to the Meta Quest app, the broadcast will continue when you take the headset off. That should help avoid interruptions. There are some Quest 3-specific upgrades too, including the ability to use an external mic via the USB-C port, along with resolution and image quality improvements for the passthrough mixed reality feature.

That's not all, though. Quest 3 users will be able to take advantage of an experimental feature that allows them to make use of the headset while supine. If you enable the Use Apps While Lying Down option from the Experimental section of the Settings, you'll simply need to hold the menu button to reset your view when you lie down.

As such, you should be able to kick back and relax into immersive media and gaming experiences without having to keep your head upright. Turning your head to see what's going on elsewhere in the environment might be a bit more of a chore though.

Elsewhere, it'll now be easier to meet up with friends in Horizon Worlds, if any of your friends actually use that app. Whenever a buddy is in a public world with their location turned on, you can join them from the People app in the universal menu.

This article originally appeared on Engadget at https://www.engadget.com/you-can-now-lie-down-while-using-a-meta-quest-3-headset-164556039.html?src=rss

Google Gemini chatbots are coming to a customer service interaction near you

More and more companies are choosing to deploy AI-powered chatbots to deal with basic customer service inquiries. At the ongoing Google Cloud Next conference in Las Vegas, the company has revealed the Gemini-powered chatbots its partners are working on, some of which you could end up interacting with. Best Buy, for instance, is using Google's technology to build virtual assistants that can help you troubleshoot product issues and reschedule order deliveries. IHG Hotels & Resorts is working on another that can help you plan a vacation in its mobile app, while Mercedes-Benz is using Gemini to improve its own smart sales assistant.

Security company ADT is also building an agent that can help you set up your home security system. And if you happen to be a radiologist, you may end up interacting with Bayer's Gemini-powered apps for diagnosis assistance. Meanwhile, other partners are using Gemini to create experiences that aren't quite customer-facing: Cintas, Discover and Verizon are using generative AI capabilities in different ways to help their customer service personnel find information more quickly and easily. 

Google has launched the Vertex AI Agent Builder, as well, which it says will help developers "easily build and deploy enterprise-ready gen AI experiences," much as OpenAI's GPTs and Microsoft's Copilot Studio do. The Builder will provide developers with a set of tools they can use for their projects, including a no-code console that can understand natural language and build AI agents based on Gemini in minutes. Vertex AI has more advanced tools for more complex projects, of course, but their common goal is to simplify the creation and maintenance of personalized AI chatbots and experiences.

At the same event, Google also announced its new AI-powered video generator for Workspace, as well as its first ARM-based CPU specifically made for data centers. By launching the latter, it's taking on Amazon, which has been using its Graviton processor to power its cloud network over the past few years. 

This article originally appeared on Engadget at https://www.engadget.com/google-gemini-chatbots-are-coming-to-a-customer-service-interaction-near-you-120035393.html?src=rss

Google’s new AI video generator is more HR than Hollywood

For most of us, creating documents, spreadsheets and slide decks is an inescapable part of work life in 2024. What's not is creating videos. That’s something Google would like to change. On Tuesday, the company announced Google Vids, a video creation app for work that the company says can make everyone a “great storyteller” using the power of AI.

Vids uses Gemini, Google’s latest AI model, to quickly create videos for the workplace. Type in a prompt, feed in some documents, pictures, and videos, and sit back and relax as Vids generates an entire storyboard, script, music and voiceover. "As a storytelling medium, video has become ubiquitous for its immediacy and ability to ‘cut through the noise,’ but it can be daunting to know where to start," said Aparna Pappu, a Google vice president, in a blog post announcing the app. "Vids is your video, writing, production and editing assistant, all in one."

In a promotional video, Google uses Vids to create a video recapping moments from its Cloud Next conference in Las Vegas, an annual event during which it showed off the app. Based on a simple prompt telling it to create a recap video and attaching a document full of information about the event, Vids generates a narrative outline that can be edited. It then lets the user select a template for the video — you can choose between research proposal, new employee intro, team milestone, quarterly business update, and many more — and then crunches for a few moments before spitting out a first draft of a video, complete with a storyboard, stock media, music, transitions, and animation. It even generates a script and a voiceover, although you can also record your own. And you can manually choose photos from Google Drive or Google Photos to drop them seamlessly into the video.


It all looks pretty slick, but it’s important to remember what Vids is not: a replacement for AI-powered video generation tools like OpenAI’s upcoming Sora or Runway’s Gen-2 that create videos from scratch from text prompts. Instead, Google Vids uses AI to understand your prompt, generate a script and a voiceover, and stitch together stock images, videos, music, transitions, and animations to create what is, effectively, a souped-up slide deck. And because Vids is a part of Google Workspace, you can collaborate in real time just like in Google Docs, Sheets, and Slides.

Who asked for this? My guess is HR departments and chiefs of staff, who frequently need to create onboarding videos for new employees, announce company milestones, or create training materials for teams. But if and when Google chooses to make Vids available beyond Workspace, which is typically used by businesses, I can also see people using it outside of work, such as easily creating videos for a birthday party or a vacation from their own photos and clips.

Vids will be available in June and is first coming to Workspace Labs, which means you’ll need to opt in to test it. It’s not clear yet when it will be available more broadly.

This article originally appeared on Engadget at https://www.engadget.com/googles-new-ai-video-generator-is-more-hr-than-hollywood-120034992.html?src=rss

Apple officially allows retro game emulators on the App Store

In addition to updating its developer guidelines to allow music streaming apps to link to external websites, Apple has also added new language that allows game emulators on the App Store. The updated guidelines, first noticed by 9to5Mac, now say that retro gaming console emulator apps are welcome and can even offer downloadable games. Apple also reportedly confirmed to developers in an email that they can create and offer emulators on its marketplace.

Emulator software wasn't allowed on the App Store prior to this update, though developers have been finding ways to distribute it to iOS users. To install emulators, users usually had to resort to jailbreaking, sideloading tools or unsanctioned alternate app stores. This rule update potentially eliminates the need for users to go to those lengths and could bring to iOS more of the emulators that have long been common on Android.

Apple warns developers, however, that they "are responsible for all such software offered in [their] app, including ensuring that such software complies with these Guidelines and all applicable laws." Clearly, allowing emulators on the App Store doesn't mean Apple is also allowing pirated games. Any app offering titles for download that the developer doesn't own the rights to is a no-no, so fans of specific consoles will just have to hope that those consoles' makers plan to release official emulators for iOS. While these latest changes to Apple's developer guidelines seem to be motivated by the EU's Digital Markets Act regulation, which targets big tech companies' anti-competitive practices, the new rule on emulators applies to all developers worldwide.

This article originally appeared on Engadget at https://www.engadget.com/apple-officially-allows-retro-game-emulators-on-the-app-store-130044937.html?src=rss