Google is updating the Play Store with AI-powered app reviews and curated spaces

Google just announced a suite of updates to the Play Store in an attempt to make it more fun to use. This is part of a larger move by the company to turn its online marketplace into "an end-to-end experience that’s more than a store.” You read that right. They want us to hang out on Google Play.

Here’s what the company has planned. The update brings AI-generated review summaries that pull from user reviews to develop a consensus. You’ve likely already encountered this type of thing on Facebook and in Google Search. The company first announced this feature at this year’s I/O event.

Google is also using Gemini models to power auto-generated FAQs for each app. Additionally, there will be AI-generated highlights that offer a quick summary of a particular app. Google showed off a still image of this for a photo editing app, in which the highlights included the number of filters and layouts available, in addition to tools and sharing options. This AI approach will also let users quickly compare apps in similar categories.

Google’s also rolling out shared spaces on the Play Store. These aren’t communities or mini social networks, like Reddit or something, but rather splash pages for various topics of interest. The company started this project with a pilot involving cricket. The shared space gave users in India the ability to “explore all their cricket content from across various channels in one, convenient spot.” This included relevant videos, around 100 curated cricket-related apps and some simple user polls. The next curated space will be about Japanese manga. There has been no word as to when this feature will expand into multiple categories available to global users.

The entire “shopping for a new game to play” experience is also getting an upgrade, focused primarily on discovery. Google promises “enriched game details” pages, complete with YouTube videos from developers and clearly marked promotions, which reminds me of Steam. This even extends to the post-purchase experience, as returning users will see updated developer notes and a section for tips and tricks. The program is in early access and currently only available to English-language users. There are also some new games coming to Google’s oft-overlooked Play Pass, like Asphalt Legends Unite and Candy Crush Saga, and a feature that lets users play multiple games at once on PC.

Finally, there’s some personalization stuff in this update. The new Collections feature provides custom categories based on previously purchased apps. This means every user’s Google Play homescreen will look a little different, offering an easy way to continue binging a show or finishing a video game.

Many of these upgrades begin rolling out today, though some are still in the early access stage. Others, like the shared spaces feature, still have some kinks to work out.

This article originally appeared on Engadget at https://www.engadget.com/google-is-updating-the-play-store-with-ai-powered-app-reviews-and-curated-spaces-130036843.html?src=rss

YouTube Shorts gets Instagram’s ‘Add yours’ prompt stickers

YouTube is continuing its mission to compete with Instagram and TikTok. The platform has announced new features for Shorts, notably including an "Add yours" sticker. That's right, it seems to be pretty much the same tool that Instagram launched in 2021, which you'll have seen with prompts like "the most recent photo in your camera roll" or "the first photo of you and your partner."

In this case, YouTube recommends you create prompts like showing off your dog's newest trick to "inspire your audience" and spark "a chain reaction of adorable content." The sticker looks nearly identical to Instagram's, so if you use it there, it should be an easy transition. This new-to-YouTube sticker should roll out across Shorts over the next few weeks.

YouTube has also announced you can soon add and edit auto-generated captions on Shorts. You can choose between various colors and fonts to make captions blend in better with your videos. Again, you will recognize this from Instagram, but let's be honest: there's very little reinventing the wheel on social media these days, so it's nice to have options regardless of your platform of choice.

Similarly, YouTube is also rolling out a Text to Speech feature on Shorts. The tool lets you add text after recording a Short and then click the "Add voice" icon at the top left of your screen. From there, YouTube provides four voices you can pick from for the narration.

The last update coming to YouTube Shorts is for Android users. Soon, Auto layout will be available on Android, allowing you to track the subject of your video as you create it.

This article originally appeared on Engadget at https://www.engadget.com/youtube-shorts-gets-instagrams-add-yours-prompt-stickers-160002525.html?src=rss

YouTube Premium’s new features include picture-in-picture for YouTube Shorts

YouTube has recently launched a bunch of new features for Premium subscribers, including a quick way to skip the more boring parts of a video. When users double tap on a video, it will now skip ahead to what YouTube has marked as the more interesting portions of it based on a combination of AI and viewership data. The capability is now live in the US for Android users, though it's rolling out to iOS users in the coming weeks, as well. On Android, Premium subscribers can now also watch Shorts while checking their emails, browsing social media or doing things on other apps in general with the new picture-in-picture capability. 

Paying users will get access to the video hosting website's latest experimental features, as well. One of YouTube's newest test features is smart downloads for Shorts, which automatically saves the service's short-form videos on users' devices so they can watch them offline. In addition, Android users now have access to a conversational AI experience that can answer their questions and suggest related content without having to stop watching whatever's playing on their screens. It's limited to users in the US at the moment, however, and only for English videos that display an "Ask" button. Finally, Premium subscribers can access YouTube's redesigned watch page for the web, which apparently makes it easier to find related content.

YouTube Premium removes ads from videos and gives subscribers access to offline viewing, Music Premium and other perks. In February, the Google-owned video sharing platform reported that it hit 100 million subscribers for both Premium and Music offerings, but it's been trying to get more people to pay for its services. Aside from introducing new perks, it's also waging a war against ad blockers and recently started preventing ad-blocking apps on mobile from accessing its videos. 

This article originally appeared on Engadget at https://www.engadget.com/youtube-premiums-new-features-include-picture-in-picture-for-youtube-shorts-150029102.html?src=rss

Rabbit R1 security issue allegedly leaves sensitive user data accessible to anybody

The team behind Rabbitude, the community-formed reverse engineering project for the Rabbit R1, has revealed finding a security issue with the company's code that leaves users' sensitive information accessible to everyone. In an update posted on the Rabbitude website, the team said it gained access to the Rabbit codebase on May 16 and found "several critical hardcoded API keys." Those keys allow anybody to read every single response the R1 AI device has ever given, including those containing the users' personal information. They could also be used to brick R1 devices, alter R1's responses and replace the device's voice. 

The API keys they found authenticate users' access to ElevenLabs' text-to-speech service, Azure's speech-to-text system, Yelp (for review lookups) and Google Maps (for location lookups) on the R1 AI device. In a tweet, one of Rabbitude's members said that the company has known about the issue for the past month and "did nothing to fix it." After they posted, they said Rabbit revoked ElevenLabs' API key, though the update broke R1 devices for a bit.
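For readers unfamiliar with why hardcoded keys are so dangerous: a secret baked into shipped code can be extracted by anyone who obtains that code, exactly as Rabbitude did. Here's a minimal Python sketch of the anti-pattern and the conventional remedy — the function names and key value are hypothetical, not Rabbit's actual code:

```python
import os

# Anti-pattern: a key embedded as a string literal is visible to anyone who can
# read the code (or decompile the app), and rotating it means shipping a new build.
HARDCODED_TTS_KEY = "sk-live-0123456789abcdef"  # hypothetical placeholder value

def get_tts_key_insecure() -> str:
    return HARDCODED_TTS_KEY

# Conventional fix: read the secret from the environment (or a secrets manager)
# at runtime, so it never lands in source control and can be rotated server-side.
def get_tts_key(env_var: str = "ELEVENLABS_API_KEY") -> str:
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return key
```

In practice, a device-side product like the R1 would ideally keep such keys off the device entirely and proxy third-party calls through its own backend, so one leaked key can be revoked without bricking hardware.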

In a statement sent to Engadget, Rabbit said it was only made aware of an "alleged data breach" on June 25. "Our security team immediately began investigating it," the company continued. "As of right now, we are not aware of any customer data being leaked or any compromise to our systems. If we learn of any other relevant information, we will provide an update once we have more details." It didn't say if it revoked the keys the Rabbitude team said it found in the company's code. 

Rabbit's R1 is a standalone AI assistant device designed by Teenage Engineering. It's meant to help users accomplish certain tasks, like placing food delivery orders, as well as to quickly look up information like the weather. We gave it a pretty low score in our review, because we found that its AI functionality often didn't work. Further, users can simply use their phone instead of having to spend an extra $199 to buy the device.

This article originally appeared on Engadget at https://www.engadget.com/rabbit-r1-security-issue-allegedly-leaves-sensitive-user-data-accessible-to-anybody-120024215.html?src=rss

You can now restrict Instagram Lives to Close Friends

Instagram is rolling out another way for users to engage with a smaller group of friends and followers starting today. Close Friends on Instagram Live does what it says on the tin: you’ll be able to limit the viewership of livestreams to just your list of Close Friends. Up to three other people will be able to join your more-intimate broadcasts.

This could help users plan trips, collaborate on homework or simply catch up, Instagram suggests. The update will also give influencers an option for hosting livestreams for a private (and perhaps paid-up) audience.

Since November, users have been able to limit the reach of posts and Reels to their Close Friends. According to Instagram, users are looking for ways to connect with friends and followers more privately. The popularity of features like DMs, Close Friends and Notes attests to that.

Speaking of Notes, Instagram has flagged a couple of under-the-radar aspects of that feature that it introduced in recent months. You can now essentially post a video as a note. This will temporarily replace your profile photo. You’ll also see an Easter egg (in other words, confetti animations) when you wish a friend a happy birthday in a note. This will appear when you include the words “happy birthday” or use birthday-related words while @-mentioning a pal.

Last but not least, Instagram has introduced a welcome feed update. You now have the option to add music to carousel posts that include videos. Until now, it was only possible to add music to carousels composed solely of photos.

Instagram screenshots showing a music track being available on carousel feed posts that include videos.
Instagram

This article originally appeared on Engadget at https://www.engadget.com/you-can-now-restrict-instagram-lives-to-close-friends-150023794.html?src=rss

Adobe is updating its terms of service following a backlash over recent changes

Following customer outrage over its latest terms of service (ToS), Adobe is making updates to add more detail around areas like AI and content ownership, the company said in a blog post. "Your content is yours and will never be used to train any generative AI tool," wrote head of product Scott Belsky and VP of legal and policy Dana Rao.

Subscribers using products like Photoshop, Premiere Pro and Lightroom were incensed by new, vague language they interpreted to mean that Adobe could freely use their work to train the company's generative AI models. In other words, creators thought that Adobe could use AI to effectively rip off their work and then resell it. 

Other language was thought to mean that the company could actually take ownership of users' copyrighted material (understandably so, given how the language reads).

None of that was accurate, Adobe said, noting that the new terms of use were put in place for its product improvement program and content moderation for legal reasons, mostly around CSAM. However, many users didn't see it that way and Belsky admitted that the company "could have been clearer" with the updated ToS.

"In a world where customers are anxious about how their data is used, and how generative AI models are trained, it is the responsibility of companies that host customer data and content to declare their policies not just publicly, but in their legally binding Terms of Use," Belsky said. 

To that end, the company promised to overhaul the ToS using "more plain language and examples to help customers understand what [ToS clauses] mean and why we have them," it wrote.

Adobe didn't help its own cause by releasing an update on June 6th with some minor changes to the same vague language as the original ToS and no sign of an apology. That only seemed to fuel the fire more, with subscribers to its Creative Cloud service threatening to quit en masse. 

In addition, Adobe claims that it only trains its Firefly system on Adobe Stock images. However, multiple artists have noted that their names are used as search terms in Adobe's stock footage site, as Creative Bloq reported. The results yield AI-generated art that occasionally mimics the artists' styles. 

Its latest post is more of a true mea culpa with a detailed explanation of what it plans to change. Along with the AI and copyright areas, the company emphasized that users can opt out of its product improvement programs and that it will more "narrowly tailor" licenses to the activities required. It added that it only scans data on the cloud and never looks at locally stored content. Finally, Adobe said it will be listening to customer feedback around the new changes.

This article originally appeared on Engadget at https://www.engadget.com/adobe-is-updating-its-terms-of-service-following-a-backlash-over-recent-changes-120044152.html?src=rss

Google Sheets’ new tool lets you set specific rules for notifications

I'm the first to admit that the amount of joy Google Sheets brings me is a bit odd, but I use it for everything from tracking my earnings to planning trip budgets with friends. So, I'm excited to see that Google is making it easier to get notified about specific changes to my spreadsheet without me learning to code (something I've just never gotten into). The company has announced that Google Sheets is getting conditional notifications, meaning you can set rules in spreadsheets that send emails when certain things happen.

For example, you could set it to send you an email notification when a number drops below or rises above a certain amount, or when a column's value changes at all. You can also set rules that align more with a project management tool, like getting a notification when a task's status or owner changes. The tool only requires edit access: anyone with it can set up notifications for themselves or others by entering email addresses. Don't worry, you can unsubscribe if someone starts sending you unwanted notifications.

To use conditional notifications, go to Tools and then Conditional notifications, or just right-click in a cell. From there, click Add rule (you can name the rule or let Google auto-label it) and then select a custom range or column. You can add additional criteria for the rule, such as exactly what a cell should say for you to receive a notification. Then, you can manually input email addresses or select a column containing them. However, Google warns that if you do the latter, the number of cells must match the number included in the rule. So, if you have three cells in the rule, you can only highlight three cells with email addresses. If you get confused, Google gets into all the nitty-gritty of it here.
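Conceptually, each rule boils down to a predicate over a cell's value plus a list of recipients to email when it fires. A rough Python sketch of that logic, purely to illustrate how the feature behaves — this is not Google's implementation, and all names here are made up:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    """One conditional notification: fires when predicate(new_value) is True."""
    name: str
    predicate: Callable[[float], bool]
    recipients: list[str] = field(default_factory=list)

def triggered(rules: list[Rule], new_value: float) -> dict[str, list[str]]:
    """Return {rule name: recipients} for every rule the new value triggers."""
    return {r.name: r.recipients for r in rules if r.predicate(new_value)}

# Example rules mirroring the article's scenarios: a threshold crossing
# and an out-of-stock check.
rules = [
    Rule("budget overrun", lambda v: v > 1000, ["pm@example.com"]),
    Rule("stock depleted", lambda v: v <= 0, ["ops@example.com"]),
]
```

The real feature evaluates rules server-side on every edit and batches the resulting emails, but the rule-plus-recipients shape above is the mental model to keep in mind when setting one up.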

Google Sheets' conditional notifications are available to anyone on the following Workspace plans: Business Standard and Plus, Education Plus and Enterprise Starter, Standard, Plus or Essentials. The feature started rolling out for Rapid Release domains on June 4 and will begin showing up for Standard Release domains on June 18. In both cases, conditional notifications might take up to 15 days to appear.

This article originally appeared on Engadget at https://www.engadget.com/google-sheets-new-tool-lets-you-set-specific-rules-for-notifications-133030113.html?src=rss

Opera is adding Google’s Gemini AI to its browser

Opera users can already rely on the capabilities of OpenAI's large language models (LLMs) whenever they use the browser's Aria built-in AI assistant. But now, the company has also teamed up with Google to integrate its Gemini AI models into Aria. According to Opera, its Composer AI engine can process the user's intent based on their inquiry and then decide which model to use for each particular task.
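Opera hasn't published how Composer makes that decision, but intent-based routing is typically a classification step in front of a dispatch table. A hypothetical Python sketch, with made-up keywords and model identifiers, just to illustrate the pattern:

```python
# Map a user query to a backend model by classifying its intent.
# The keyword lists and model names below are illustrative only.
def route(query: str) -> str:
    q = query.lower()
    if any(word in q for word in ("draw", "image", "picture")):
        return "imagen-2"    # image-generation request
    if any(word in q for word in ("latest", "today", "current")):
        return "gemini"      # needs fresh information
    return "default-llm"     # everything else
```

Production routers generally use a small classifier model rather than keyword matching, but the dispatch structure is the same: classify once, then hand the query to whichever backend fits.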

Google called Gemini "the most capable model [it has] ever built" when it officially announced the LLM last year. Since then, the company has announced Gemini-powered features across its products and has built the Gemini AI chatbot right into Android. Opera said that thanks to Gemini's integration, its browser "will now be able to provide its users with the most current information, at high performance."

The company's partnership with Google also enables Aria to offer new experimental features as part of its AI Feature Drop program. Users who have the Opera One Developer version of the browser can try a new image generation feature powered by Google's Imagen 2 model for free — for instance, by asking Aria to "make an image of a dog on vacation at a beach having a drink." In addition, users can listen to Aria read out responses in a conversational tone using Google's text-to-audio model. If everything goes well during testing, Opera could roll out the features to everyone, though they may still change depending on early adopters' feedback.

This article originally appeared on Engadget at https://www.engadget.com/opera-is-adding-googles-gemini-ai-to-its-browser-120013023.html?src=rss

Microsoft unveils Team Copilot that can assist groups of users

At this year's Build event, Microsoft announced Team Copilot, and as you can probably guess from its name, it's a variant of the company's AI tool that caters to the needs of a group of users. It expands Copilot's abilities beyond those of a personal assistant, so that it can serve a whole team, a department or even an entire organization, the company said in its announcement. The new tool was designed to take on time-consuming tasks to free up personnel, such as managing meeting agendas and taking down minutes that group members can tweak as needed.

Team Copilot can also serve as a meeting moderator by summarizing important information for latecomers (or for reference after the fact) and answering questions. Finally, it can create and assign tasks in Planner, track their deadlines, and notify team members if they need to contribute to or review a certain task. These features will be available in preview across Copilot for Microsoft 365 — and will be accessible to those paying for its license — starting later this year.

In addition to Team Copilot, Microsoft has also announced new ways customers can personalize the AI assistant. Custom copilots that users create from SharePoint can be edited and improved further in Copilot Studio, where users can also make custom copilots that act as agents. The latter would allow companies and business owners to automate business processes, such as end-to-end order fulfillment. Finally, the debut of Copilot connectors in Studio will make it easier for developers to build Copilot extensions that can customize the AI tool's actions.

Update, May 21, 2024, 1:24AM ET: This story has been updated to clarify that Team Copilot is an assistant that can serve the needs of a group of users and is separate from Copilot for Teams.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-unveils-copilot-for-teams-153059261.html?src=rss

Google’s accessibility app Lookout can use your phone’s camera to find and recognize objects

Google has updated some of its accessibility apps to add capabilities that will make them easier to use for people who need them. It has rolled out a new version of the Lookout app, which can read text and even lengthy documents out loud for people with low vision or blindness. The app can also read food labels, recognize currency and can tell users what it sees through the camera and in an image. Its latest version comes with a new "Find" mode that allows users to choose from seven item categories, including seating, tables, vehicles, utensils and bathrooms.

When users choose a category, the app will be able to recognize associated objects as they move their camera around a room. It will then tell them the direction or distance to the object, making it easier for users to interact with their surroundings. Google has also launched an in-app capture button, so they can take photos and quickly get AI-generated descriptions.

A screenshot showing object categories in Google Lookout, such as Seating & Tables, Doors & Windows, Cups, etc.
Google

The company has updated its Look to Speak app, as well. Look to Speak enables users to communicate with other people by using eye gestures to select phrases from a list, which the app then speaks out loud. Now, Google has added a text-free mode that gives them the option to trigger speech by choosing from a photo book containing various emojis, symbols and photos. Even better, they can personalize what each symbol or image means to them.

Google has also expanded its screen reader capabilities for Lens in Maps, so that it can tell the user the names and categories of the places it sees, such as ATMs and restaurants. It can also tell them how far away a particular location is. In addition, it's rolling out improvements for detailed voice guidance, which provides audio prompts that tell the user where they're supposed to go. 

Finally, Google has made Maps' wheelchair information accessible on desktop, four years after it launched on Android and iOS. The Accessible Places feature allows users to see if the place they're visiting can accommodate their needs — businesses and public venues with an accessible entrance, for example, will show a wheelchair icon. They can also use the feature to see if a location has accessible washrooms, seating and parking. The company says Maps has accessibility information for over 50 million places at the moment. Those who prefer looking up wheelchair information on Android and iOS will now also be able to easily filter reviews focusing on wheelchair access. 

Google made all these announcements at this year's I/O developer conference, where it also revealed that it open-sourced more code for the Project Gameface hands-free "mouse," allowing Android developers to use it for their apps. The tool allows users to control the cursor with their head movements and facial gestures, so that they can more easily use their computers and phones. 

This article originally appeared on Engadget at https://www.engadget.com/googles-accessibility-app-lookout-can-use-your-phones-camera-to-find-and-recognize-objects-160007994.html?src=rss