Microsoft unveils Team Copilot that can assist groups of users

At this year's Build event, Microsoft announced Team Copilot, and as you can probably guess from its name, it's a variant of the company's AI tool that caters to the needs of a group of users. It expands Copilot's abilities beyond those of a personal assistant so that it can serve a whole team, a department or even an entire organization, the company said in its announcement. The new tool was designed to take on time-consuming tasks to free up personnel, such as managing meeting agendas and taking minutes that group members can tweak as needed. 

Team Copilot can also serve as a meeting moderator by summarizing important information for latecomers (or for reference after the fact) and answering questions. Finally, it can create and assign tasks in Planner, track their deadlines, and notify team members when they need to contribute to or review a task. These features will be available in preview across Copilot for Microsoft 365 — and will be accessible to those paying for a license — starting later this year.

In addition to Team Copilot, Microsoft also announced new ways customers can personalize the AI assistant. Custom copilots that users create from SharePoint can be edited and improved further in Copilot Studio, where users can also build custom copilots that act as agents. The latter would allow companies and business owners to automate business processes, such as end-to-end order fulfillment. Finally, the debut of Copilot connectors in Copilot Studio will make it easier for developers to build Copilot extensions that customize the AI tool's actions. 

Update, May 21, 2024, 1:24AM ET: This story has been updated to clarify that Team Copilot is an assistant that can serve the needs of a group of users and is separate from Copilot for Teams.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-unveils-copilot-for-teams-153059261.html?src=rss

Google’s accessibility app Lookout can use your phone’s camera to find and recognize objects

Google has updated some of its accessibility apps to add capabilities that will make them easier to use for people who need them. It has rolled out a new version of the Lookout app, which can read text and even lengthy documents out loud for people with low vision or blindness. The app can also read food labels, recognize currency and tell users what it sees through the camera or in an image. Its latest version comes with a new "Find" mode that allows users to choose from seven item categories, including seating, tables, vehicles, utensils and bathrooms.

When users choose a category, the app will be able to recognize objects associated with it as they move their camera around a room. It will then tell them the direction or distance to the object, making it easier for them to interact with their surroundings. Google has also launched an in-app capture button, so users can take photos and quickly get AI-generated descriptions. 
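
Google hasn't detailed how Lookout's Find mode works under the hood. As a rough sketch of the general idea, the Kotlin snippet below uses ML Kit's public object detector on a camera frame and converts each detection's bounding box into a coarse spoken left/center/right hint; the category filtering and the direction heuristic are our simplifications, not Lookout's implementation.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

// Rough sketch (not Lookout's implementation): detect objects in a camera frame
// and turn each bounding box into a coarse spoken direction hint.
private val detector = ObjectDetection.getClient(
    ObjectDetectorOptions.Builder()
        .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)  // live camera frames
        .enableClassification()                              // coarse object labels
        .build()
)

fun describeFrame(frame: Bitmap, rotationDegrees: Int, speak: (String) -> Unit) {
    val image = InputImage.fromBitmap(frame, rotationDegrees)
    detector.process(image)
        .addOnSuccessListener { objects ->
            for (obj in objects) {
                val centerX = obj.boundingBox.exactCenterX()
                // Split the frame into thirds to produce a left/center/right hint.
                val direction = when {
                    centerX < image.width / 3f -> "to your left"
                    centerX > image.width * 2 / 3f -> "to your right"
                    else -> "straight ahead"
                }
                val label = obj.labels.firstOrNull()?.text ?: "object"
                speak("$label $direction")  // hand off to the device's TTS engine
            }
        }
}
```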

Image: A screenshot showing object categories in Google Lookout, such as Seating & Tables, Doors & Windows, Cups, etc. (Google)

The company has updated its Look to Speak app as well. Look to Speak enables users to communicate with other people by using eye gestures to select phrases from a list, which the app then speaks out loud. Now, Google has added a text-free mode that gives them the option to trigger speech by choosing from a photo book containing various emojis, symbols and photos. Even better, they can personalize what each symbol or image means to them. 

Google has also expanded its screen reader capabilities for Lens in Maps, so that it can tell the user the names and categories of the places it sees, such as ATMs and restaurants. It can also tell them how far away a particular location is. In addition, it's rolling out improvements for detailed voice guidance, which provides audio prompts that tell the user where they're supposed to go. 

Finally, Google has made Maps' wheelchair information accessible on desktop, four years after it launched on Android and iOS. The Accessible Places feature allows users to see if the place they're visiting can accommodate their needs — businesses and public venues with an accessible entrance, for example, will show a wheelchair icon. They can also use the feature to see if a location has accessible washrooms, seating and parking. The company says Maps has accessibility information for over 50 million places at the moment. Those who prefer looking up wheelchair information on Android and iOS will now also be able to easily filter reviews focusing on wheelchair access. 

Google made all these announcements at this year's I/O developer conference, where it also revealed that it open-sourced more code for the Project Gameface hands-free "mouse," allowing Android developers to use it for their apps. The tool allows users to control the cursor with their head movements and facial gestures, so that they can more easily use their computers and phones. 
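
Google hasn't spelled out Gameface's internals here, but the basic idea, mapping a tracked facial landmark to cursor movement, can be sketched in a few lines. The snippet below is a minimal, hypothetical Kotlin illustration: it assumes some face-tracking library already supplies a normalized head or nose position each frame, and the `CursorMapper` class and `onLandmark` function are ours, not Gameface's.

```kotlin
// Hypothetical sketch: map a normalized facial landmark (x, y in 0..1)
// to screen cursor coordinates with sensitivity scaling and smoothing.
// The landmark source (a face-tracking library) is assumed, not shown.
class CursorMapper(
    private val screenWidth: Int,
    private val screenHeight: Int,
    private val sensitivity: Float = 1.5f,   // >1 amplifies small head movements
    private val smoothing: Float = 0.3f      // 0 = no smoothing, 1 = frozen
) {
    private var cursorX = screenWidth / 2f
    private var cursorY = screenHeight / 2f

    // Called once per tracked frame with the landmark's normalized position.
    fun onLandmark(normX: Float, normY: Float): Pair<Float, Float> {
        // Re-center around 0.5, amplify, then map back to screen space.
        val targetX = ((normX - 0.5f) * sensitivity + 0.5f) * screenWidth
        val targetY = ((normY - 0.5f) * sensitivity + 0.5f) * screenHeight

        // Exponential smoothing keeps the cursor from jittering.
        cursorX = (smoothing * cursorX + (1 - smoothing) * targetX).coerceIn(0f, screenWidth.toFloat())
        cursorY = (smoothing * cursorY + (1 - smoothing) * targetY).coerceIn(0f, screenHeight.toFloat())
        return cursorX to cursorY
    }
}
```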

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-accessibility-app-lookout-can-use-your-phones-camera-to-find-and-recognize-objects-160007994.html?src=rss

Google announces new scam detection tools that provide real-time alerts during phone calls

Google just announced scam detection tools coming to Android phones later this year, which is a good thing, as scammers keep getting better and better at parting people from their money. The toolset, revealed at Google I/O 2024, is still in the testing stages but uses AI to suss out fraudsters in the middle of a conversation.

You read that right. The AI will be constantly on the hunt for conversation patterns commonly associated with scams. Once detected, you’ll receive a real-time alert on the phone, putting to bed any worries that the person on the other end is actually heading over to deliver a court summons or whatever.

Google gives the example of a “bank representative” asking for personal information, like PINs and passwords. Real banks rarely make such requests, so the AI would flag them and issue an alert. Everything happens on the device, so the conversation stays private. The feature isn’t coming to Android 15 right away, and the company says it’ll share more details later in the year. We do know that people will have to opt in to use the tool. 
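
Google hasn't said how its detection works beyond looking for "conversation patterns," so the sketch below is only a naive illustration of the shape of the problem: a running transcript is checked on-device against a few patterns commonly associated with scams, and a callback fires an alert. The class and pattern list are hypothetical, and a real system would use a trained on-device model (Gemini Nano, in Google's case), not keyword matching.

```kotlin
// Naive, hypothetical sketch of on-device scam flagging over a live transcript.
// A real implementation would use an on-device ML model, not regex heuristics.
class ScamCallMonitor(private val onAlert: (String) -> Unit) {
    private val transcript = StringBuilder()
    private var alerted = false

    // Phrases loosely associated with common phone scams (illustrative only).
    private val suspiciousPatterns = listOf(
        Regex("""(give|tell|read)\s+me\s+your\s+(pin|password|one[- ]time code)""", RegexOption.IGNORE_CASE),
        Regex("""transfer\s+(your\s+)?(funds|money)\s+to""", RegexOption.IGNORE_CASE),
        Regex("""buy\s+gift\s+cards?""", RegexOption.IGNORE_CASE)
    )

    // Feed each chunk of transcribed speech as it arrives; everything stays local.
    fun onTranscriptChunk(chunk: String) {
        if (alerted) return
        transcript.append(' ').append(chunk)
        if (suspiciousPatterns.any { it.containsMatchIn(transcript) }) {
            alerted = true
            onAlert("This call looks like a scam: a legitimate bank won't ask for this.")
        }
    }
}

fun main() {
    val monitor = ScamCallMonitor { warning -> println("ALERT: $warning") }
    monitor.onTranscriptChunk("Hello, this is your bank,")
    monitor.onTranscriptChunk("please tell me your PIN to verify your account.")
}
```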

Google made a big move with Android 15, bringing its Gemini AI on-device instead of requiring a connection to the cloud. In addition to this scam detection tech, onboard AI will allow for many more features, like contextual awareness when using apps.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-announces-new-scam-detection-tools-that-provide-real-time-alerts-during-phone-calls-181442091.html?src=rss

Google’s Gemini Nano brings better image-description smarts to its TalkBack vision tool

The Google I/O event is here, and the company is announcing lots of great updates for your Android device. As we heard earlier, Gemini Nano is getting multimodal support, meaning your Android will still process text but will also have a better understanding of other inputs, like sights, sounds and spoken language. Now Google has shared that the new model is also coming to its TalkBack feature.

TalkBack is an existing tool that reads aloud a description of an image, whether it's one you captured or from the internet. Gemini Nano's multimodal support should provide a more detailed understanding of the image. According to Google, TalkBack users encounter about 90 images each day that don't have a label. Gemini Nano should be able to provide missing information, such as what an item of clothing looks like or the details of a new photo sent by a friend. 

Gemini Nano works directly on a person's device, meaning it should still function properly without any network connection. While we don't yet have an exact date for when it will arrive, Google says TalkBack will get Gemini Nano's updated features later this year.
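
Google hasn't published the TalkBack integration details, so this is only a conceptual sketch of the fallback flow: when an image has no author-provided label, ask an on-device model for a generated description. The `OnDeviceImageDescriber` interface and `describeImage` call below are hypothetical stand-ins for whatever API Gemini Nano ends up exposing.

```kotlin
import android.graphics.Bitmap

// Hypothetical stand-in for an on-device multimodal model such as Gemini Nano.
// The real API will differ; this only illustrates the fallback flow.
interface OnDeviceImageDescriber {
    suspend fun describeImage(image: Bitmap): String
}

// Sketch of how a screen reader might pick what to announce for an image:
// prefer an author-provided label, otherwise ask the on-device model.
suspend fun announceImage(
    existingLabel: String?,          // e.g. contentDescription / alt text, if any
    image: Bitmap,
    describer: OnDeviceImageDescriber,
    speak: (String) -> Unit          // hands text to the device's TTS engine
) {
    val description = existingLabel?.takeIf { it.isNotBlank() }
        ?: describer.describeImage(image)   // generated locally, no network needed
    speak(description)
}
```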

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-nano-brings-better-image-description-smarts-to-its-talkback-vision-tool-180759598.html?src=rss

Google’s Project Astra uses your phone’s camera and AI to find noise makers, misplaced items and more

When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today, at I/O 2024, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra. 

The company couldn't even wait for today's keynote to tease Project Astra, posting a video of a camera-based AI app to its social media accounts yesterday. At its keynote today, though, Google DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front. 

In a video that Google showed during a media briefing yesterday, Project Astra appeared to be an app with a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and said, "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded, "I see a speaker, which makes sound."

The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded "That is the tweeter. It produces high-frequency sounds."

Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked "Give me a creative alliteration about these," to which Gemini said "Creative crayons color cheerfully. They certainly craft colorful creations."

The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor and telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and were not previously pointed out. "Yes, I do," Gemini said, adding "Your glasses were on a desk near a red apple."

After Astra located those glasses, the tester put them on and the video shifted to the perspective of what you'd see on the wearable. Using a camera onboard, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said "Adding a cache between the server and database could improve speed."
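
That cache suggestion is a textbook read-through pattern: the application checks a fast in-memory store before hitting the database. As a quick illustration (not anything shown in the demo), here's a minimal Kotlin sketch, with a hypothetical `loadFromDatabase` function standing in for the slow query.

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Minimal read-through cache sketch: check memory first, fall back to the
// database, then remember the result. `loadFromDatabase` is a hypothetical
// stand-in for a slow query in the diagrammed system.
class ReadThroughCache<K, V>(private val loadFromDatabase: (K) -> V) {
    private val cache = ConcurrentHashMap<K, V>()

    fun get(key: K): V =
        cache.getOrPut(key) { loadFromDatabase(key) }  // only queries the DB on a miss
}

fun main() {
    var dbCalls = 0
    val cache = ReadThroughCache<String, String> { id ->
        dbCalls++
        "row-for-$id"                    // pretend this is an expensive query
    }
    cache.get("user:42")                 // miss: hits the "database"
    cache.get("user:42")                 // hit: served from memory
    println("database calls: $dbCalls")  // prints 1
}
```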

The tester then looked over to a pair of cats doodled on the board and asked "What does this remind you of?" Astra said "Schrodinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever and asked for "a band name for this duo." Astra dutifully replied "Golden stripes."

This means that not only was Astra processing visual data in real time, it was also remembering what it saw and working with an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall."
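
Hassabis's description (continuously encode frames, merge them with speech into a timeline, cache for recall) maps onto a fairly simple data structure. The Kotlin sketch below is purely conceptual: the event types, the fixed-size buffer and the keyword-based `recall` are our assumptions, not DeepMind's design, but it shows how a bounded timeline of timestamped video and speech events could answer a question like "where did you last see my glasses?"

```kotlin
import java.util.ArrayDeque

// Conceptual sketch of a bounded timeline of multimodal events, loosely
// following Hassabis's description; none of this is DeepMind's actual design.
sealed interface AstraEvent {
    val timestampMs: Long
    val description: String          // e.g. a caption produced by a vision encoder
}
data class FrameEvent(override val timestampMs: Long, override val description: String) : AstraEvent
data class SpeechEvent(override val timestampMs: Long, override val description: String) : AstraEvent

class EventTimeline(private val capacity: Int = 1_000) {
    private val events = ArrayDeque<AstraEvent>()

    // Add an event, evicting the oldest once the cache is full.
    fun record(event: AstraEvent) {
        if (events.size == capacity) events.removeFirst()
        events.addLast(event)
    }

    // Naive recall: return the most recent event mentioning the query term.
    // A real agent would use learned representations, not substring search.
    fun recall(query: String): AstraEvent? =
        events.lastOrNull { it.description.contains(query, ignoreCase = true) }
}

fun main() {
    val timeline = EventTimeline()
    timeline.record(FrameEvent(1_000, "glasses on a desk near a red apple"))
    timeline.record(SpeechEvent(2_000, "user asked for a band name"))
    println(timeline.recall("glasses")?.description)  // glasses on a desk near a red apple
}
```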

It's also worth noting that, at least in the video, Astra responded quickly. Hassabis noted in a blog post that "While we’ve made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge."

Google has also been working on giving its AI more vocal expressiveness, using its speech models to enhance "how they sound, giving the agents a wider range of intonations." This sort of mimicry of human expressiveness in responses is reminiscent of Duplex's pauses and utterances, which led people to think Google's AI might be a candidate for the Turing test.

While Astra remains an early feature with no discernible plans for launch, Hassabis wrote that in the future, these assistants could be available "through your phone or glasses." No word yet on whether those glasses are an actual product or a successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products, like the Gemini app, later this year."

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-project-astra-uses-your-phones-camera-and-ai-to-find-noise-makers-misplaced-items-and-more-172642329.html?src=rss

Instagram’s ‘Add Yours’ sticker now lets you share songs

Instagram just announced some new features coming to Stories, including a suite of interactive stickers. The music one is perhaps the most interesting, as it's an extension of the pre-existing Add Yours feature. The Add Yours Music sticker lets users share their favorite songs, along with a prompt for followers to get in on the fun by sharing their own related tracks. Of course, the song already has to be in Instagram’s music library for this to work.

To that end, Instagram has partnered with Dua Lipa to promote her new album, Radical Optimism. Many of the songs from the album are available for use in this way, and the artist herself has been posting Stories with Add Yours Music stickers.

Image: The Reveal sticker in action. (Instagram)

Another nifty sticker added today is called Reveal. Opting for this sticker blurs the visuals of a story post and the only way followers can see the content is to DM the person who shared it. Direct messages have become a key factor behind Instagram’s continued growth, with site head Adam Mosseri stating that teens actually spend more time in DMs than anywhere else on the platform.

He also says that “virtually all” engagement growth over the past few years has come from DMs and Stories, according to reporting by Business Insider. So, yeah, this will most definitely be used as a hack by savvy creators looking to boost their engagement. The thirst traps will be thirstier and trappier than ever before.

Image: The Frames sticker in action. (Instagram)

Instagram has also unveiled a sticker called Frames. This tool throws a Polaroid-esque overlay over a photo, turning it into an instant-print-style image. To reveal the contents, followers will have to channel Andre 3000 and shake their phones like a Polaroid picture, though there’s also a button. Creators can add captions, which are also revealed upon shaking. This feature debuted at this year’s Coachella festival.
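
Instagram hasn't said how it detects the shake, but on Android the usual approach is to watch the accelerometer for a spike well above normal gravity. The Kotlin sketch below is a generic illustration of that pattern; the threshold, debounce value and `onShake` callback are our choices, not Instagram's.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.sqrt

// Generic shake detector: fires `onShake` when acceleration (relative to gravity)
// exceeds a threshold. The threshold and debounce values are illustrative only.
class ShakeDetector(context: Context, private val onShake: () -> Unit) : SensorEventListener {
    private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
    private var lastShakeMs = 0L

    fun start() = sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_UI)
    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val (x, y, z) = event.values
        // gForce is ~1.0 at rest; a vigorous shake pushes it well above that.
        val gForce = sqrt(x * x + y * y + z * z) / SensorManager.GRAVITY_EARTH
        val now = System.currentTimeMillis()
        if (gForce > 2.5f && now - lastShakeMs > 1_000) {  // debounce repeat triggers
            lastShakeMs = now
            onShake()  // e.g. remove the Frames overlay and reveal the photo
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```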

Image: The Instagram Cutouts sticker in action. (Instagram)

Finally, there’s a feature called Cutouts. This tool turns any part of a video or photo in your camera roll into a sticker, which can then be applied to a story or reel. Once a cutout is created, it gets saved into an easily accessible sticker tray for future use. This also works with photos posted to Instagram, though the pictures have to be shared by public accounts.
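
Instagram hasn't described how Cutouts isolates the subject, but conceptually it's image segmentation followed by masking. The Kotlin sketch below shows only the second half: given a mask bitmap from some on-device segmentation model (the mask source is assumed, not shown), it punches the subject out of the original photo with standard Android compositing.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.PorterDuff
import android.graphics.PorterDuffXfermode

// Given a photo and a same-sized mask whose alpha channel is opaque over the
// subject and transparent elsewhere, produce a "cutout" with the background
// removed. How the mask is produced (an on-device segmentation model) is assumed.
fun applyCutoutMask(photo: Bitmap, mask: Bitmap): Bitmap {
    val result = Bitmap.createBitmap(photo.width, photo.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(result)

    // Draw the original photo first...
    canvas.drawBitmap(photo, 0f, 0f, null)

    // ...then keep only the pixels where the mask is opaque (DST_IN compositing).
    val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        xfermode = PorterDuffXfermode(PorterDuff.Mode.DST_IN)
    }
    canvas.drawBitmap(mask, 0f, 0f, paint)
    return result
}
```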

This has been a big month of changes for Instagram. In addition to the aforementioned new sticker systems, the social media app recently overhauled its algorithm to boost original content and deemphasize aggregator accounts. The company also changed the way Reels works to give smaller accounts a chance to expand their reach, though it remains unclear how this works. Instagram has also recently made Meta’s AI chatbot available in DMs, if you want some confident, yet absolutely wrong, answers to questions.

This article originally appeared on Engadget at https://www.engadget.com/instagrams-add-yours-sticker-now-lets-you-share-songs-180730795.html?src=rss

Snapchat will finally let you edit your chats

Snapchat will finally join most of its messaging app peers and allow users to edit their chats. The feature, which will be rolling out “soon,” will initially be limited to Snapchat+ subscribers, the company said.

With the change, Snapchat users will have a five-minute window to rephrase their message, fix typos or otherwise edit their chats. Messages that have been edited will carry a label indicating the text has been changed. The company didn’t say when the feature might be available to more of its users, but it often brings sought-after features to its subscription service first. Snap announced last week that Snapchat+, which costs $3.99 a month, had reached 9 million subscribers.
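
The five-minute rule is simple enough to express directly. As a trivial illustration (nothing to do with Snap's actual backend), an edit is allowed only while the message is younger than the window, and an edited flag drives the label:

```kotlin
import java.time.Duration
import java.time.Instant

// Trivial illustration of a five-minute edit window; not Snap's implementation.
data class ChatMessage(
    val text: String,
    val sentAt: Instant,
    val edited: Boolean = false
) {
    fun canEdit(now: Instant = Instant.now(), window: Duration = Duration.ofMinutes(5)): Boolean =
        Duration.between(sentAt, now) <= window

    // Returns the updated message (flagged as edited), or null if the window has closed.
    fun edit(newText: String, now: Instant = Instant.now()): ChatMessage? =
        if (canEdit(now)) copy(text = newText, edited = true) else null
}
```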

The app is also adding several non-exclusive features, including updated emoji reactions for chats, the ability to use the My AI assistant to set reminders and AI-generated outfits for Bitmoji. Snap also showed off a new AI lens that transforms users’ selfies into '90s-themed snapshots (just don’t look too closely at the wireless headphones appearing in many of the images).

This article originally appeared on Engadget at https://www.engadget.com/snapchat-will-finally-let-you-edit-your-chats-223643771.html?src=rss