Apple brings eye-tracking to recent iPhones and iPads

Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are aimed at people with disabilities, but they also have broader applications. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, along with customizable vocal shortcuts, music haptics, vehicle motion cues and more. 

The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. 

That pause to select is something Apple calls Dwell Control, which is already available elsewhere in the company's ecosystem, such as in the Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI interprets your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS, like AssistiveTouch. Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, so the news today is the ability to do so without extra hardware.
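
To make the dwell idea concrete, here's a minimal Python sketch of a "linger to select" loop. It is not Apple's implementation; gaze_point, ui_element_at and select are hypothetical stand-ins for an eye-tracking feed, a UI hit test and an activation handler.

```python
import time

DWELL_SECONDS = 1.0  # how long a gaze must linger on one element before it is selected

def run_dwell_loop(gaze_point, ui_element_at, select, poll_hz=30):
    """Poll the gaze position and call select() when it lingers on a single element.

    gaze_point() -> (x, y) screen coordinates from an eye tracker (hypothetical stub).
    ui_element_at((x, y)) -> the UI element under that point, or None (hypothetical stub).
    select(element) -> activates the element, e.g. taps a button (hypothetical stub).
    """
    current, dwell_start = None, None
    while True:
        element = ui_element_at(gaze_point())
        if element != current:
            # Gaze moved to a new element (or off the UI): restart the dwell timer.
            current, dwell_start = element, time.monotonic()
        elif current is not None and time.monotonic() - dwell_start >= DWELL_SECONDS:
            select(current)  # the lingered-on item is activated
            current, dwell_start = None, None
        time.sleep(1 / poll_hz)
```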

Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once set up, you won't have to first ask Siri to be ready. 

Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.

To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.

For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too. 

Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.

Image: A graphic demonstrating Vehicle Motion Cues on an iPhone, with a drawing of a car and motion arrows. (Apple)

For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion the device detects. If the car moves forward or accelerates, the dots will sway backwards, as if reacting to the increase in speed in that direction.
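
As a rough illustration of that mapping, here's a small Python sketch that converts a forward-acceleration reading into a sway offset for the edge dots. It's only a sketch of the general idea, not Apple's implementation; the gain and clamp values are assumptions.

```python
def dot_offset(forward_accel_ms2, max_offset_px=24.0, gain_px_per_ms2=8.0):
    """Map forward acceleration (m/s^2) to a sway offset (px) for the edge dots.

    Positive acceleration (speeding up) pushes the dots backwards on screen,
    mirroring how loose objects in a car appear to slide toward the rear.
    The gain and clamp are made-up values for the example.
    """
    offset = -forward_accel_ms2 * gain_px_per_ms2
    return max(-max_offset_px, min(max_offset_px, offset))

# Example: the car accelerates at 2 m/s^2, so the dots sway 16 px rearward.
print(dot_offset(2.0))  # -16.0
```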

There are plenty more features coming to the company's suite of products, including Live Captions in visionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use AssistiveTouch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made such features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools will be officially released with the next version of iOS.

This article originally appeared on Engadget at https://www.engadget.com/apple-brings-eye-tracking-to-recent-iphones-and-ipads-140012990.html?src=rss

Google’s Project Gameface hands-free ‘mouse’ launches on Android

At last year's Google I/O developer conference, the company introduced Project Gameface, a hands-free gaming "mouse" that allows users to control a computer's cursor with movements of their head and facial gestures. This year, Google has announced that it has open-sourced more code for Project Gameface, allowing developers to build Android applications that can use the technology. 

The tool relies on the phone's front camera to track facial expressions and head movements, which can be used to control a virtual cursor. A user could smile to "select" items onscreen, for instance, or raise their left eyebrow to go back to the home screen on an Android phone. In addition, users can set thresholds or gesture sizes for each expression, so that they can control how prominent their expressions should be to trigger a specific mouse action. 
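
Here's a minimal Python sketch of that threshold idea. The expression names, scores and action bindings below are illustrative, not Gameface's actual configuration; the open-sourced project maps facial "blendshape" scores from a face-tracking model to cursor actions in a similar spirit.

```python
# Per-gesture sensitivity: an action fires only when the expression's score
# (a value in [0, 1] from a face-tracking model) clears the user's threshold.
THRESHOLDS = {
    "mouthSmile": 0.6,   # smile to click (illustrative binding)
    "browUpLeft": 0.5,   # raise left eyebrow to go home (illustrative binding)
}

ACTIONS = {
    "mouthSmile": "left_click",
    "browUpLeft": "go_home",
}

def triggered_actions(scores):
    """Return the actions whose expression scores exceed their thresholds."""
    return [ACTIONS[name] for name, threshold in THRESHOLDS.items()
            if scores.get(name, 0.0) >= threshold]

# A frame where the user smiles strongly but barely moves the eyebrow:
print(triggered_actions({"mouthSmile": 0.8, "browUpLeft": 0.2}))  # ['left_click']
```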

The company developed Project Gameface with gaming streamer Lance Carr, who has muscular dystrophy, a condition that weakens the muscles. Carr used a head-tracking mouse to game before a fire destroyed his home, along with his expensive equipment. The early version of Project Gameface focused on gaming and used a webcam to detect facial expressions, though Google knew from the start that the technology had plenty of other potential uses. 

For the tool's Android launch, Google teamed up with an Indian organization called Incluzza that supports people with disabilities. The partnership gave the company the chance to learn how Project Gameface can help people with disabilities further their studies, communicate with friends and family more easily and find jobs online. Google has released the project's open source code on GitHub and is hoping that more developers decide to "leverage it to build new experiences."

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-project-gameface-hands-free-mouse-launches-on-android-123029158.html?src=rss

The Morning After: The biggest news from Google’s I/O keynote

Google boss Sundar Pichai wrapped up the company’s I/O developer conference by noting its almost two-hour presentation had mentioned AI 121 times. It was everywhere.

Google’s newest AI model, Gemini 1.5 Flash, is built for speed and efficiency. The company said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro to build AI-powered apps and services.

Google says it’ll double Gemini’s context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code or 1.4 million-plus words at the same time.

But the bigger news is how the company is sewing AI into all the things you’re already using. With search, it’ll be able to answer your complex questions (a la Copilot in Bing), but for now, you’ll have to sign up to the company’s Search Labs to try that out. AI-generated answers will also appear alongside typical search results, just in case the AI knows better.

Google Photos was already pretty smart at searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question, like “show me the best photo from each national park I’ve visited.” You can also ask Google Photos to generate captions for you.

And if you have an Android phone, Gemini is being integrated directly into the device. Gemini will know the app, image or video you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions, like how to change settings or maybe even who’s displayed on screen. 

While these were the bigger beats, there was an awful lot to chew over. Check out all the headlines right here.

— Mat Smith

Google wants you to relax and have a natural chat with Gemini Live

Google Pixel 8a review

Google unveils Veo and Imagen 3, its latest AI media creation models

You can get these reports delivered daily direct to your inbox. Subscribe right here!

One of Google’s bigger projects is its visual multimodal AI assistant, currently called Project Astra. It taps into your smartphone (or smart glasses) camera and can contextually analyze and answer questions on the things it sees. Project Astra can offer silly wordplay suggestions, as well as identify and define the things it sees. A video demo shows Project Astra identifying the tweeter part of a speaker. It’s equal parts impressive and, well, familiar. We tested it out, right here.

Continue reading.

The increasingly unhinged world of X (Twitter) now considers the term ‘cisgender’ a slur. Owner Elon Musk posted last June, to the delight of his unhingiest users, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X reportedly began posting an official warning. A quick reminder: It’s not a slur.

Continue reading.

Ilya Sutskever announced on X, formerly Twitter, that he’s leaving OpenAI almost a decade after he co-founded the company. He’s confident OpenAI “will build [artificial general intelligence] that is both safe and beneficial” under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company’s biggest scandal last year: Sutskever, who was a board member at the time, was involved in the brief ouster of Altman and the removal of Brockman from the board.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-the-biggest-news-from-googles-io-keynote-111531702.html?src=rss

Watch the Google I/O 2024 Developer keynote live

Editor’s note (5/14/24): The main Google I/O keynote has ended, but the Google I/O Developer Keynote is now underway. Watch it below. 

It’s that time of year again. Google’s annual I/O keynote is upon us. This event is likely to be packed with updates and announcements. We’ll be covering all of the news as it happens and you can stream the full event below. The keynote starts at 1PM ET on May 14 and streams are available via YouTube and the company’s hub page.

In terms of what to expect, the rumor mill has been working overtime. There are multiple reports that the event will largely focus on the Android 15 mobile operating system, which seems like a given since I/O is primarily an event for developers and the beta version is already out in the wild.

So let’s talk about the Android 15 beta and what to expect from the full release. The beta includes an updated Privacy Sandbox, partial screen sharing (to record a specific app or window instead of the whole screen) and system-level app archiving to free up space. There’s also improved satellite connectivity, additional in-app camera controls and a new power-efficiency mode.

Despite the beta already existing, it’s highly probable that Google will drop some surprise Android 15 announcements. The company has confirmed that satellite messaging is coming to Android, so maybe that’ll be part of this event. Rumors also suggest that Android 15 will boast a redesigned status bar and an easier way to monitor battery health.

Image: An Android phone. (Sam Rutherford/Engadget)

Android 15 won’t be the only thing Google discusses during the event. There’s a little acronym called AI you may have heard about and the company has gone all in. It’s a good bet that Google will spend a fair amount of time announcing updates for its Gemini AI, which could eventually replace Assistant entirely.

Back in December, it was reported that Google was working on an AI assistant called Pixie as an exclusive feature for Pixel devices. The branding is certainly on point. We could hear more about that, as it may debut in the Pixel 9 later this year. 

Google’s most popular products could also get AI-focused redesigns, including Search, Chrome, G Suite and Maps. We might get an update as to what the company plans on doing about third-party cookies and maybe it’ll throw some AI at that problem too.

What not to expect? Don’t get your hopes up for a Pixel 9 or refreshed Pixel Fold for this event, as I/O is more for software than hardware. We’ll likely get details on those releases in the fall. However, rules were made to be broken. Last year, we got a Pixel Fold announcement at I/O, so maybe the line between hardware and software is blurring. We’ll find out soon.

This article originally appeared on Engadget at https://www.engadget.com/how-to-watch-googles-io-2024-keynote-160010787.html?src=rss

Google I/O 2024: Everything revealed including Gemini AI, Android 15 and more

At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates from Google's big event, along with some additional announcements that came after the keynote.

Gemini Pro

Google announced a brand-new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company’s smallest model, which runs locally on device. Google said it created Flash because developers wanted a lighter and less expensive model than Gemini Pro for building AI-powered apps and services, while keeping features like the one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, which means it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
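
For a rough sense of scale, the per-unit token rates implied by those figures can be derived with a little arithmetic. The Python lines below simply divide the stated two-million-token budget by the stated media lengths, so they are approximations drawn from the article's own numbers, not official rates.

```python
CONTEXT_TOKENS = 2_000_000  # the doubled context window Google announced

# Implied token rates, derived from the stated capacity figures (approximate).
print(round(CONTEXT_TOKENS / (2 * 3600)))    # ~278 tokens per second of video
print(round(CONTEXT_TOKENS / (22 * 3600)))   # ~25 tokens per second of audio
print(round(CONTEXT_TOKENS / 60_000))        # ~33 tokens per line of code
print(round(CONTEXT_TOKENS / 1_400_000, 1))  # ~1.4 tokens per word
```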

Project Astra

Google showed off Project Astra, an early version of a universal assistant powered by AI that Google’s DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”

In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, and out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, Astra correctly tells the user where she left her glasses earlier, without her ever having brought up the glasses.

The video ends with a twist — when the user finds and wears the missing glasses, we learn that they have an onboard camera system and are capable of using Project Astra to seamlessly carry on a conversation with the user, perhaps indicating that Google might be working on a competitor to Meta’s Ray-Ban smart glasses.

Ask Photos

Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited" when the feature rolls out over the next few months. Google Photos will use GPS information as well as its own judgement of what is “best” to present you with options. You can also ask Google Photos to generate captions to post the photos to social media.

Veo

Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that last “beyond a minute,” Google said, and it can understand cinematic concepts like a time-lapse.

Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its previous version, Imagen 2. The result is the company’s “highest quality” text-to-image model, with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.

Google Search

Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and the option to use Search to plan meals and vacations, won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.

But a big new feature that Google calls AI Overviews, which the company has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says it will bring the feature to more than a billion users around the world by the end of the year.

Gemini on Android

Google is integrating Gemini directly into Android. When Android 15 releases later this year, Gemini will be aware of the app, image or video that you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.

Google isn't quite ready to roll out the latest version of its smartwatch OS, but it is promising some major battery life improvements when it comes. The company said that Wear OS 5 will consume 20 percent less power than Wear OS 4 when a user runs a marathon. Wear OS 4 already brought battery life improvements to smartwatches that support it, but it could still be a lot better at managing a device's power. Google also provided developers with a new guide on how to conserve power and battery, so that they can create more efficient apps.

Android 15's developer preview may have been rolling out for months, but there are still features to come. Theft Detection Lock is a new Android 15 feature that will use AI (there it is again) to detect phone thefts and lock things up accordingly. Google says its algorithms can detect motions associated with theft, like someone grabbing the phone and bolting, biking or driving away. If an Android 15 handset pinpoints one of these situations, the phone’s screen will quickly lock, making it much harder for the phone snatcher to access your data.
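
To illustrate the general shape of such a detector, and emphatically not Google's model, which it hasn't detailed, here is a toy Python heuristic: a sharp acceleration spike that looks like a snatch, followed by sustained fast motion, triggers an immediate lock. The threshold values are invented for the example.

```python
SNATCH_SPIKE_MS2 = 25.0  # sudden jerk threshold, in m/s^2 (assumed value)
FLEE_SPEED_MS = 3.0      # sustained speed suggesting the phone is fleeing (assumed value)

def should_lock(accel_samples_ms2, speed_ms):
    """Return True when a snatch-like spike is followed by fast, sustained motion."""
    spike = max(accel_samples_ms2, default=0.0) >= SNATCH_SPIKE_MS2
    return spike and speed_ms >= FLEE_SPEED_MS

# A jolt of 31 m/s^2 followed by movement at 4.2 m/s looks like a grab-and-run.
print(should_lock([2.1, 31.0, 12.4], speed_ms=4.2))  # True -> lock the screen
```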

There were a bunch of other updates too. Google said it would add digital watermarks to AI-generated video and text, make Gemini accessible in the side panel in Gmail and Docs, power a virtual AI teammate in Workspace, listen in on phone calls and detect if you’re being scammed in real time, and a lot more.


Catch up on all the news from Google I/O 2024 right here!

Update May 15, 2:45PM ET: This story was updated after being published to include details on new Android 15 and Wear OS 5 announcements made following the I/O 2024 keynote.

This article originally appeared on Engadget at https://www.engadget.com/google-io-2024-everything-revealed-including-gemini-ai-android-15-and-more-210414423.html?src=rss

Google expands digital watermarks to AI-made video and text

As Google starts to make its latest video-generation tools available, the company says it has a plan to ensure transparency around the origins of its increasingly realistic AI-generated clips. All video made by the company’s new Veo model in the VideoFX app will have digital watermarks thanks to Google’s SynthID system. Furthermore, SynthID will be able to watermark AI-generated text that comes from Gemini.

SynthID is Google’s digital watermarking system that started rolling out to AI-generated images last year. The tech embeds imperceptible watermarks into AI-made content so that AI detection tools can recognize that the content was generated by AI. Considering that Veo, the company’s latest video generation model previewed onstage at I/O, can create longer and higher-res clips than what was previously possible, tracking the source of such content will be increasingly important.

As generative AI models advance, more companies have turned to watermarking amid fears that AI could fuel a new wave of misinformation. Watermarking systems would give platforms like Google a framework for detecting AI-generated content that may otherwise be impossible to distinguish. TikTok and Meta have also recently announced plans to support similar detection tools on their platforms and label more AI content in their apps.

Of course, there are still significant questions about whether digital watermarks on their own offer sufficient protection against deceptive AI content. Researchers have shown that watermarks can be easy to evade. But making AI-made content detectable in some way is an important first step toward transparency.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-expands-digital-watermarks-to-ai-made-video-175232320.html?src=rss

Gemini will be accessible in the side panel on Google apps like Gmail and Docs

Google is adding Gemini-powered AI automation to more tasks in Workspace. In its Tuesday Google I/O keynote, the company said its advanced Gemini 1.5 Pro will soon be available in the Workspace side panel as “the connective tissue across multiple applications with AI-powered workflows”: AI that grows more intelligent, learns more about you and automates more of your workflow.

Gemini’s job in Workspace is to save you the time and effort of digging through files, emails and other data from multiple apps. “Workspace in the Gemini era will continue to unlock new ways of getting things done,” Google Workspace VP Aparna Pappu said at the event.

The refreshed Workspace side panel, coming first to Gmail, Docs, Sheets, Slides and Drive, will let you chat with Gemini about your content. Its longer context window (essentially, its memory) allows it to organize, understand and contextualize data from different apps without you having to leave the one you’re in. This includes things like comparing receipt attachments, summarizing (and answering back-and-forth questions about) long email threads, or highlighting key points from meeting recordings.

Another example Google provided was planning a family reunion when your grandmother asks for hotel information. With the Workspace side panel, you can ask Gemini to find the Google Doc with the booking information by using the prompt, “What is the hotel name and sales manager email listed in @Family Reunion 2024?” Google says it will find the document and give you a quick answer, allowing you to insert it into your reply as you save time by faking human authenticity for poor Grandma.

The email-based changes are coming to the Gmail mobile app, too. “Gemini will soon be able to analyze email threads and provide a summarized view with the key highlights directly in the Gmail app, just as you can in the side panel,” the company said.

Summarizing in the Gmail app is coming to Workspace Labs this month. Meanwhile, the upgraded Workspace side panel will arrive starting Tuesday for Workspace Labs and Gemini for Workspace Alpha users. Google says all the features will arrive for the rest of Workspace customers and Google One AI Premium users next month.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/gemini-will-be-accessible-in-the-side-panel-on-google-apps-like-gmail-and-docs-185406695.html?src=rss