We just wrapped up coverage on Google's I/O 2024 keynote, and we're just so tired of hearing about AI. In this bonus episode, Cherlynn and Devindra dive into the biggest I/O news: Google's intriguing Project Astra AI assistant; new models for creating video and images; and some improvements to Gemini AI. While some of the announcements seem potentially useful, it's still tough to tell if the move towards AI will actually help consumers, or if Google is just fighting to stay ahead of OpenAI.
Listen below or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcast, Engadget News!
Hosts: Cherlynn Low and Devindra Hardawar
Music: Dale North
This article originally appeared on Engadget at https://www.engadget.com/engadget-podcast-the-good-the-bad-and-the-ai-of-google-io-2024-221741082.html?src=rss
Editor’s note (5/14/24): The main Google I/O keynote has ended, but the Google I/O Developer Keynote is now underway. Watch it below.
It’s that time of year again. Google’s annual I/O keynote is upon us. This event is likely to be packed with updates and announcements. We’ll be covering all of the news as it happens and you can stream the full event below. The keynote starts at 1PM ET on May 14 and streams are available via YouTube and the company’s hub page.
In terms of what to expect, the rumor mill has been working overtime. There are multiple reports that the event will largely focus on the Android 15 mobile operating system, which seems like a given since I/O is primarily an event for developers and the beta version is already out in the wild.
So let’s talk about the Android 15 beta and what to expect from the full release. The beta includes an updated Privacy Sandbox feature, partial screen sharing to record a specific app or window instead of the whole screen, and system-level app archiving to free up space. There’s also improved satellite connectivity, additional in-app camera controls and a new power efficiency mode.
Despite the beta already existing, it’s highly probable that Google will drop some surprise Android 15 announcements. The company has confirmed that satellite messaging is coming to Android, so maybe that’ll be part of this event. Rumors also suggest that Android 15 will boast a redesigned status bar and an easier way to monitor battery health.
Sam Rutherford/Engadget
Android 15 won’t be the only thing Google discusses during the event. There’s a little acronym called AI you may have heard about, and the company has gone all in. It’s a good bet that Google will spend a fair amount of time announcing updates for its Gemini AI, which could eventually replace Assistant entirely.
Back in December, it was reported that Google was working on an AI assistant called Pixie as an exclusive feature for Pixel devices. The branding is certainly on point. We could hear more about that, as it may debut in the Pixel 9 later this year.
Google’s most popular products could also get AI-focused redesigns, including Search, Chrome, Google Workspace and Maps. We might get an update as to what the company plans on doing about third-party cookies, and maybe it’ll throw some AI at that problem too.
What not to expect? Don’t get your hopes up for a Pixel 9 or refreshed Pixel Fold for this event, as I/O is more for software than hardware. We’ll likely get details on those releases in the fall. However, rules were made to be broken. Last year, we got a Pixel Fold announcement at I/O, so maybe the line between hardware and software is blurring. We’ll find out soon.
This article originally appeared on Engadget at https://www.engadget.com/how-to-watch-googles-io-2024-keynote-160010787.html?src=rss
The increasingly discriminatory X (Twitter) now considers the term “cisgender” a slur. Owner Elon Musk posted last June, to the delight of his bigoted brigade of blue-check sycophants, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X made good on the regressive provocateur’s stance and reportedly began posting an official warning that the LGBTQ-inclusive terms could result in a ban from the platform. Not that you’d miss much.
TechCrunch reported on Tuesday that trying to publish a post using the terms “cisgender” or “cis” in the X mobile app will pop up a full-screen warning reading, “This post contains language that may be considered a slur by X and could be used in a harmful manner in violation of our rules.” It then gives you the choice of continuing to publish the post or conforming to the backward views of the worst of us and deleting it.
Of course, neither form of the term cisgender is a slur.
As the historically marginalized transgender community finally began finding at least a sliver of widespread and long overdue social acceptance in the 21st century, the term became more commonly used in the mainstream lexicon to describe people whose gender identity matches their sex assigned at birth. Organizations including the American Psychological Association, World Health Organization, American Medical Association and American Psychiatric Association recognize the term.
But some people have a hard time accepting and respecting that some humans are different from others. Those fantasizing (against all evidence and scientific consensus) that the heteronormative ideals they grew up with are absolute gospel sometimes take great offense at being asked to adjust their vocabulary to communicate respect for a community that has spent centuries forced to live in the shadows or risk their safety due to the widespread pathologization of their identities.
Musk seems to consider those the good ol’ days.
This isn’t the billionaire’s first ride on the Transphobe Train. After his backward tweet last June (on the first day of Pride Month, no less), the edgelord’s platform ran a timeline takeover ad from a right-wing nonprofit, plugging a transphobic propaganda film. In case you’re wondering if the group may have anything of value to say, TechCrunch notes that the same organization also doubts climate change and downplays the dehumanizing atrocities of slavery.
At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates from Google's big event, along with some additional announcements that came after the keynote.
Gemini 1.5 Flash and updates to Gemini 1.5 Pro
Google
Google announced a brand new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company’s smallest model, which runs locally on device. Google said it created Flash because developers wanted a lighter and less expensive model than Gemini Pro for building AI-powered apps and services, while keeping features like the one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, which means it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
Project Astra
Google
Google showed off Project Astra, an early version of a universal assistant powered by AI that Google’s DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”
In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, the view out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, the assistant correctly tells the user where she left her glasses, without her ever having brought them up.
The video ends with a twist — when the user finds and wears the missing glasses, we learn that they have an onboard camera system and can use Project Astra to seamlessly carry on the conversation, perhaps indicating that Google is working on a competitor to Meta’s Ray-Ban smart glasses.
Ask Google Photos
Google
Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited” when the feature rolls out over the next few months. Google Photos will use GPS information as well as its own judgment of what is “best” to present you with options. You can also ask Google Photos to generate captions for posting the photos to social media.
Veo and Imagen 3
Google
Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that can last “beyond a minute,” Google said, and it can understand cinematic concepts like a timelapse.
Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its predecessor, Imagen 2. The result is the company’s “highest quality” text-to-image model, with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.
Big updates to Google Search
Google
Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and the option to use Search to plan meals and vacations, won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.
But one big new feature, which Google calls AI Overviews and has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says it will bring the feature to more than a billion users around the world by the end of the year.
Gemini on Android
Google
Google is integrating Gemini directly into Android. When Android 15 releases later this year, Gemini will be aware of the app, image or video that you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.
Wear OS 5 battery life improvements
Google isn't quite ready to roll out the latest version of its smartwatch OS, but it is promising some major battery life improvements when it arrives. The company said that Wear OS 5 will consume 20 percent less power than Wear OS 4 when a user runs a marathon. Wear OS 4 already brought battery life improvements to the smartwatches that support it, but it could still be a lot better at managing a device's power. Google also gave developers a new guide on conserving power and battery, so that they can create more efficient apps.
Android 15 anti-theft features
Android 15's developer preview may have been rolling out for months, but there are still features to come. Theft Detection Lock is a new Android 15 feature that will use AI (there it is again) to detect phone thefts and lock things up accordingly. Google says its algorithms can recognize motions associated with theft, like grabbing the phone and bolting, biking or driving away. If an Android 15 handset detects one of these situations, the phone's screen will quickly lock, making it much harder for the phone snatcher to access your data.
Catch up on all the news from Google I/O 2024 right here!
Update May 15, 2:45PM ET: This story was updated after being published to include details on new Android 15 and WearOS 5 announcements made following the I/O 2024 keynote.
This article originally appeared on Engadget at https://www.engadget.com/google-io-2024-everything-revealed-including-gemini-ai-android-15-and-more-210414423.html?src=rss
While most players complete the main story in four to six hours, it hasn't taken long for speedrunners to figure out how to blaze through solo developer Billy Basso's eerie labyrinth. YouTubers are already posting runs of under five minutes and the any% record (i.e. the best recorded time without any restrictions) is being smashed over and over.
Within a couple of hours of Hubert0987 claiming the world record with a 4:44 run on Thursday, The DemonSlayer6669 appeared to snag bragging rights with one that was 18 seconds faster and perhaps the first recorded sub-4:30 time. (Don't watch the video just yet if you haven't beaten the game and would like to avoid spoilers.)
Animal Well hasn't even been out for a week, so you can expect records to keep tumbling as runners optimize routes to the game's final plunger. It's cool to already see a speedrunning community form around a new game as skilled players duke it out, perhaps for the chance to show off their skills at the next big Games Done Quick event.
This article originally appeared on Engadget at https://www.engadget.com/animal-well-speedrunners-are-already-beating-the-game-in-under-five-minutes-195259598.html?src=rss
Mercedes-Benz has been a name synonymous with panache and luxury ever since it was established in 1926. Headquartered in Stuttgart, Germany, the automotive giant has set the bar high for four-wheelers of the present and the future. The AMG GT, introduced in 2014, is still one of the most beloved supercars in the industry, and the Concept VISION AVTR is already setting a precedent for the electric cars of the future.
While the brand is one of the few big names leaning into electric concept designs, it’s understandable that many concept designers gravitate toward the Mercedes name when building the imaginative four-wheelers that could someday land them at one of the renowned brand’s design studios. The Mercedes-Benz Dresscode is one such iteration, with a unique take on what a hypercar of the future could look like.
The design direction of the car interprets iconic luxury through the shape of a collar and the rich volumes of formal dress. Look closely and the hypercar adopts the form of a white shirt and tie with a black jacket layered over it: the white sections, with their edged surfaces, represent the shirt, while the black body wraps the entire car in a large volume reminiscent of a jacket. The rear of the vehicle is like the back of a person wearing a suit: simple and chic. And unlike other supercars with flashy gull-wing (or scissor) doors, which can be tricky to get in and out of, the Dresscode concept’s doors are designed to feel like taking off a suit: elegant and easy.
The side profile of the car is inspired by the seam lines on the shoulder of a jacket, flowing from front to rear, and the wheel covers rotate along these lines. The seam stitches on the shoulder of a suit jacket are also reinterpreted as Mercedes patterns across the hypercar. In a true sense, this is a tailored Mercedes concept, maintaining an aggressive yet elegant stance with its dynamic shape.
As Google starts to make its latest video-generation tools available, the company says it has a plan to ensure transparency around the origins of its increasingly realistic AI-generated clips. All video made by the company’s new Veo model in the VideoFX app will have digital watermarks thanks to Google’s SynthID system. Furthermore, SynthID will be able to watermark AI-generated text that comes from Gemini.
SynthID is Google’s digital watermarking system that started rolling out to AI-generated images last year. The tech embeds imperceptible watermarks into AI-made content so that AI detection tools can recognize that the content was generated by AI. Considering that Veo, the company’s latest video generation model previewed onstage at I/O, can create longer and higher-res clips than what was previously possible, tracking the source of such content will be increasingly important.
As generative AI models advance, more companies have turned to watermarking amid fears that AI could fuel a new wave of misinformation. Watermarking systems would give platforms like Google a framework for detecting AI-generated content that may otherwise be impossible to distinguish. TikTok and Meta have also recently announced plans to support similar detection tools on their platforms and label more AI content in their apps.
Of course, there are still significant questions about whether digital watermarks on their own offer sufficient protection against deceptive AI content. Researchers have shown that watermarks can be easy to evade. But making AI-made content detectable in some way is an important first step toward transparency.
Catch up on all the news from Google I/O 2024 right here!
This article originally appeared on Engadget at https://www.engadget.com/google-expands-digital-watermarks-to-ai-made-video-175232320.html?src=rss