Android’s Circle to Search can now help students solve math and physics homework

Google has introduced another capability for its Circle to Search feature at the company's annual I/O developer conference, and it's something that could help students better understand potentially difficult class topics. The feature will now be able to show them step-by-step instructions for a "range of physics and math word problems." They just have to activate the feature by long-pressing the home button or navigation bar and then circling the problem that's got them stumped, though some math problems will require users to be signed up for Google's experimental Search Labs feature.

The company says Circle to Search's new capability was made possible by its new family of AI models called LearnLM, which was specifically created and fine-tuned for learning. It's also planning to make adjustments to this particular capability and to roll out an upgraded version later this year that could solve even more complex problems "involving symbolic formulas, diagrams, graphs and more." Google launched Circle to Search earlier this year at a Samsung Unpacked event, where the feature debuted on the Galaxy S24 series, as well as on Pixel 8 devices. It's now also out for the Galaxy S23, Galaxy S22, Z Fold, Z Flip, Pixel 6 and Pixel 7 devices, and it'll likely make its way to more hardware in the future. 

In addition to the new Circle to Search capability, Google has also revealed that devices that can support the Gemini for Android chatbot assistant will now be able to bring it up as an overlay on top of the application that's currently open. Users can then drag and drop images straight from the overlay into apps like Gmail, for instance, or use the overlay to look up information without having to swipe away from whatever they're doing. They can tap "Ask this video" to find specific information within a YouTube video that's open, and if they have access to Gemini Advanced, they can use the "Ask this PDF" option to find information from within lengthy documents. 

Google is also rolling out multimodal capabilities to Nano, the smallest model in the Gemini family that can process information on-device. The updated Gemini Nano, which will be able to process sights, sounds and spoken language, is coming to Google's TalkBack screen reader later this year. Gemini Nano will enable TalkBack to describe images onscreen more quickly and even without an internet connection. Finally, Google is currently testing a Gemini Nano feature that can alert users while a call is ongoing if it detects common conversation patterns associated with scams. Users will be alerted, for instance, if they're talking to someone asking them for their PINs or passwords or to someone asking them to buy gift cards. 

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/androids-circle-to-search-can-now-help-students-solve-math-and-physics-homework-180223229.html?src=rss

Google’s Gemini will search your videos to help you solve problems

As part of its push toward adding generative AI to search, Google has introduced a new twist: video. Gemini will let you upload video that demonstrates an issue you're trying to resolve, then scour user forums and other areas of the internet to find a solution. 

As an example, Google's Rose Yao talked onstage at I/O 2024 about a used turntable she bought and how she couldn't get the needle to sit on the record. Yao uploaded a video showing the issue, then Gemini quickly found an explainer describing how to balance the arm on that particular make and model. 

"Search is so much more than just words in a text box. Often the questions you have are about the things you see around you, including objects in motion," Google wrote. "Searching with video saves you the time and trouble of finding the right words to describe this issue, and you’ll get an AI Overview with steps and resources to troubleshoot."

If the video alone doesn't make it clear what you're trying to figure out, you can add text or draw arrows that point to the issue in question. 

OpenAI just introduced GPT-4o with the ability to interpret live video in real time, then describe a scene or even sing a song about it. Google, however, is taking a different tack with video by focusing on its Search product for now. Searching with video is coming to Search Labs users in the US, in English, to start with, but will expand to more regions over time, the company said.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-will-search-your-videos-to-help-you-solve-problems-175235105.html?src=rss

Google just snuck a pair of AR glasses into a Project Astra demo at I/O

In a video showcasing the prowess of Google's new Project Astra experience at I/O 2024, an unnamed person demonstrating asked Gemini "do you remember where you saw my glasses?" The AI impressively responded "Yes, I do. Your glasses were on a desk near a red apple," despite said object not actually being in view when the question was asked. But these glasses weren't your bog-standard assistive vision aid; these had a camera onboard and some sort of visual interface!

The tester picked up their glasses and put them on, and proceeded to ask the AI more questions about things they were looking at. Clearly, there is a camera on the device that's helping it take in the surroundings, and we were shown some sort of interface where a waveform moved to indicate it was listening. Onscreen captions appeared to reflect the answer that was being read aloud to the wearer, as well. So if we're keeping track, that's at least a microphone and speaker onboard too, along with some kind of processor and battery to power the whole thing. 

We only caught a brief glimpse of the wearable, but from the sneaky seconds it was in view, a few things were evident. The glasses had a simple black frame and didn't look at all like Google Glass. They didn't appear very bulky, either. 

In all likelihood, Google is not ready to actually launch a pair of glasses at I/O. It breezed right past the wearable's appearance and barely mentioned them, only to say that Project Astra and the company's vision of "universal agents" could come to devices like our phones or glasses. We don't know much else at the moment, but if you've been mourning Google Glass or the company's other failed wearable products, this might instill some hope yet.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-just-snuck-a-pair-of-ar-glasses-into-a-project-astra-demo-at-io-172824539.html?src=rss

Google’s Project Astra uses your phone’s camera and AI to find noise makers, misplaced items and more.

When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today, at I/O 2024, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra. 

The company couldn't even wait till its keynote today to tease Project Astra, posting a video to its social media of a camera-based AI app yesterday. At its keynote today, though, Google's DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front. 

According to a video that Google showed during a media briefing yesterday, Project Astra appears to be an app with a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and said, "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded "I see a speaker, which makes sound."

The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded "That is the tweeter. It produces high-frequency sounds."

Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked "Give me a creative alliteration about these," to which Gemini said "Creative crayons color cheerfully. They certainly craft colorful creations."

The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor and telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and had not previously been pointed out. "Yes, I do," Gemini said, adding "Your glasses were on a desk near a red apple."

After Astra located those glasses, the tester put them on and the video shifted to the perspective of what you'd see on the wearable. Using a camera onboard, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said "Adding a cache between the server and database could improve speed."
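
The cache suggestion in the demo refers to a standard systems pattern. As a rough sketch only (the store, keys and function names here are invented for illustration, not anything from Google's demo), a read-through cache between an app and a slow backing store looks like this:

```python
import time

DATABASE = {"user:1": "Ada", "user:2": "Grace"}  # stand-in for a slow store

def slow_db_read(key):
    time.sleep(0.01)  # simulate database latency
    return DATABASE.get(key)

cache = {}

def cached_read(key):
    """Read-through cache: serve from memory when possible,
    fall back to the database and remember the result."""
    if key not in cache:
        cache[key] = slow_db_read(key)
    return cache[key]

cached_read("user:1")           # first read pays the database's latency
print(cached_read("user:1"))    # repeat read is served from memory: Ada
```

Repeat reads skip the simulated latency entirely, which is the speedup Astra was gesturing at on the whiteboard.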

The tester then looked over to a pair of cats doodled on the board and asked "What does this remind you of?" Astra said "Schrodinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever and asked for "a band name for this duo." Astra dutifully replied "Golden stripes."

This means that not only was Astra processing visual data in real time, it was also remembering what it saw and working with an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall."
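
Hassabis's description maps onto a familiar event-log pattern. Purely as an illustration of the idea he outlines (not a reflection of Astra's actual implementation), a timeline of multimodal events with a bounded cache and recall might be sketched like this:

```python
import time
from collections import deque

class EventTimeline:
    """Toy version of the pattern Hassabis describes: fold incoming
    frames and speech into one time-ordered log, keep a bounded cache,
    and answer "do you remember..." queries by scanning it backwards."""

    def __init__(self, max_events=1000):
        self.events = deque(maxlen=max_events)  # bounded cache

    def observe(self, modality, description, timestamp=None):
        self.events.append({
            "t": timestamp if timestamp is not None else time.time(),
            "modality": modality,        # e.g. "video" or "speech"
            "description": description,  # stand-in for a real encoding
        })

    def recall(self, keyword):
        """Return the most recent event mentioning the keyword, if any."""
        for event in reversed(self.events):
            if keyword in event["description"]:
                return event
        return None

timeline = EventTimeline()
timeline.observe("video", "glasses on a desk near a red apple", timestamp=1.0)
timeline.observe("video", "speaker next to a monitor", timestamp=2.0)
print(timeline.recall("glasses")["description"])
# prints: glasses on a desk near a red apple
```

A real system would store learned embeddings rather than strings, but the shape of the trick is the same: the answer to "where are my glasses?" is a lookup into cached history, not a fresh observation.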

It's also worth noting that, at least in the video, Astra responded quickly. Hassabis noted in a blog post that "While we’ve made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge."

Google has also been working on giving its AI a greater range of vocal expression, using its speech models to enhance "how they sound, giving the agents a wider range of intonations." This sort of mimicry of human expressiveness is reminiscent of Duplex, whose pauses and utterances led people to think Google's AI might be a candidate for the Turing test.

While Astra remains an early feature with no discernible launch plans, Hassabis wrote that, in the future, these assistants could be available "through your phone or glasses." No word yet on whether those glasses are an actual product or the successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products, like the Gemini app, later this year."

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-project-astra-uses-your-phones-camera-and-ai-to-find-noise-makers-misplaced-items-and-more-172642329.html?src=rss

Google’s new Gemini 1.5 Flash AI model is lighter than Gemini Pro and more accessible

Google announced updates to its Gemini family of AI models at I/O, the company’s annual conference for developers, on Tuesday. It’s rolling out a new model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency.

“[Gemini] 1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more,” wrote Demis Hassabis, CEO of Google DeepMind, in a blog post. Hassabis added that Google created Gemini 1.5 Flash because developers needed a model that was lighter and less expensive than the Pro version, which Google announced in February. Gemini 1.5 Pro is more efficient and powerful than the company’s original Gemini model announced late last year.

Gemini 1.5 Flash sits between Gemini 1.5 Pro and Gemini Nano, Google’s smallest model, which runs locally on devices. Despite being lighter weight than Gemini 1.5 Pro, however, it is just as powerful. Google said that this was achieved through a process called “distillation,” where the most essential knowledge and skills from Gemini 1.5 Pro were transferred to the smaller model. This means that Gemini 1.5 Flash will get the same multimodal capabilities as Pro, as well as its long context window – the amount of data that an AI model can ingest at once – of one million tokens. This, according to Google, means that Gemini 1.5 Flash will be capable of analyzing a 1,500-page document or a codebase with more than 30,000 lines at once. 
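
Google hasn’t published details of how it distilled Flash, but distillation as a general technique is well documented: a small “student” model is trained to match the softened output distribution of a large “teacher.” A minimal sketch of that training objective, with toy logits standing in for real model outputs, might look like:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution; a higher
    temperature "softens" the distribution, exposing more of the
    teacher's knowledge about near-miss answers."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and
    the student's -- the quantity a distilled model learns to minimize."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy logits for one prediction from a "large" and a "small" model:
teacher = [4.1, 1.2, 0.3]
student = [3.5, 1.0, 0.6]
print(round(distillation_loss(teacher, student), 4))
# a small positive number; zero would mean a perfect match
```

Training nudges the student's logits until this loss approaches zero across many examples, which is how a lighter model can inherit much of a heavier model's behavior.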

Gemini 1.5 Flash (like the rest of these models) isn’t really meant for consumers. Instead, it’s a faster and less expensive option for developers building their own AI products and services on top of tech designed by Google.

In addition to launching Gemini 1.5 Flash, Google is also upgrading Gemini 1.5 Pro. The company said that it had “enhanced” the model’s abilities to write code, reason and parse audio and images. But the biggest update is yet to come – Google announced it will double the model’s existing context window to two million tokens later this year. That would make it capable of processing two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
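
Those figures are easy to sanity-check: 1.4 million words in two million tokens works out to roughly 0.7 words per token, which is in line with commonly cited averages for English text. A quick back-of-the-envelope check (the ratio is an assumption for illustration, not a number from Google):

```python
tokens = 2_000_000      # Gemini 1.5 Pro's upcoming context window
words_per_token = 0.7   # rough English average; an assumption, not Google's figure

print(f"{round(tokens * words_per_token):,} words")  # prints: 1,400,000 words
```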

Both Gemini 1.5 Flash and Pro are now available in public preview in Google’s AI Studio and Vertex AI. The company also announced today a new version of its Gemma open model, called Gemma 2. But unless you’re a developer or someone who likes to tinker around with building AI apps and services, these updates aren’t really meant for the average consumer.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-new-gemini-15-flash-ai-model-is-lighter-than-gemini-pro-and-more-accessible-172353657.html?src=rss

Apple’s M1 iPad Air drops to a new low of $399

Apple’s M1 iPad Air has dropped to a new low price of $399, just as the latest model prepares to hit store shelves. This sale is from Amazon and it doesn’t include every color, though both blue and purple are covered by this steep discount. The other colors are also on sale, but the deals aren’t quite as spicy. Amazon’s sale is for the base 64GB model.

This device tops our list of the best iPads, though that’s likely to change once the new models enter the chat. No matter what happens with our list in the future, however, this is still a powerful and highly capable tablet with plenty of bells and whistles. We love the gorgeous screen, which is a serious step up from the bottom rung 10th-gen iPad. This one also gets you a more powerful chip.

We also enjoyed the form factor. It’s called the iPad Air and it shows. This is a lighter-than-average tablet that’s easy to hold and maneuver, even for long periods of time. The M1 chip is powerful enough to handle just about any app or game you throw at it and the 10.9-inch display is bright, sharp and accurate. It’s pretty much the Platonic ideal of a tablet. We even called it “the closest to being universally appealing and the best iPad for most people.” 

There’s no Face ID, which isn’t a huge deal by my estimation as tablets are harder than phones to wrangle into that sweet spot for a quick facial scan. The 64GB of available storage is also on the smaller side, making this device more of a content consumption machine than anything else. The only major downside is that the new iPad Air is a hair better in just about every aspect, though it’s also at least $200 more expensive.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/apples-m1-ipad-air-drops-to-a-new-low-of-399-155816959.html?src=rss

One of our favorite Roku streaming sticks is on sale for only $34

The Roku Streaming Stick 4K is on sale via Amazon for just $34, which is a savings of 32 percent and one of the best prices we’ve seen all year. As the name suggests, this is a streaming stick that provides 4K visuals and ships with a voice remote that works with Siri, Alexa and Hey Google. Of course, this remote also has buttons.

The stick easily made our list of the best streaming devices, for a great many reasons. We were impressed by the sheer amount of free and live content available via Roku’s ecosystem. There’s a diverse array of free linear channels and video-on-demand (VOD) services here, with thousands of series and films to choose from. Not having to pony up for yet another subscription is always nice.

The Roku Streaming Stick 4K can also access all of those paid subscription services, from Disney+ to Peacock and beyond. The interface is uncluttered and easy to navigate, with a simple content list at the left and an app grid on the right. In addition to 4K, the device supports HDR10+ and Dolby Vision. The player even supports Apple AirPlay 2 for streaming audio and video from a tablet or phone.

If we had to nitpick, and that’s pretty much our job, the device’s What to Watch menu prioritizes the aforementioned free content over titles pulled from paid apps. It’d be nice if things were a bit more even, just in case people need a little reminder to finish Sugar on Apple TV+ or Shogun on Hulu. However, it’s tough to be too miffed, as free content is where this Roku device really shines.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/one-of-our-favorite-roku-streaming-sticks-is-on-sale-for-only-34-145718364.html?src=rss

The Morning After: Our verdict on the new iPad Pro

Apple’s new iPad Pro is one of the most divisive (and thinnest) devices the company has made in years. Sure, it’s an undeniable feat of engineering and thinner than an iPod nano. Apple squeezed a new M4 chip and “tandem” OLED panel into its latest flagship tablet.

The new OLED enables more brightness and improved HDR performance compared to the old iPad Pro—standard screen brightness is up to 1,000 nits, compared to 600 nits for the last model. It’s so powerful and so beautiful. But this cutting-edge tech makes it more expensive than ever, putting it out of reach of most and pitting it against flagship laptops, price-wise.

As Nathan Ingraham explains in his review, the iPad Pro lineup has always been about showing off just how good an Apple tablet can be, but this one truly is without compromise. For the rest of us, there's the new iPad Air.

Later today, Google I/O’s big keynote will reveal the company’s latest AI ambitions. We’ll be reporting live.

— Mat Smith

What to expect at Google I/O 2024: Gemini, Android 15, WearOS and more details

Meta’s next hardware project might be AI-infused headphones with cameras

iOS 17.5 has support for web-based app downloads in the EU

iPad Air (2024) review: This is the iPad to get

​​You can get these reports delivered daily direct to your inbox. Subscribe right here!

OpenAI on Monday announced GPT-4o, a brand-new AI model the company says is one step closer to “much more natural human–computer interaction.” The new model accepts any combination of text, audio and images as input and can generate output in all three formats. It also sounds a lot more like digital assistant Samantha from the movie Her. During the presentation, OpenAI showed GPT-4o translating live between English and Italian, helping a researcher solve a linear equation in real time on paper and providing guidance on deep breathing. OpenAI’s demonstrator even used the smartphone’s camera to show how GPT-4o would describe the room they were in. It could infer they were in a studio, filming video or possibly a livestream. OpenAI is making the new model available to everyone, including free ChatGPT users, over the next few weeks.

Continue reading.

Not to be outdone, Google teased its own incoming AI camera features ahead of Google I/O (which kicks off later today — stay tuned for all the news right here). It’s not exactly clear what the feature is, but it bears some similarities to Google Lens, the company’s camera-powered search feature. What’s shown in the teaser video, however, appears to work in real time and respond to voice commands.

Continue reading.

It’s a new direction for Dyson: a floor cleaner without mention of suction, cyclone technology or any of its usual vacuum vocabulary. The Wash G1 is the company’s debut hard-floor cleaner, and it swaps suction for high-speed rollers, water and nylon bristles. It’ll go on sale later this year for $700 — we got to test it at Dyson HQ, ahead of launch.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-our-verdict-on-the-new-ipad-pro-111537244.html?src=rss