Google’s accessibility app Lookout can use your phone’s camera to find and recognize objects

Google has updated some of its accessibility apps to add capabilities that will make them easier to use for people who need them. It has rolled out a new version of the Lookout app, which can read text and even lengthy documents out loud for people with low vision or blindness. The app can also read food labels, recognize currency and tell users what it sees through the camera or in an image. Its latest version comes with a new "Find" mode that allows users to choose from seven item categories, including seating, tables, vehicles, utensils and bathrooms.

When users choose a category, the app will be able to recognize associated objects as the user moves their camera around a room. It will then tell them the direction or distance to the object, making it easier for users to interact with their surroundings. Google has also launched an in-app capture button, so users can take photos and quickly get AI-generated descriptions. 

A screenshot showing object categories in Google Lookout, such as Seating & Tables, Doors & Windows, Cups, etc.
Google

The company has updated its Look to Speak app, as well. Look to Speak enables users to communicate with other people by using eye gestures to select from a list of phrases, which the app then speaks out loud. Now, Google has added a text-free mode that gives them the option to trigger speech by choosing from a photo book containing various emojis, symbols and photos. Even better, they can personalize what each symbol or image means for them. 

Google has also expanded its screen reader capabilities for Lens in Maps, so that it can tell the user the names and categories of the places it sees, such as ATMs and restaurants. It can also tell them how far away a particular location is. In addition, it's rolling out improvements for detailed voice guidance, which provides audio prompts that tell the user where they're supposed to go. 

Finally, Google has made Maps' wheelchair information accessible on desktop, four years after it launched on Android and iOS. The Accessible Places feature allows users to see if the place they're visiting can accommodate their needs — businesses and public venues with an accessible entrance, for example, will show a wheelchair icon. They can also use the feature to see if a location has accessible washrooms, seating and parking. The company says Maps has accessibility information for over 50 million places at the moment. Those who prefer looking up wheelchair information on Android and iOS will now also be able to easily filter reviews focusing on wheelchair access. 

Google made all these announcements at this year's I/O developer conference, where it also revealed that it open-sourced more code for the Project Gameface hands-free "mouse," allowing Android developers to use it for their apps. The tool allows users to control the cursor with their head movements and facial gestures, so that they can more easily use their computers and phones. 

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-accessibility-app-lookout-can-use-your-phones-camera-to-find-and-recognize-objects-160007994.html?src=rss

Intel’s Thunderbolt Share makes it easier to move large files between PCs

Intel has launched a new software application called Thunderbolt Share that will make controlling two or more PCs a more seamless experience. It will allow you to sync files between PCs through its interface, or see multiple computers' folders so you can drag and drop specific documents, images and other file types. That makes collaboration easy if you're transferring particularly hefty files, say raw photos or unedited videos, between you and a colleague. You can also use the app to transfer data from an old PC to a new one, so you don't have to use an external drive to facilitate the move. 

When it comes to screen sharing, Intel says the software can retain the resolution of the source PC without compression, so long as the source doesn't exceed Full HD at up to 60 frames per second. The mouse cursor and keyboard also remain smooth and responsive between PCs, thanks to the Thunderbolt technology's high bandwidth and low latency. 

The company says it's licensing Thunderbolt Share to OEMs as a value-add feature for their upcoming PCs and accessories. You will need Windows computers with Thunderbolt 4 or 5 ports to be able to use it, and they have to be directly connected with a Thunderbolt cable, or connected to the same Thunderbolt dock or monitor. The first devices that support the application will be available in the second half of 2024 and will be coming from various manufacturers, including Lenovo, Acer, MSI, Razer, Kensington and Belkin.
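For a rough sense of why Thunderbolt suits hefty transfers, here is a back-of-envelope sketch. The 40 Gbps and 80 Gbps figures are the nominal Thunderbolt 4 and 5 link rates, which come from the Thunderbolt spec rather than this article, and real-world throughput will be lower due to protocol overhead.

```python
# Back-of-envelope transfer time for a hefty file over a direct Thunderbolt
# link, using nominal link rates (not the sustained real-world throughput).
FILE_GB = 100  # e.g. a folder of raw photos or unedited video

def seconds_to_move(file_gigabytes, link_gbps):
    # Convert gigabytes to gigabits, then divide by the link rate.
    return file_gigabytes * 8 / link_gbps

print(seconds_to_move(FILE_GB, 40))  # Thunderbolt 4: 20.0 s, theoretical best case
print(seconds_to_move(FILE_GB, 80))  # Thunderbolt 5: 10.0 s, theoretical best case
```

Even at half the nominal rate, that is still far faster than shuttling an external drive between machines.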

This article originally appeared on Engadget at https://www.engadget.com/intels-thunderbolt-share-makes-it-easier-to-move-large-files-between-pcs-123011505.html?src=rss

Sony PSP emulator PPSSPP hits the iOS App Store

PPSSPP, an app that's capable of emulating PSP games, has joined the growing number of retro game emulators on the iOS App Store. The program has been around for almost 12 years, but prior to this, you could only install it on your device through workarounds. "Thanks to Apple for relaxing their policies, allowing retro games console emulators on the store," its developer Henrik Rydgård wrote in his announcement. If you'll recall, Apple updated its developer guidelines in early April, and since then, the company has approved an app that can emulate Game Boy and DS games and another that can play PS1 titles.

Rydgård's app is free to download, but as he told The Verge, there's a $5 gold version coming, as well. While the paid version of PPSSPP for Android does have some extra features, it's mostly available so that you can support his work. At the moment, the emulator you can download from the App Store doesn't support the Magic Keyboard for the iPad, because he originally enabled compatibility using an undocumented API. RetroAchievements support is also currently unavailable. Rydgård said they'll be re-added in future updates.

The emulator's other versions support the Just-in-time (JIT) compiler, which optimizes code to make it run more smoothly on a particular platform. However, the one on the App Store doesn't and will not ever support it unless Apple changes its rules. Rydgård says iOS devices are "generally fast enough" to run almost all PSP games at full speed, though, so you may not notice much of a difference. Of course, the PPSSPP program only contains the emulator itself — you're responsible for finding games you can play on the app, since Apple will not allow developers to upload games they don't own the rights to. 

This article originally appeared on Engadget at https://www.engadget.com/sony-psp-emulator-ppsspp-hits-the-ios-app-store-052506248.html?src=rss

Google I/O 2024: Everything revealed including Gemini AI, Android 15 and more

At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates from Google's big event, along with some additional announcements that came after the keynote.

Gemini Pro
Google

Google announced a brand new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, which is the company’s smallest model and runs locally on device. Google said that it created Flash because developers wanted a lighter and less expensive model than Gemini Pro to build AI-powered apps and services, while keeping some of the things, like a long context window of one million tokens, that differentiate Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, which means that it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
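The equivalences Google quoted imply some rough per-token rates. The quick calculation below is purely illustrative, derived from the article's own numbers rather than any official Gemini tokenization figures.

```python
# Back-of-envelope token rates implied by a 2,000,000-token context window
# and the stated equivalences (2 hours video, 22 hours audio, ~1.4M words).
CONTEXT_TOKENS = 2_000_000

video_seconds = 2 * 3600    # 2 hours of video
audio_seconds = 22 * 3600   # 22 hours of audio
words = 1_400_000           # ~1.4 million words

print(round(CONTEXT_TOKENS / video_seconds))   # ~278 tokens per second of video
print(round(CONTEXT_TOKENS / audio_seconds))   # ~25 tokens per second of audio
print(round(CONTEXT_TOKENS / words, 2))        # ~1.43 tokens per word
```

The roughly 1.4 tokens per word figure is in line with how large language models typically tokenize English text.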

Project Astra
Google

Google showed off Project Astra, an early version of a universal assistant powered by AI that Google’s DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”

In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, and out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, the assistant correctly tells the user where she left her glasses, without her ever having brought them up.

The video ends with a twist — when the user finds and wears the missing glasses, we learn that they have an onboard camera system and are capable of using Project Astra to seamlessly carry on a conversation with the user, perhaps indicating that Google might be working on a competitor to Meta’s Ray-Ban smart glasses.

Ask Photos
Google

Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited" when the feature rolls out over the next few months. Google Photos will use GPS information as well as its own judgment of what is “best” to present you with options. You can also ask Google Photos to generate captions to post the photos to social media.

Veo
Google

Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that can last “beyond a minute,” Google said, and can understand cinematic concepts like a timelapse.

Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its previous version, Imagen 2. The result is the company’s “highest quality” text-to-image model with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.

Google Search
Google

Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and using Search to plan meals and vacations, won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.

But a big new feature, which Google is calling AI Overviews and has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says that it will bring the feature to more than a billion users around the world by the end of the year.

Gemini on Android
Google

Google is integrating Gemini directly into Android. When Android 15 releases later this year, Gemini will be aware of the app, image or video that you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.

Google isn't quite ready to roll out the latest version of its smartwatch OS, but it is promising some major battery life improvements when it comes. The company said that Wear OS 5 will consume 20 percent less power than Wear OS 4 if a user runs a marathon. Wear OS 4 already brought battery life improvements to smartwatches that support it, but it could still be a lot better at managing a device's power. Google also provided developers with a new guide on how to conserve power and battery, so that they can create more efficient apps.

Android 15's developer preview may have been rolling for months, but there are still features to come. Theft Detection Lock is a new Android 15 feature that will use AI (there it is again) to predict phone thefts and lock things up accordingly. Google says its algorithms can detect motions associated with theft, like those associated with grabbing the phone and bolting, biking or driving away. If an Android 15 handset pinpoints one of these situations, the phone’s screen will quickly lock, making it much harder for the phone snatcher to access your data.
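Google hasn't detailed the model behind Theft Detection Lock, but the described behavior, a sharp grab followed by sustained motion away, can be sketched as a toy heuristic. The thresholds and the `snatch_detected` helper below are hypothetical stand-ins for illustration, not Google's actual algorithm.

```python
# Toy snatch-detection heuristic (hypothetical, NOT Google's algorithm):
# flag a theft-like event when a sharp acceleration spike is followed by
# sustained high speed, e.g. someone grabbing the phone and biking away.

SPIKE_G = 3.0        # hypothetical acceleration-spike threshold, in g
ESCAPE_MPS = 3.0     # hypothetical sustained-speed threshold, in m/s
SUSTAIN_SAMPLES = 3  # speed must stay high for this many samples after the spike

def snatch_detected(accel_g, speed_mps):
    """accel_g and speed_mps are aligned per-sample sensor readings."""
    for i, a in enumerate(accel_g):
        if a < SPIKE_G:
            continue
        window = speed_mps[i + 1 : i + 1 + SUSTAIN_SAMPLES]
        if len(window) == SUSTAIN_SAMPLES and all(v >= ESCAPE_MPS for v in window):
            return True  # a real implementation would now lock the screen
    return False

# A grab spike followed by cycling-speed movement trips the lock...
print(snatch_detected([0.2, 4.1, 0.5, 0.4, 0.3], [0.1, 0.2, 4.0, 4.5, 5.0]))  # True
# ...while picking the phone up off a table does not.
print(snatch_detected([0.2, 1.1, 0.3], [0.0, 0.1, 0.1]))  # False
```

Google's production system presumably uses a trained on-device model over richer sensor data rather than fixed thresholds like these.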

There were a bunch of other updates too. Google said it would add digital watermarks to AI-generated video and text, make Gemini accessible in the side panel in Gmail and Docs, power a virtual AI teammate in Workspace, listen in on phone calls and detect if you’re being scammed in real time, and a lot more.


Catch up on all the news from Google I/O 2024 right here!

Update May 15, 2:45PM ET: This story was updated after being published to include details on new Android 15 and Wear OS 5 announcements made following the I/O 2024 keynote.

This article originally appeared on Engadget at https://www.engadget.com/google-io-2024-everything-revealed-including-gemini-ai-android-15-and-more-210414423.html?src=rss

Google lets third-party developers into Home through new APIs

Google is opening up its Home platform to third-party developers through new APIs. As such, any app will eventually be able to tap into the more than 600 million devices that are connected to Home, even if they're not necessarily smart home-oriented apps. Google suggests, for instance, that a food delivery app might be able to switch on the outdoor lights before the courier shows up with dinner.

The APIs build on the foundation of Matter and Google says it created them with privacy and security at the forefront. For one thing, developers who tap into the APIs will need to pass certification before rolling out their app. In addition, apps won't be able to access someone's smart home devices without a user's explicit consent.

Developers are already starting to integrate the APIs, which include one focused on automation. Eve, for instance, will let you set up your smart blinds to lower automatically when the temperature dips at night. A workout app might switch on a fan for you before you start working up a sweat.
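The starter-plus-action shape of such automations can be mocked up in plain code. The `Automation` class and the state fields below are hypothetical illustrations and do not reflect the real Home APIs surface.

```python
# Hypothetical mock of a condition/action automation, modeled on the smart
# blinds example above. None of these names come from the real Home APIs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Automation:
    name: str
    condition: Callable[[dict], bool]  # reads the current home state
    action: Callable[[dict], None]     # mutates the home state

    def evaluate(self, state):
        if self.condition(state):
            self.action(state)

# "Lower the smart blinds when the temperature dips at night."
night_blinds = Automation(
    name="lower-blinds-when-cold",
    condition=lambda s: s["is_night"] and s["temp_c"] < 16,
    action=lambda s: s.update(blinds="closed"),
)

state = {"is_night": True, "temp_c": 14, "blinds": "open"}
night_blinds.evaluate(state)
print(state["blinds"])  # closed
```

The real APIs add the certification and per-user consent steps described above on top of this basic trigger-and-act pattern.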

Google is taking things a little slow with the APIs, as there's a waitlist and it's working with select partners. It plans to open up access to the APIs on a rolling basis, and the first apps using them will hit the Play Store and App Store this fall.

Meanwhile, Google is turning TVs into smart home hubs. Starting later this year, you'll be able to control smart home devices via Chromecast with Google TV and certain models with Google TV running Android 14 or higher, as well as some LG TVs.

This article originally appeared on Engadget at https://www.engadget.com/google-lets-third-party-developers-into-home-through-new-apis-180420068.html?src=rss

Google announced an update for Android Auto with new apps and casting support

Google just announced an update coming to Android for Cars that should make paying attention to the road just a tiny bit harder. The automobile-based OS is getting new apps, screen casting and more, which were revealed at Google I/O 2024.

First up, select car models are getting a suite of new entertainment apps, like Max and Peacock. The apps are coming to car models with Google built-in that support video, including Polestar, Volvo and Renault. More entertainment options are never a bad thing.

To that end, Angry Birds is coming to cars with Google built-in, for those who want another game to fool around with. The once-iconic bird-flinging simulator is likely the best known gaming IP on the platform, as Android Auto’s other games include stuff like Pin the UFO and Zoo Boom. Both Angry Birds and the aforementioned entertainment options are only available when parked.

Cars with Android Automotive OS are getting Google Cast as part of a forthcoming update, which will let users stream content from phones and tablets. Rivian models will be the first to get this particular feature, with more manufacturers to come.

Google’s also rolling out new developer tools to make it easier for folks to create new apps and experiences for Android Auto. There’s even a new program that should make it much easier to convert pre-existing mobile apps into car-ready experiences.

Android Automotive OS is becoming the de facto standard when it comes to car-based operating systems. Google also used the event to announce that there are now over 200 million cars on the road compatible with Android Auto. Recent updates to the platform allow users to instantly check on EV battery levels and take Zoom calls while on the road.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-announced-an-update-for-android-auto-with-new-apps-and-casting-support-170831358.html?src=rss