Intel has launched a new software application called Thunderbolt Share that will make controlling two or more PCs a more seamless experience. It will allow you to sync files between PCs through its interface, or see multiple computers' folders so you can drag and drop specific documents, images and other file types. That makes collaboration easy if you're transferring particularly hefty files, say raw photos or unedited videos, between you and a colleague. You can also use the app to transfer data from an old PC to a new one, so you don't have to use an external drive to facilitate the move.
When it comes to screen sharing, Intel says the software can retain the resolution of the source PC without compression, as long as the output doesn't exceed Full HD (1080p) at up to 60 frames per second. The mouse cursor and keyboard also remain smooth and responsive between PCs, thanks to Thunderbolt's high bandwidth and low latency.
The company says it's licensing Thunderbolt Share to OEMs as a value-add feature for their upcoming PCs and accessories. You will need Windows computers with Thunderbolt 4 or 5 ports to be able to use it, and they have to be directly connected with a Thunderbolt cable, or connected to the same Thunderbolt dock or monitor. The first devices that support the application will be available in the second half of 2024 and will be coming from various manufacturers, including Lenovo, Acer, MSI, Razer, Kensington and Belkin.
This article originally appeared on Engadget at https://www.engadget.com/intels-thunderbolt-share-makes-it-easier-to-move-large-files-between-pcs-123011505.html?src=rss
PPSSPP, an app that's capable of emulating PSP games, has joined the growing number of retro game emulators on the iOS App Store. The program has been around for almost 12 years, but prior to this, you could only install it on your device through workarounds. "Thanks to Apple for relaxing their policies, allowing retro games console emulators on the store," its developer Henrik Rydgård wrote in his announcement. If you'll recall, Apple updated its developer guidelines in early April, and since then, the company has approved an app that can emulate Game Boy and DS games and another that can play PS1 titles.
Rydgård's app is free to download, but as he told The Verge, there's a $5 gold version coming as well. While the paid version of PPSSPP for Android does have some extra features, it's mostly available so that you can support his work. At the moment, the emulator you can download from the App Store doesn't support the Magic Keyboard for the iPad, because he originally enabled compatibility using an undocumented API. RetroAchievements support is also currently unavailable. Rydgård said both will be re-added in future updates.
The emulator's other versions support the Just-in-time (JIT) compiler, which optimizes code to make it run more smoothly on a particular platform. However, the one on the App Store doesn't and will not ever support it unless Apple changes its rules. Rydgård says iOS devices are "generally fast enough" to run almost all PSP games at full speed, though, so you may not notice much of a difference. Of course, the PPSSPP program only contains the emulator itself — you're responsible for finding games you can play on the app, since Apple will not allow developers to upload games they don't own the rights to.
This article originally appeared on Engadget at https://www.engadget.com/sony-psp-emulator-ppsspp-hits-the-ios-app-store-052506248.html?src=rss
At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates from Google's big event, along with some additional announcements that came after the keynote.
Gemini 1.5 Flash and updates to Gemini 1.5 Pro
Google
Google announced a brand new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, which is the company's smallest model and runs locally on device. Google said that it created Flash because developers wanted a lighter and less expensive model than Gemini Pro for building AI-powered apps and services, while keeping features like the one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini's context window to two million tokens, which means it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
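Google's own figures imply a rough conversion rate between tokens and words, which makes the doubling easy to sanity-check. The sketch below is back-of-envelope arithmetic using the ratio implied by the announcement (about 1.4 tokens per word), not a rate published by Google:

```python
# Back-of-envelope check of Gemini's context window sizes, using the
# tokens-per-word ratio implied by Google's stated numbers (an assumption,
# not an official conversion rate).

TOKENS_PER_WORD = 2_000_000 / 1_400_000   # ~1.43, from "2M tokens ~ 1.4M words"

def words_that_fit(context_tokens: int) -> int:
    """Approximate how many words fit in a context window of the given size."""
    return round(context_tokens / TOKENS_PER_WORD)

print(words_that_fit(1_000_000))  # current window: ~700,000 words
print(words_that_fit(2_000_000))  # doubled window: ~1,400,000 words
```

By the same ratio, the current one-million-token window holds roughly half as much of everything: one hour of video, 11 hours of audio, or around 700,000 words.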
Project Astra
Google
Google showed off Project Astra, an early version of a universal assistant powered by AI that Google’s DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”
In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, and out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, the assistant correctly tells the user where she left her glasses without her ever having mentioned them.
The video ends with a twist — when the user finds and wears the missing glasses, we learn that they have an onboard camera system and are capable of using Project Astra to seamlessly carry on a conversation with the user, perhaps indicating that Google might be working on a competitor to Meta’s Ray-Ban smart glasses.
Ask Google Photos
Google
Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited" when the feature rolls out over the next few months. Google Photos will use GPS information as well as its own judgement of what is “best” to present you with options. You can also ask Google Photos to generate captions to post the photos to social media.
Veo and Imagen 3
Google
Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that can last “beyond a minute”, Google said, and can understand cinematic concepts like a timelapse.
Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its previous version, Imagen 2. The result is the company’s “highest quality” text-to-image model with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.
Big updates to Google Search
Google
Google is making big changes to how Search fundamentally works. Most of the updates announced today like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and using Search to plan meals and vacations won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.
But a big new feature that Google is calling AI Overviews and which the company has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says that it will bring the feature to more than a billion users around the world by the end of the year.
Gemini on Android
Google
Google is integrating Gemini directly into Android. When Android 15 releases later this year, Gemini will be aware of the app, image or video that you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.
Wear OS 5 battery life improvements
Google isn't quite ready to roll out the latest version of its smartwatch OS, but it is promising some major battery life improvements when it comes. The company said that Wear OS 5 will consume 20 percent less power than Wear OS 4 during a workout as long as a marathon. Wear OS 4 already brought battery life improvements to smartwatches that support it, but it could still be a lot better at managing a device's power. Google has also provided developers with a new guide on how to conserve power and battery, so that they can create more efficient apps.
Android 15 anti-theft features
Android 15's developer preview may have been rolling out for months, but there are still features to come. Theft Detection Lock is a new Android 15 feature that will use AI (there it is again) to detect phone thefts and lock things up accordingly. Google says its algorithms can spot motions associated with theft, like grabbing the phone and bolting, biking or driving away. If an Android 15 handset pinpoints one of these situations, the phone’s screen will quickly lock, making it much harder for the phone snatcher to access your data.
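Google hasn't published how its theft-detection model actually works, but the underlying idea — a sudden grab spike followed by sustained fast movement — can be sketched with a simple heuristic. Everything below (the thresholds, window length and sample format) is purely illustrative, not Google's algorithm:

```python
import math

# Illustrative snatch-detection heuristic (NOT Google's actual model):
# flag a "theft-like" event when a sharp acceleration spike is followed
# by sustained fast movement, as in a grab-and-run. Samples are (x, y, z)
# accelerometer readings in g; thresholds are assumed values.

SPIKE_G = 3.0          # assumed magnitude of the initial grab
SUSTAINED_G = 1.5      # assumed magnitude of the getaway motion
SUSTAINED_SAMPLES = 5  # assumed number of follow-up samples to check

def magnitude(sample: tuple[float, float, float]) -> float:
    return math.sqrt(sum(axis * axis for axis in sample))

def looks_like_theft(samples: list[tuple[float, float, float]]) -> bool:
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m >= SPIKE_G:
            tail = mags[i + 1 : i + 1 + SUSTAINED_SAMPLES]
            if len(tail) == SUSTAINED_SAMPLES and all(t >= SUSTAINED_G for t in tail):
                return True  # would trigger the screen lock
    return False

# A grab spike followed by a fast getaway trips the heuristic;
# a phone sitting still (just gravity) does not.
grab_and_run = [(0.0, 0.0, 1.0), (3.5, 1.0, 0.5)] + [(1.2, 1.0, 0.5)] * 5
print(looks_like_theft(grab_and_run))            # True
print(looks_like_theft([(0.0, 0.0, 1.0)] * 10))  # False
```

A production system would run a trained classifier on richer signals, but the spike-then-flee shape is the intuition Google describes.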
Catch up on all the news from Google I/O 2024 right here!
Update May 15, 2:45PM ET: This story was updated after being published to include details on new Android 15 and Wear OS 5 announcements made following the I/O 2024 keynote.
This article originally appeared on Engadget at https://www.engadget.com/google-io-2024-everything-revealed-including-gemini-ai-android-15-and-more-210414423.html?src=rss
Google is opening up its Home platform to third-party developers through new APIs. As such, any app will eventually be able to tap into the more than 600 million devices that are connected to Home, even if they're not necessarily smart home-oriented apps. Google suggests, for instance, that a food delivery app might be able to switch on the outdoor lights before the courier shows up with dinner.
The APIs build on the foundation of Matter and Google says it created them with privacy and security at the forefront. For one thing, developers who tap into the APIs will need to pass certification before rolling out their app. In addition, apps won't be able to access someone's smart home devices without a user's explicit consent.
Developers are already starting to integrate the APIs, which include one focused on automation. Eve, for instance, will let you set up your smart blinds to lower automatically when the temperature dips at night. A workout app might switch on a fan for you before you start working up a sweat.
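Google hasn't spelled out the automation API's exact shape in this announcement, but the trigger-and-action pattern behind the Eve blinds example can be sketched with plain data structures. All the names below are hypothetical illustrations, not Google's Home APIs:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a trigger/action automation rule, in the spirit of
# the Eve smart-blinds example. These names are illustrative only — they are
# not the actual Google Home API surface.

@dataclass
class AutomationRule:
    name: str
    trigger: Callable[[dict], bool]  # evaluated against a device-state event
    action: Callable[[], str]        # returns a command identifier

def run_rules(rules: list[AutomationRule], event: dict) -> list[str]:
    """Evaluate every rule against an event; return the actions that fired."""
    return [rule.action() for rule in rules if rule.trigger(event)]

# "Lower the smart blinds when the outdoor temperature dips at night."
blinds_rule = AutomationRule(
    name="night chill -> lower blinds",
    trigger=lambda e: e["temperature_c"] < 15 and e["is_night"],
    action=lambda: "blinds.lower",
)

print(run_rules([blinds_rule], {"temperature_c": 12, "is_night": True}))
# ['blinds.lower']
```

The real APIs would route the action through Matter to the device, with the certification and consent checks described above gating access.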
Google is taking things a little slow with the APIs, as there's a waitlist and it's working with select partners. It plans to open up access to the APIs on a rolling basis, and the first apps using them will hit the Play Store and App Store this fall.
Meanwhile, Google is turning TVs into smart home hubs. Starting later this year, you'll be able to control smart home devices via Chromecast with Google TV and certain models with Google TV running Android 14 or higher, as well as some LG TVs.
This article originally appeared on Engadget at https://www.engadget.com/google-lets-third-party-developers-into-home-through-new-apis-180420068.html?src=rss
Google just announced an update coming to Android for Cars that should make paying attention to the road just a tiny bit harder. The automobile-based OS is getting new apps, screen casting and more, all revealed at Google I/O 2024.
First up, select car models are getting a suite of new entertainment apps, like Max and Peacock. The apps are coming to car models with Google built-in that support video, including Polestar, Volvo and Renault. More entertainment options are never a bad thing.
To that end, Angry Birds is coming to cars with Google built-in, for those who want another game to fool around with. The once-iconic bird-flinging simulator is likely the best known gaming IP on the platform, as Android Auto’s other games include stuff like Pin the UFO and Zoo Boom. Both Angry Birds and the aforementioned entertainment options are only available when parked.
Cars with Android Automotive OS are getting Google Cast as part of a forthcoming update, which will let users stream content from phones and tablets. Rivian models will be the first to get this particular feature, with more manufacturers to come.
Google’s also rolling out new developer tools to make it easier for folks to create new apps and experiences for Android Auto. There’s even a new program that should make it much easier to convert pre-existing mobile apps into car-ready experiences.
Android Automotive OS is becoming the de facto standard when it comes to car-based operating systems. Google also used the event to announce that there are now over 200 million cars on the road compatible with Android Auto. Recent updates to the platform allow users to instantly check on EV battery levels and take Zoom calls while on the road.
Catch up on all the news from Google I/O 2024 right here!
This article originally appeared on Engadget at https://www.engadget.com/google-announced-an-update-for-android-auto-with-new-apps-and-casting-support-170831358.html?src=rss
Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but also have broader applications as well. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice or have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more.
Built-in eye-tracking for iPhones and iPads
The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it.
That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem like in Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware.
Vocal shortcuts for easier hands-free control
Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once set up, you won't have to first ask Siri to be ready.
Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.
To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.
Music haptics in Apple Music and other apps
For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too.
Help in cars — motion sickness and CarPlay
Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.
Apple
For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion it detects. If the car moves forward or accelerates, the dots will sway backwards as if in reaction to the increase in speed in that direction.
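Apple hasn't detailed the mapping between motion and dot movement, but the described behavior — dots swaying against the direction of acceleration, presumably up to some limit — can be sketched as a clamped linear response. The gain and maximum offset below are assumptions for illustration:

```python
# Illustrative mapping from vehicle acceleration to onscreen dot sway
# (an assumption about how such cues could work, not Apple's implementation).
# Dots sway *against* the acceleration: speeding up pushes them backwards.

MAX_OFFSET_PX = 20.0   # assumed maximum sway, in pixels
GAIN_PX_PER_MS2 = 8.0  # assumed pixels of sway per m/s^2 of acceleration

def dot_offset(accel_ms2: float) -> float:
    """Return the dot offset in pixels for a forward acceleration reading."""
    offset = -accel_ms2 * GAIN_PX_PER_MS2   # negative: sway opposite the motion
    return max(-MAX_OFFSET_PX, min(MAX_OFFSET_PX, offset))

print(dot_offset(1.5))   # gentle acceleration: dots sway back 12 px -> -12.0
print(dot_offset(5.0))   # hard acceleration clamps at the max sway -> -20.0
print(dot_offset(-5.0))  # hard braking sways the dots forward     ->  20.0
```

Giving the eyes a visual signal that agrees with what the inner ear feels is the whole point, so the sign convention (dots moving opposite the acceleration, like loose objects in the car) is what matters; the exact gain is a tuning choice.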
Other Apple Accessibility updates
There are plenty more features coming to the company's suite of products, including Live Captions in visionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, many of today's tools will likely be officially released with the next iOS.
This article originally appeared on Engadget at https://www.engadget.com/apple-brings-eye-tracking-to-recent-iphones-and-ipads-140012990.html?src=rss