Apple Intelligence: What devices and features will actually be supported?

Apple Intelligence is coming, but not to every iPhone out there. In fact, you'll need a device with an A17 Pro processor or M-series chip to use many of the features unveiled during the Apple Intelligence portion of WWDC 2024. That means only iPhone 15 Pro and Pro Max owners (and those with an M-series iPad or Mac) will get the iOS 18-related Apple Intelligence (AI?) updates like Genmoji, Image Playground, the redesigned Siri and Writing Tools. Then there are things like Math Notes and Smart Script on iPadOS 18, plus the new Messages features in iOS 18, that will arrive for anyone who can upgrade to the latest platforms. It's confusing, and the best way to anticipate what you're getting is to know which processor is in your iPhone, iPad or Mac.

It's not evident exactly why older devices with an A16 chip (like the iPhone 14 Pro) won't support Apple Intelligence, given that its neural engine seems more than capable compared to the M1's. A closer look at the spec sheets of those two processors shows that the main differences appear to be in memory and GPU prowess. Specifically, the A16 Bionic supports a maximum of 6GB of RAM, while the M1 starts at 8GB and goes up to 16GB. In fact, all the supported devices have at least 8GB of RAM, which could hint at why your iPhone 14 Pro won't be able to handle making Genmoji.

Though it might not seem quite fair that owners of a relatively recent iPhone won't get to use Apple Intelligence features, you'll still be getting a healthy amount of updates via iOS 18. Here's a quick breakdown of what is coming via iOS 18, and what's only coming if your iPhone supports Apple Intelligence.

Basically everything described during the iOS portion of yesterday's WWDC 2024 keynote is coming to all iPhones (that can update to iOS 18). That includes the customizable home screen, Control Center, dedicated Passwords app, redesigned Photos app, new Tapback emoji reactions, text effects, scheduled sending and more. Messages via Satellite is only coming to iPhone 14 or newer, and you'll be able to send text messages, emojis and Tapbacks, but not images or videos. 

You'll also be tied to the same satellite service plan you got when you purchased your iPhone 14. If you bought your iPhone 14 in January 2024, for example, you received a free two-year subscription covering Emergency SOS via Satellite and the other satellite communication features that now include texting. That means that to continue texting people via satellite after January 2026, you'll need to start paying for a plan.

There are a whole host of updates coming with iOS 18 that Apple didn't quite cover in its keynote either, and I'll be putting up a separate guide about that in a bit. But suffice to say that apps like Maps, Safari, Calendar and Journal are getting new functions that, together with the other changes mentioned so far, add up to a meaty OS upgrade.

As for which features require Apple Intelligence: in short, all of them. If you have an iPhone 15 Pro or an iPad (or Mac) with an M-series chip, you'll get the redesigned Siri, Genmoji and Image Playground, as well as Writing Tools baked into the system. That means features like proofreading, summarizing or adjusting your tone in apps like Mail, Notes and Keynote are limited to the AI-supported devices. If you don't have one of those, you'll get none of this.

The redesigned Siri, which is only coming through Apple Intelligence, will be able to understand what's on your screen to contextually answer your queries. If you've been texting with your friend about which baseball player is the best, you can ask Siri (by long-pressing the power button or just saying Hey Siri) "How many home runs has he hit?" The assistant will know who "he" is in this context, and understand you're referring to the athlete, not the friend you're chatting with.

Apple Intelligence is also what brings the ability to type to Siri — and you can invoke this keyboard to talk to the assistant by double tapping the bottom of the screen. 

This also means that the new glowing-edge animation that appears when Siri is triggered is limited to Apple Intelligence-supported devices. You'll still be looking at that little orb at the bottom of your screen when you talk to the assistant on an iPhone 14 Pro or older.

There are loads more features coming via Apple Intelligence, which appears to be set for release later this year. 

Catch up here for all the news out of Apple's WWDC 2024.

This article originally appeared on Engadget at https://www.engadget.com/apple-intelligence-what-devices-and-features-will-actually-be-supported-185850732.html?src=rss

Fitbit Ace LTE hands-on: Wearable gaming to make exercise fun (but not too fun)

Google is crossing genres with its latest wearable for kids, combining a gaming system and an activity tracker in the Fitbit Ace LTE. The company is pitching this as a “first-of-its-kind connected smartwatch that transforms exercise into play and safely helps kids lead more active, independent lives.” Basically, think of it as a Nintendo Switch pared down into an activity tracker for children aged 7 and up, with a few safety and connectivity features built in.

The main idea here is to get kids up and moving, in exchange for progress on the Ace LTE’s onboard games. But there are also basic tools that let parents (and trusted contacts) stay in touch with the wearer. Through the new Fitbit Ace app (which adults can install on iOS or Android), guardians can set play time, monitor activity progress, and make calls or send messages. On the watch itself, kids can also use the onscreen keyboard or microphone to type or dictate texts, or choose an emoji.

Since the Fitbit Ace LTE uses a simplified version of the hardware on the Pixel Watch 2, it’s pretty responsive. One major difference, though, is that the kid-friendly tracker uses Gorilla Glass 3 on its cover, in addition to the 5 ATM of water resistance that both models share. Google does include a protective case with each Ace LTE, and it doesn’t add much weight.

There are also other obvious differences because the Pixel Watch 2 has a circular face while the Fitbit Ace LTE has a “squircle” (square with rounded corners) OLED with two large buttons on the right side. The latter’s band is also a lot narrower, and it comes “with technology built in,” according to Google’s vice president of product management Anil Sabharwal. That's just a fancy way to say that the Ace LTE recognizes when you swap in a new strap and each accessory comes with unique content.

Image: The Fitbit Ace LTE on a wrist held in mid-air, with a cartoon room on the screen. (Cherlynn Low for Engadget)

The company is calling these straps “Cartridges” — another reminder of how the Fitbit Ace LTE is a gaming console wannabe. When you snap a new one on, you’ll see an animation of all the bonus material you just got. That includes new backgrounds and items for your Tamagotchi-esque pet, called an “eejie.” Separate bands also add unique cartoony strips, called Noodles, that make their way around the edge of the watch’s display each day, charting the wearer’s progress toward daily goals, similar to Apple’s activity rings.

I’m dancing around the main part of the Fitbit Ace LTE’s proposition, because I wanted to get the hardware out of the way. The most interesting concept here is the idea of a wearable gaming system. The Ace LTE’s home screen looks fairly typical. It shows you the time and the Noodle activity ring around it, as well as some small font at the very bottom showing the number of points collected.

To the left of this page is what Sabharwal called a “playlist” — a collection of daily quests. Like on other iOS or Android games, this is a bunch of targets to hit within a dictated time frame to ensure you’re engaged, and achieving these goals leads to rewards.

Most of these rewards are things you can use to jazz up your digital pet’s home over on the right of the home screen. Google calls these things “eejies” — that name doesn’t actually mean anything. Some engineers in a room looked at the letters “I” “J” and “I” and sounded them out and thought sure, why not. (No, those letters don't actually stand for anything, either.)

Image: The Fitbit Ace LTE on a wrist held in mid-air, with a digital character inside a pink bedroom on the screen. (Cherlynn Low for Engadget)

According to Google, “Eejies are customizable creatures that feed off daily activity — the more kids reach their movement goals, the more healthy and happy their eejie gets.” Kids earn arcade tickets by completing daily activities (or by attaching a new watch strap), and can exchange those tickets for new outfits or furniture for their eejies.

Even though they’re supposed to be “customizable creatures,” the eejies are anthropomorphic and look like… well, kids. Depending on how you style them, they sort of look like sullen teenagers, even. Don’t expect a cute Pikachu or Digimon to play with; these eejies are two-legged beings with heads, arms and necks. I’d prefer something cuter, but perhaps the target demographic likes feeding and playing with a strange avatar of themselves.

When multiple Ace LTE wearers meet up, their eejies can visit each other and leave emoji messages. Of course, how fun that is depends on how many of your (kid’s) friends have Ace LTEs.

Even without that social component though, the Ace LTE can be quite a lot of fun. It is the home of Fitbit Arcade, a new library of games built specifically for this wearable. So far, I’ve only seen about six games in the collection, including a room escape game, a fishing simulator and a Mario Kart-like racer.

The first game I tried at Google’s briefing was Smoky Lake, the fishing game. After a quick intro, I tapped on a shadow of a fish in the water, and flung my arm out. I waited till the Ace LTE buzzed, then pulled my wrist in. I was told that I had caught a puffer fish, and swiped through to see more information about past catches. I earned five arcade tickets with this catch. 

I gleefully tried again and caught what I was told was the “biggest pineapple gillfish” acquired that day. Other hauls the Ace LTE I was wearing had acquired included a “ramen squid” and a “blob fish,” and tapping an icon on the upper left brought up my library of things that had been caught.

Image: The Fitbit Ace LTE on a wrist held in mid-air. (Cherlynn Low for Engadget)

I then played a round of Pollo 13, a racing game where I played as a chicken in a bathtub competing in an intergalactic space match against my arch nemesis. There, I tilted my wrist in all directions to steer, keeping my vehicle on track or swerving to collect items that sped me up. Just as I expected based on my prior Mario Kart experience (and also my general lack of skill at driving in real life), I sucked at this game and came in last. Sabharwal gently informed me that this was the poorest result they had seen all day.

I didn’t get to check out the other installed titles, like Galaxy Rangers, Jelly Jam or Sproutlings, but I was most intrigued by the room escape game, since that’s my favorite genre.

Google doesn’t want to encourage obsession or addiction to the Ace LTE’s games, though. “We don’t want kids to overexercise. We don’t want kids to feel like they have a streak and if they miss a day, ‘Oh my God, the world is over!’” Sabharwal said.

To that end, progress in each game is built around encouraging the wearer to meet movement goals to advance to new stages. Every two to three minutes, you’ll be prompted to get up and move. In Smoky Lake, for instance, you’ll be told that you’ve run out of bait and have to go to the bait shop, which you can reach by walking a few hundred steps or doing any other activity that meets similar requirements. Google is calling this “interval-based gaming,” playing on the idea of “interval-based training.” After about five to 10 sessions, the company thinks each wearer will hit the 60 to 90 minutes of daily activity recommended by the World Health Organization.
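
To make that gating concrete, here’s a minimal sketch of an “interval-based gaming” loop: play for a couple of minutes, then log steps until a movement goal unlocks the next interval. This is purely illustrative; the class name, thresholds and step counts are assumptions, not Google’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class IntervalGate:
    """Toy model of 'interval-based gaming': play for a while, then move to continue."""
    play_interval_s: int = 150   # ~2-3 minutes of play before a movement break (assumed)
    steps_required: int = 250    # illustrative step goal to unlock the next interval
    steps_logged: int = 0
    play_elapsed_s: int = 0

    def tick_play(self, seconds):
        """Advance play time; return True if the player now has to move to continue."""
        self.play_elapsed_s += seconds
        return self.play_elapsed_s >= self.play_interval_s

    def log_steps(self, steps):
        """Record activity; return True once the movement goal is met and play can resume."""
        self.steps_logged += steps
        if self.steps_logged >= self.steps_required:
            self.steps_logged = 0
            self.play_elapsed_s = 0
            return True
        return False

gate = IntervalGate()
if gate.tick_play(seconds=180):
    print("Out of bait! Walk to the bait shop to keep fishing.")
print("Back to the game!" if gate.log_steps(300) else "Keep walking...")
```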

Image: The Fitbit Ace LTE on a wrist held in mid-air, with two game titles on a carousel in view. (Cherlynn Low for Engadget)

The idea of activity as currency for games isn’t exactly novel, but Google’s being quite careful in its approach. Not only is it trying to avoid addiction, which for the target age group is a real concern, but the company also says it built the Ace LTE “responsibly from the ground up” by working with “experts in child psychology, public health, privacy and digital wellbeing.” It added that the device was “built with privacy in mind, front and center,” and that only parents will ever be shown a child’s location or activity data in their apps. Location data is deleted after 24 hours, while activity data is deleted after a maximum of 35 days. Google also said “there are no third-party apps or ads on the device.”

While activity is the main goal at launch, there is potential for the Ace LTE to track sleep and other aspects of health to count towards goals. Parts of the Ace LTE interface appeared similar to other Fitbit trackers, with movement reminders and a Today-esque dashboard. But from my brief hands-on, it was hard to fully explore and compare.

Though I like the idea of the Ace LTE and was definitely entertained by some of the games, I still have some reservations. I was concerned that the device I tried on felt warm, although Sabharwal explained it was likely because the demo units had been charging on and off all day. I also didn’t care for the thick bezels around the screen, though that didn’t really adversely impact my experience. What did seem more of a problem was the occasional lag I encountered waiting for games to load or to go to the home screen. I’m not sure if that was a product of early software or if the final retail units will have similar delays, and will likely need to run a full review to find out.

The Fitbit Ace LTE is available for pre-order today for $230 on the Google Store or Amazon, and it arrives on June 5. You’ll need to pay an extra $10 a month for the Ace Pass plan, which includes LTE service (on Google’s Fi) and access to Fitbit Arcade and regular content updates. If you spring for an annual subscription, you’ll get a collectable Ace Band (six are available at launch), and from now till August 31 the yearly fee is discounted by 50 percent, which works out to about $5 a month.

Update, May 29, 3:15PM ET: This story has been edited to clarify that the Fitbit Ace LTE's hardware is a simplified version of the Pixel Watch 2. It is not capable of sleep or stress tracking.

This article originally appeared on Engadget at https://www.engadget.com/fitbit-ace-lte-hands-on-wearable-gaming-to-make-exercise-fun-but-not-too-fun-140059054.html?src=rss

HP Omnibook X hands-on: Vintage branding in the new era of AI

All over the PC industry today, we’re learning of new systems and products launching in conjunction with Microsoft’s Copilot+ push. But HP isn’t just showing off new Snapdragon-powered laptops as part of the program. The company up and decided to nuke its entire product naming scheme, unifying most of its sub-brands.

While HP was never the worst offender in the world of awful product names — I’m looking at you, Sony, LG and Lenovo — being able to quickly identify the make and model of a device is crucial when you’re deciding what to buy. HP’s vice president of consumer PC products Pierre-Antoine Robineau admits as much, saying “to be fair, we don’t make things easy with our portfolio.” He referred to the company’s brands like Spectre, Pavilion and Envy, saying that if you ask ChatGPT what they are, the answers you’d get might refer to a ghost or a gazebo.

To simplify things, HP is getting rid of all those names on its consumer product portfolio and unifying everything under the Omni label. It’ll use Omnibook to refer to laptops, Omnidesk for desktops and Omnistudio for all-in-ones. For each category, it’ll add a label saying “3,” “5,” “7,” “X” or “Ultra” to indicate how premium or high-end the model is. That means the Omnibook Ultra is the highest-tier laptop, while the Omnidesk 3 might be the most basic or entry-level desktop system. That sort of numbering echoes Sony’s recent streamlined nomenclature of its home theater and personal audio offerings.

If Omnibook sounds familiar, that’s because HP actually had a product with that name, and it was available from 1993 to about 2002. The Omni moniker makes sense now in the 2020s, HP says, because these are devices that can do just about anything and act as multiple things at once. (As long as they don’t claim to be omniscient, omnipresent or omnipotent, I’ll let this slide.)

The company is also cleaning things up on the commercial side of its business, where the word “Elitebook” has traditionally been the most recognized label. It’s keeping that name, adopting the same Elitebook, Elitedesk and Elitestudio distinctions across categories and using the same “Ultra” and “X” labels to denote each model’s tier. However, instead of “3,” “5” or “7” here, HP is using even numbers (2, 4, 6 or 8), in part because it has used even series numbers like “1040” and “1060” in the Elitebook line before. Keeping similar numbers around can help IT managers with the shift in names, HP said.

The first new laptops under this new naming system are the Omnibook X and the Elitebook Ultra. They share very similar specs, with the Elitebook offering software that makes it easier for IT managers to deploy to employees. Both come with 14-inch 2.2K touchscreens that were, at least in my brief time with them during a recent hands-on, bright and colorful.

I didn’t get to explore much of the new Windows 11 experience, since the units available either ran existing software or were locked. I presume, though, that these would have the other Copilot+ PC goodies that Microsoft announced earlier today.

What I can tell you is that I prefer the aesthetic of HP’s older Spectre models. The company’s machines turned heads and caught eyes thanks to their shiny edges and uniquely cut-off corners. I’m a sucker for razor sharp edges and gold or silver finishes, so that line of laptops really called to me.

In contrast, the HP Omnibook X seems plain. It comes in white or silver (the Elitebook is available in blue) and has a uniform thickness along its edges. It’s still thin and light, at 14mm (or about 0.55 inches) and 1.33 kilograms (or 2.93 pounds). But it’s certainly lost a little flavor, and I crave some spice in a device.

That’s not to say the Omnibook is hideous. It’s fine! I actually like the color accents on the keyboard deck. The power button is a different shade of blue depending on the version you get, while the row of function keys is a light shade of gray or blue. Typing on the demo units felt comfortable, too, though I miss the clicky feedback on older Elitebooks and would like a tad more travel on the keyboard.

You might need to invest in a dongle if you want a card reader or have lots of accessories, but the two USB-C sockets and one USB-A port might be enough in a pinch. Thankfully, there’s a headphone jack, too. Like every other Copilot+ PC announced today, the Omnibook and Elitebook are both powered by Qualcomm’s Snapdragon X Elite processor and promise 26 hours of battery life when playing local video. HP says its “next-gen AI PCs” have dedicated NPUs that are “capable of 45 trillion operations per second (TOPS),” which is slightly more than the 40 TOPS Microsoft requires for its Copilot+ PCs.

The company is also distinguishing its own AI PCs by adorning them with a logo that’s the letters “A” and “I” twisted into a sort of DNA helix. You’ll find it on the keyboard deck and the spine of the machine. It’s not big enough to be annoying, though you’ll certainly see it.

If you're already a fan of the HP Omnibook X or Elitebook Ultra, you can pre-order them today. The Omnibook X will start at $1,200 and come with 1 TB of storage, while the Elitebook Ultra starts at $1,700. Both systems will begin shipping on June 18.

Catch up on all the news from Microsoft's Copilot AI and Surface event today!

This article originally appeared on Engadget at https://www.engadget.com/hp-omnibook-x-hands-on-vintage-branding-in-the-new-era-of-ai-180038627.html?src=rss

Apple brings eye-tracking to recent iPhones and iPads

Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but they have broader applications as well. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more.

The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. 

That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem like in Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps from launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware.
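
To illustrate the dwell idea in the abstract, here's a generic sketch of a dwell-to-select loop (not Apple's code; the threshold and element names are assumptions): track which element the gaze is on and fire a selection once it has stayed put past a dwell threshold.

```python
DWELL_SECONDS = 0.8  # assumed threshold; Apple hasn't published its value

class DwellSelector:
    """Fire a selection when gaze stays on the same element long enough."""
    def __init__(self, dwell_seconds=DWELL_SECONDS):
        self.dwell_seconds = dwell_seconds
        self.current_element = None
        self.gaze_started_at = None

    def update(self, element_under_gaze, now):
        """Call on each gaze sample; returns the element to select, or None."""
        if element_under_gaze != self.current_element:
            # Gaze moved to a new element: restart the dwell timer.
            self.current_element = element_under_gaze
            self.gaze_started_at = now
            return None
        if element_under_gaze is not None and now - self.gaze_started_at >= self.dwell_seconds:
            self.gaze_started_at = now  # reset so one long stare doesn't fire repeatedly
            return element_under_gaze
        return None

# Simulated gaze samples (timestamp in seconds, element under gaze):
selector = DwellSelector()
for t, element in [(0.0, "Photos"), (0.2, "Mail"), (0.6, "Mail"), (1.1, "Mail")]:
    selected = selector.update(element, now=t)
    if selected:
        print(f"Selected {selected} via dwell at t={t}s")
```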

Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once set up, you won't have to first ask Siri to be ready. 

Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.

To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.

For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too. 

Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.

Image: A graphic demonstrating Vehicle Motion Cues on an iPhone, showing a drawing of a car with two arrows on either side of its rear. (Apple)

For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion it detects. If the car moves forward or accelerates, the dots will sway backwards as if in reaction to the increase in speed in that direction.
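
Conceptually, you can think of the effect as mapping acceleration to small offsets for the edge dots, so they drift opposite the direction of motion. The sketch below is only a hedged illustration of that idea; the gain, clamp and axis conventions are assumptions, not Apple's tuning.

```python
def motion_cue_offsets(accel_x, accel_y, gain=4.0, max_px=12.0):
    """
    Map device acceleration (m/s^2; forward = +y, right = +x) to pixel offsets
    for the edge dots. Dots sway opposite the acceleration, clamped to max_px.
    """
    def clamp(v):
        return max(-max_px, min(max_px, v))
    return clamp(-accel_x * gain), clamp(-accel_y * gain)

# The car accelerates forward and slightly to the right, so the dots sway back and to the left.
dx, dy = motion_cue_offsets(accel_x=1.5, accel_y=2.0)
print(f"dot offset: x={dx:.1f}px, y={dy:.1f}px")
```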

There are plenty more features coming to the company's suite of products, including Live Captions in VisionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools will be officially released with the next iOS.

This article originally appeared on Engadget at https://www.engadget.com/apple-brings-eye-tracking-to-recent-iphones-and-ipads-140012990.html?src=rss

Google just snuck a pair of AR glasses into a Project Astra demo at I/O

In a video showcasing the prowess of Google's new Project Astra experience at I/O 2024, an unnamed demonstrator asked Gemini, "Do you remember where you saw my glasses?" The AI impressively responded, "Yes, I do. Your glasses were on a desk near a red apple," despite said object not actually being in view when the question was asked. But these glasses weren't your bog-standard assistive vision aid; they had a camera onboard and some sort of visual interface!

The tester picked up their glasses and put them on, and proceeded to ask the AI more questions about things they were looking at. Clearly, there is a camera on the device that's helping it take in the surroundings, and we were shown some sort of interface where a waveform moved to indicate it was listening. Onscreen captions appeared to reflect the answer that was being read aloud to the wearer, as well. So if we're keeping track, that's at least a microphone and speaker onboard too, along with some kind of processor and battery to power the whole thing. 

We only caught a brief glimpse of the wearable, but from the sneaky seconds it was in view, a few things were evident. The glasses had a simple black frame and didn't look at all like Google Glass. They didn't appear very bulky, either. 

In all likelihood, Google is not ready to actually launch a pair of glasses at I/O. It breezed right past the wearable's appearance and barely mentioned them, only to say that Project Astra and the company's vision of "universal agents" could come to devices like our phones or glasses. We don't know much else at the moment, but if you've been mourning Google Glass or the company's other failed wearable products, this might instill some hope yet.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-just-snuck-a-pair-of-ar-glasses-into-a-project-astra-demo-at-io-172824539.html?src=rss

Google’s Project Astra uses your phone’s camera and AI to find noise makers, misplaced items and more.

When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today, at I/O 2024, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra. 

The company couldn't even wait till its keynote today to tease Project Astra, posting a video to its social media of a camera-based AI app yesterday. At its keynote today, though, Google's DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front. 

According to a video that Google showed during a media briefing yesterday, Project Astra appeared to be an app with a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and said, "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded, "I see a speaker, which makes sound."

The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded "That is the tweeter. It produces high-frequency sounds."

Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked "Give me a creative alliteration about these," to which Gemini said "Creative crayons color cheerfully. They certainly craft colorful creations."

The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor, telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and were not previously pointed out. "Yes, I do," Gemini said, adding "Your glasses were on a desk near a red apple."

After Astra located those glasses, the tester put them on and the video shifted to the perspective of what you'd see on the wearable. Using a camera onboard, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said "Adding a cache between the server and database could improve speed."

The tester then looked over to a pair of cats doodled on the board and asked "What does this remind you of?" Astra said "Schrodinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever and asked for "a band name for this duo." Astra dutifully replied "Golden stripes."

This means that not only was Astra processing visual data in real time, it was also remembering what it saw and working with an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall."
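
As a loose sketch of that description (an illustrative data structure, not Google's system; the event fields and recall logic are assumptions), a session timeline might append timestamped frame and speech events to a bounded cache, then answer recall questions by searching backwards through it.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    t: float          # seconds since the session started
    kind: str         # "frame" or "speech"
    description: str  # caption from a vision model, or transcribed speech

class SessionTimeline:
    """Cache a rolling window of frame/speech events for later recall."""
    def __init__(self, max_events=10_000):
        self.events = deque(maxlen=max_events)

    def add(self, t, kind, description):
        self.events.append(Event(t, kind, description))

    def recall(self, keyword):
        """Return the most recent cached event mentioning the keyword, if any."""
        for event in reversed(self.events):
            if keyword.lower() in event.description.lower():
                return event
        return None

timeline = SessionTimeline()
timeline.add(12.0, "frame", "glasses on a desk near a red apple")
timeline.add(15.5, "speech", "tell me when you see something that makes sound")
timeline.add(41.2, "frame", "speaker next to a monitor")

hit = timeline.recall("glasses")
if hit:
    print(f"Yes, I do. I saw {hit.description} at t={hit.t:.0f}s.")
```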

It's also worth noting that, at least in the video, Astra responded quickly. Hassabis noted in a blog post that "While we’ve made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge."

Google has also been working on giving its AI more range of vocal expression, using speech models that have "enhanced how they sound, giving the agents a wider range of intonations." This sort of mimicry of human expressiveness in responses is reminiscent of Duplex's pauses and utterances that led people to think Google's AI might be a candidate for the Turing test.

While Astra remains an early feature with no discernible plans for launch, Hassabis wrote that in the future, these assistants could be available "through your phone or glasses." No word yet on whether those glasses are actually a product or the successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products, like the Gemini app, later this year."

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-project-astra-uses-your-phones-camera-and-ai-to-find-noise-makers-misplaced-items-and-more-172642329.html?src=rss

Ask Google Photos to help make sense of your gallery

Google is inserting more of its Gemini AI into many of its products, and the next target in its sights is Photos. At its I/O developer conference today, the company's CEO Sundar Pichai announced a feature called Ask Photos, which is designed to help you find specific images in your gallery by talking to Gemini.

Ask Photos will show up as a new tab at the bottom of your Google Photos app. It'll start rolling out to One subscribers first, starting in US English over the upcoming months. When you tap over to that panel, you'll see the Gemini star icon and a welcome message above a bar that prompts you to "search or ask about Photos."

According to Google, you can ask things like "show me the best photo from each national park I've visited," which not only draws upon GPS information but also requires the AI to exercise some judgement in determining what is "best." The company's VP for Photos Shimrit Ben-Yair told Engadget that you'll be able to provide feedback to the AI and let it know which pictures you preferred instead. "Learning is key," Ben-Yair said.

You can also ask Photos to find your top photos from a recent vacation and generate a caption to describe them so you can more quickly share them to social media. Again, if you didn't like what Gemini suggested, you can also make tweaks later on.

For now, you'll have to type your query to Ask Photos — voice input isn't yet supported. And as the feature rolls out, those who opt in to use it will see their existing search feature get "upgraded" to Ask. However, Google said that "key search functionality, like quick access to your face groups or the map view, won't be lost."

The company explained that there are three parts to the Ask Photos process: "Understanding your question," "crafting a response" and "ensuring safety and remembering corrections." Though safety is only mentioned in the final stage, it should be baked in the entire time. The company acknowledged that "the information in your photos can be deeply personal, and we take the responsibility of protecting it very seriously."

To that end, queries are not stored anywhere, though they are processed in the cloud (not on device). People will not review conversations or personal data in Ask Photos, except "in rare cases to address abuse or harm." Google also said it doesn't train "any generative AI product outside of Google Photos on this personal data, including other Gemini models and products."

Your media continues to be protected by the same security and privacy measures that cover your use of Google Photos. That's a good thing, since one of the potentially more helpful ways to use Ask Photos might be to get information like passport or license expiry dates from pictures you might have snapped years ago. It uses Gemini's multimodal capabilities to read text in images to find answers, too.

Of course, AI isn't new in Google Photos. You've always been able to search the app for things like "credit card" or a specific friend, using the company's facial and object recognition algorithms. But Gemini AI brings generative processing so Photos can do a lot more than just deliver pictures with certain people or items in them.

Other applications include getting Photos to tell you what themes you might have used for the last few birthday parties you threw for your partner or child. Gemini AI is at work here to study your pictures and figure out what themes you already adopted.

There are a lot of promising use cases for Ask Photos, which is an experimental feature at the moment that is "starting to roll out soon." Like other Photos tools, it might begin as a premium feature for One subscribers and Pixel owners before trickling down to all who use the free app. There's no official word yet on when or whether that might happen, though.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/ask-google-photos-to-get-help-making-sense-of-your-gallery-170734062.html?src=rss

Rabbit R1 hands-on: Already more fun and accessible than the Humane AI Pin

At CES this January, startup Rabbit unveiled its first device, just in time for the end of the year of the rabbit according to the lunar calendar. It’s a cute little orange square that was positioned as a “pocket companion that moves AI from words to action.” In other words, it’s basically a dedicated AI machine that acts kind of like a walkie talkie to a virtual assistant.

Sound familiar? You’re probably thinking of the Humane AI Pin, which was announced last year and started shipping this month. I awarded it a score of 50 (out of 100) earlier this month, while outlets like Wired and The Verge gave it similarly low marks of 4 out of 10.

The people at Rabbit have been paying close attention to the aftermath of the Humane AI Pin launch and reviews. It was evident in founder and CEO Jesse Lyu's address at an unboxing event at the TWA hotel in New York last night, where the company showed off the Rabbit R1 and eager early adopters listened rapturously before picking up their pre-orders. Engadget's sample unit is on its way to Devindra Hardawar, who will be tackling this review. But I was in attendance last night to check out units at the event that industry peers were unboxing (thanks to Max Weinbach for the assistance!).

As a refresher, the Rabbit R1 is a bright orange square, co-engineered by Teenage Engineering and Rabbit. It has a 2.88-inch color display built in, an 8-megapixel camera that can face both ways and a scroll wheel reminiscent of the crank on the Playdate. The latter, by the way, is a compact gaming handheld that was also designed by Teenage Engineering, and the Rabbit R1 shares its adorable retro aesthetic. Again, like the Humane AI Pin, the Rabbit R1 is supposed to be your portal to an AI-powered assistant and operating system. However, there are a few key differences, which Lyu covered extensively at the launch event last night.

Let's get this out of the way: The Rabbit R1 already looks a lot more appealing than the Humane AI Pin. First of all, it costs $199 — less than a third of the AI Pin's $700. Humane also requires a monthly $24 subscription fee or its device will be rendered basically useless. Rabbit, as Lyu repeatedly reiterated all night, does not require such a fee. You'll just be responsible for your own cellular service (4G LTE only, no 5G), and can bring your own SIM card or just default to good old Wi-Fi. The SIM slot sits next to the R1's USB-C charging port.

The R1's advantages over the Pin don't end there. By virtue of its integrated screen (instead of a wonky, albeit intriguing projector), the orange square is more versatile and a lot easier to interact with. You can use the wheel to scroll through elements and press the button on the right side to confirm a choice. You could also tap the screen or push down a button to start talking to the software.

Now, I haven’t taken a photo with the device myself, but I was pleasantly surprised by the quality of images I saw on its screen. Maybe my expectations were pretty low, but when reviewers in a media room were setting up their devices by using the onboard cameras to scan QR codes, I found the images on the screens clear and impressively vibrant. Users won’t just be capturing photos, videos and QR codes with the Rabbit R1, by the way. It also has a Vision feature like the Humane AI Pin that will analyze an image you take and tell you what’s in it. In Lyu’s demo, the R1 told him that it saw a crowd of people at “an event or concert venue.”

Image: A Rabbit R1 unit on top of a table, with a USB-C cable plugged into its left edge. (Cherlynn Low for Engadget)

We’ll have to wait till Devindra actually takes some pictures with our R1 unit and downloads them from the web-based portal that Rabbit cleverly calls the Rabbit Hole. Its name for camera-based features is Rabbit Eye, which is just kind of delightful. In fact, another thing that distinguishes Rabbit from Humane is the former’s personality. The R1 just oozes character. From the witty feature names to the retro aesthetic to the onscreen animation and the fact that the AI will actually make (cheesy) jokes, Rabbit and Teenage Engineering have developed something that’s got a lot more flavor than Humane’s almost clinical appearance and approach.

Of all the things Lyu took shots at Humane about last night, though, talk of the R1’s thermal performance or the AI Pin’s heat issues was conspicuously absent. To be clear, the R1 is slightly bigger than the Humane device, and it uses an octa-core MediaTek MT6765 processor, compared to the AI Pin’s Snapdragon chip. There’s no indication at the moment that the Rabbit device will run as hot as Humane’s Pin, but I’ve been burned (metaphorically) before and remain cautious.

I am also slightly concerned about the R1’s glossy plastic build. It looks nice and feels lighter than expected, weighing just 115 grams or about a quarter of a pound. The scroll wheel moved smoothly when I pushed it up and down, and there were no physical grooves or notches, unlike the rotating hinge on Samsung’s Galaxy watches. The camera housing lay flush with the rest of the R1’s case, and in general the unit felt refined and finished.

Most of my other impressions of the Rabbit R1 come from Lyu’s onstage demos, where I was surprised by how quickly his device responded to his queries. He was able to type on the R1’s screen and tilted it so that the controls sat below the display instead of to its right. That way, there was enough room for an onscreen keyboard that Lyu said was the same width as the one on the original iPhone.

Rabbit also drew attention for its so-called Large Action Model (LAM), which acts as an interpreter to convert popular apps like Spotify or Doordash into interfaces that work on the R1’s simple-looking operating system. Lyu also showed off some of these at the event last night, but I’d much rather wait for us to test these out for ourselves.

Lyu made many promises to the audience, seeming to acknowledge that the R1 might not be fully featured when it arrives in their hands. Even on the company’s website, there’s a list of features that are planned, in the works or being explored. For one thing, an alarm is coming this summer, along with a calendar, contacts app, GPS support, memory recall and more. Throughout his speech, Lyu repeated the phrase “we’re gonna work on” amid veiled references to Humane (for instance, emphasizing that Rabbit doesn’t require an additional subscription fee). Ultimately, Lyu said “we just keep adding value to this thing,” in reference to a roadmap of upcoming features.

Hopefully, Lyu and his team are able to deliver on the promises they’ve made. I’m already very intrigued by a “teach mode” he teased, which is basically a way to generate macros by recording an action on the R1, and letting it learn what you want to do when you tell it something. Rabbit’s approach certainly seems more tailored to tinkerers and enthusiasts, whereas Humane’s is ambitious and yet closed off. This feels like Google and Apple all over again, except whether the AI device race will ever reach the same scale remains to be seen.

Last night’s event also made it clear what Rabbit wants us to think. It was hosted at the TWA hotel, which itself used to be the head house of the TWA Flight Center. The entire place is an homage to retro vibes, and the entry to Rabbit’s event was lined with display cases containing gadgets like a Pokedex, a Sony Watchman, a Motorola pager, Game Boy Color and more. Every glass box I walked by made me squeal, bringing up a pleasant sense memory that also resurfaced when I played with the R1. It didn't feel good in that it's premium or durable; it felt good because it reminded me of my childhood.

Whether Rabbit is successful with the R1 depends on how you define success. The company has already sold more than 100,000 units this quarter and looks poised to sell at least one more (I’m already whipping out my credit card). I remain skeptical about the usefulness of AI devices, but, in large part due to its price and ability to work with third-party apps at launch, Rabbit has already succeeded in making me feel like Alice entering Wonderland.

This article originally appeared on Engadget at https://www.engadget.com/rabbit-r1-hands-on-already-more-fun-and-accessible-than-the-humane-ai-pin-163622560.html?src=rss