The best sous vide machines for 2024

If you're looking to elevate your cooking, a sous vide machine might be the perfect addition to your toolkit. Previously, these gadgets were almost exclusively used by high-end restaurants. But more recently, prices have come down to the point where they're relatively affordable additions to your kitchen. These devices make preparing perfectly cooked steaks a breeze while taking all the guesswork and hassle out of dishes like pulled pork or brisket. And it’s not just for meat either, as a sous vide machine can make easy work out of soft-boiled eggs, homemade yogurt or fish. And while some may say you need a lot of accessories like vacuum sealers or special bags to get the best results, starting with the right appliance will get you 90 percent of the way. So to help you figure out which sous vide machine is right for you, we’ve assembled a list of our favorite gadgets on sale right now.

While they might have a fancy name, the main things we look for in a quality sous vide device are quite straightforward: ease of use, reliability and a good design. It should be easy to clean and have clear, no-nonsense controls. It should also have some way of attaching to a tank or pot, whether by magnet or adjustable clamp, so it doesn’t become dislodged during use. And most importantly, it should have a strong heating element and motor that can deliver consistent water temperatures to ensure your food hits the correct level of doneness every time without overcooking.
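To make that last point concrete, here's a minimal sketch of the kind of temperature control loop an immersion circulator relies on. The simple hysteresis ("bang-bang") logic, toy physics and thresholds below are our own illustrative assumptions; real machines typically use finer-grained PID control.

```python
import random

TARGET_C = 54.5   # a common medium-rare steak setpoint
BAND = 0.2        # allowed drift before the heater toggles

def run(minutes: int = 15) -> None:
    temp, heater = 20.0, False   # start from room-temperature water
    for _ in range(minutes * 60):
        # Toy physics: the heater adds heat, the bath slowly loses it.
        temp += 0.05 if heater else -0.01
        temp += random.uniform(-0.005, 0.005)   # sensor/ambient noise
        # Hysteresis control: heat when below the band, coast when above.
        if temp < TARGET_C - BAND:
            heater = True
        elif temp > TARGET_C + BAND:
            heater = False
    print(f"Bath temperature after {minutes} minutes: {temp:.2f}°C")

run()
```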

This article originally appeared on Engadget at https://www.engadget.com/best-sous-vide-133025288.html?src=rss

What to expect from Microsoft Build 2024: The Surface event, Windows 11 and AI

If you can't tell by now, just about every tech company is eager to pray at the altar of AI, for better or worse. Google's recent I/O developer conference was dominated by AI features, like its seemingly life-like Project Astra assistant. Just before that, OpenAI debuted GPT-4o, a free and conversational AI model that's disturbingly flirty. Next up is Microsoft Build 2024, the company's developer conference that's kicking off next week in Seattle.

Normally, Build is a fairly straightforward celebration of Microsoft's devotion to productivity, with a dash of on-stage coding to excite the developer crowd. But this year, the company is gearing up to make some more huge AI moves, following its debut of the ChatGPT-powered Bing Chat in early 2023. Take that together with rumors around new Surface hardware, and Build 2024 could potentially be one of the most important events Microsoft has ever held.

But prior to Build, Microsoft is hosting a showcase for new Surfaces and AI in Windows 11 on May 20. (It won't be livestreamed, but Engadget will be liveblogging the Surface event starting at 1PM ET.) Build kicks off a day later on May 21 (you can watch the Build event livestream on Engadget). For the average Joe, the Surface event is shaping up to be the more impactful of the two, as rumors suggest we will see some of the first systems featuring Qualcomm’s Arm-based Snapdragon X Elite chip alongside new features coming in the next major Windows 11 update.

That's not to say it's all rosy for the Windows maker. Build 2024 is the point where we'll see if AI will make or break Microsoft. Will the billions in funding towards OpenAI and Copilot projects actually pay off with useful tools for consumers? Or is the push for AI, and the fabled idea of "artificial general intelligence," inherently foolhardy as it makes computers more opaque and potentially untrustworthy? (How, exactly, do generative AI models come up with their answers? It's not always clear.)

Here are a few things we expect to see at Build 2024:

While Microsoft did push out updates to the Surface family earlier this spring, those machines were meant more for enterprise customers, so they aren’t available for purchase in regular retail stores. A Microsoft spokesperson told us at the time that it "absolutely remain[s] committed to consumer devices," and that the commercial-focused announcement was "only the first part of this effort."

Instead, the company's upcoming refresh for its consumer PCs is expected to consist of new 13- and 15-inch Surface Laptop 6 models with thinner bezels, larger trackpads, improved port selection and the aforementioned X Elite chip. There’s a good chance that at the May 20th showcase, we’ll also see an Arm-based version of the Surface Pro 10, which will sport a similar design to the business model that came out in March, but with revamped accessories including a Type Cover with a dedicated Copilot key.

According to The Verge, Microsoft is confident that these new systems could outmatch Apple's M3-powered MacBook Air in raw speed and AI performance.

The company has also reportedly revamped emulation for x86 software in its Arm-based version of Windows 11. That's a good thing, since poor emulation was one of the main reasons we hated the Surface Pro 9 5G, a confounding system powered by Microsoft's SQ3 Arm chip. That mobile processor was based on Qualcomm's Snapdragon 8cx Gen 3, which was unproven in laptops at the time. Using the Surface Pro 9 5G was so frustrating we felt genuinely offended that Microsoft was selling it as a "Pro" device. So you can be sure we're skeptical about any amazing performance gains from another batch of Qualcomm Arm chips.

It'll also be interesting to see if Microsoft's new consumer devices look any different than their enterprise counterparts, which were basically just chip swaps inside of the cases from the Surface Pro 9 and Laptop 5. If Microsoft is actually betting on mobile chips for its consumer Surfaces, there's room for a complete rethinking of its designs, just like how Apple refashioned its entire laptop lineup around its M-series chips.

Aside from updated hardware, one of the biggest upgrades on these new Surfaces should be vastly improved on-device AI and machine learning performance thanks to the Snapdragon X Elite chip, which can deliver up to 45 TOPS (trillions of operations per second) from its neural processing unit (NPU). This is key because Microsoft has previously said PCs will need at least 40 TOPS in order to run Windows AI features locally. This leads us to some of the additions coming in the next major build of Microsoft’s OS, including something the company is calling its AI Explorer, expanded Studio effects and more.
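As a rough illustration of how that threshold works as a feature gate (the names below are hypothetical, not a real Windows API), the logic boils down to a simple capability check:

```python
# Microsoft's stated floor for running Windows AI features on-device.
REQUIRED_NPU_TOPS = 40.0

def supports_local_ai(npu_tops: float) -> bool:
    """Gate on-device AI features on the NPU's rated throughput."""
    return npu_tops >= REQUIRED_NPU_TOPS

for chip, tops in [("Snapdragon X Elite", 45.0), ("older mobile NPU", 15.0)]:
    verdict = "local AI features" if supports_local_ai(tops) else "cloud fallback"
    print(f"{chip}: {tops:.0f} TOPS -> {verdict}")
```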

According to Windows Central, AI Explorer is going to be Microsoft’s catch-all term covering a range of machine learning-based features. This is expected to include a revamped search tool that lets users look up everything from websites to files using natural language input. There may also be a new timeline that will allow people to scroll back through anything they've done recently on their computer and the addition of contextual suggestions that appear based on whatever they're currently looking at. And building off of some of the Copilot features we’ve seen previously, it seems Microsoft is planning to add support for tools like live captions, expanded Studio effects (including real-time filters) and local generative AI tools that can help create photos and more on the spot.

Microsoft wants an AI Copilot in everything. The company first launched GitHub Copilot in 2021 as a way to let programmers use AI to deal with mundane coding tasks. At this point, all of the company's other AI tools have also been rebranded as "Microsoft Copilot" (that includes Bing Chat, and Microsoft 365 Copilot for productivity apps). With Copilot Pro, a $20 monthly offering launched earlier this year, the company provides access to the latest GPT models from OpenAI, along with other premium features.

But there's still one downside to all of Microsoft's Copilot tools: They require an internet connection. Very little work is actually happening locally, on your device. That could change soon, though, as Intel confirmed that Microsoft is already working on ways to make Copilot local. That means it may be able to answer simpler questions, like basic math or queries about files on your system, more quickly without hitting the internet at all. As impressive as Microsoft's AI assistant can be, it still typically takes a few seconds to deal with your questions.
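Intel hasn't said how a local Copilot would be built, but the general pattern is a local-first router: try cheap on-device handlers and only fall back to the network for harder queries. Here's a hedged sketch of that idea with entirely hypothetical handlers:

```python
import re

def answer_locally(query: str) -> str | None:
    """Try cheap on-device handlers first; return None if none apply."""
    # Handle basic arithmetic like "what is 12 * 7" without a round trip.
    m = re.fullmatch(r"what is ([\d\s+\-*/().]+)", query.strip().lower())
    if m:
        # eval() is fine for a sketch; never eval untrusted input in real code.
        return str(eval(m.group(1)))
    return None

def copilot(query: str) -> str:
    local = answer_locally(query)
    if local is not None:
        return f"(on-device) {local}"
    return "(cloud) ...round trip to the service..."   # placeholder network path

print(copilot("What is 12 * 7"))             # answered instantly, offline
print(copilot("Summarize my meeting notes")) # still needs the cloud
```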

After all the new hardware and software are announced, Build is positioned to help developers lay even more groundwork to better support those new AI and expanded Copilot features. Microsoft has already teased things like Copilot on Edge and Copilot Plugins for 365 apps, so we’re expecting to hear more on how those will work. And by taking a look at some of the sessions already scheduled for Build, we can see there’s a massive focus on everything AI-related, with breakouts for Customizing Microsoft Copilot, Copilot in Teams, Copilot Extensions and more.

While Microsoft will surely draw a lot of attention, it’s important to mention that it won’t be the only manufacturer coming out with new AI PCs. Alongside revamped Surfaces, we’re expecting to see a whole host of laptops featuring Qualcomm’s Snapdragon X Elite chip (or possibly the X Plus) from other major vendors like Dell, Lenovo and more.

Admittedly, following the intense focus Google put on AI at I/O 2024, the last thing people may want to hear about is yet more AI. But at this point, like most of its rivals, Microsoft is betting big on machine learning to grow and expand the capabilities of Windows PCs.

This article originally appeared on Engadget at https://www.engadget.com/what-to-expect-from-microsoft-build-2024-the-surface-event-windows-11-and-ai-182010326.html?src=rss

Google Project Astra hands-on: Full of potential, but it’s going to be a while

At I/O 2024, Google’s teaser for Project Astra gave us a glimpse at where AI assistants are going in the future. It’s a multi-modal feature that combines the smarts of Gemini with the kind of image recognition abilities you get in Google Lens, as well as powerful natural language responses. However, while the promo video was slick, after getting to try it out in person, it's clear there’s a long way to go before something like Astra lands on your phone. So here are our takeaways from our first experience with Google’s next-gen AI.

Sam’s take:

Currently, most people interact with digital assistants using their voice, so right away Astra’s multi-modality (i.e. using sight and sound in addition to text/speech) to communicate with an AI is relatively novel. In theory, it allows computer-based entities to work and behave more like a real assistant or agent – which was one of Google’s big buzzwords for the show – instead of something more robotic that simply responds to spoken commands.

The first Project Astra demo we tried used a large touchscreen connected to a downward-facing camera.
Photo by Sam Rutherford/Engadget

In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its trusty baguette trying to escape an ominous red light. It was fun, the tale was cute, and the AI worked about as well as you would expect. But at the same time, it was far from the seemingly all-knowing assistant we saw in Google's teaser. And aside from maybe entertaining a child with an original bedtime story, it didn’t feel like Astra was doing as much with the info as you might want.

Then my colleague Karissa drew a bucolic scene on a touchscreen, at which point Astra correctly identified the flower and sun she painted. But the most engaging demo was when we circled back for a second go with Astra running on a Pixel 8 Pro. This allowed us to point its cameras at a collection of objects while it tracked and remembered each one’s location. It was even smart enough to recognize my clothing and where I had stashed my sunglasses even though these objects were not originally part of the demo.

In some ways, our experience highlighted the potential highs and lows of AI. Just the ability for a digital assistant to tell you where you might have left your keys or how many apples were in your fruit bowl before you left for the grocery store could help you save some real time. But after talking to some of the researchers behind Astra, there are still a lot of hurdles to overcome.

An AI-generated story about a dinosaur and a baguette created by Google's Project Astra
Photo by Sam Rutherford/Engadget

Unlike a lot of Google’s recent AI features, Astra (which is described by Google as a “research preview”) still needs help from the cloud instead of being able to run on-device. And while it does support some level of object permanence, those “memories” only last for a single session, which currently only spans a few minutes. And even if Astra could remember things for longer, there are things like storage and latency to consider, because for every object Astra recalls, you risk slowing down the AI, resulting in a more stilted experience. So while it’s clear Astra has a lot of potential, my excitement was weighed down with the knowledge that it will be some time before we get more fully featured functionality.
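To see why recall, storage and latency trade off against each other, here's a toy model of Astra-style session memory. Everything here (the TTL, the item cap, the API) is our own assumption for illustration:

```python
import time
from collections import OrderedDict

class SessionMemory:
    """Toy object-permanence store: remember where things were last seen,
    but only within a short session window and up to a fixed item budget."""

    def __init__(self, ttl_seconds: float = 180.0, max_items: int = 32):
        self.ttl = ttl_seconds       # "memories" expire with the session
        self.max_items = max_items   # more recall means more storage/latency
        self._seen: OrderedDict[str, tuple[str, float]] = OrderedDict()

    def observe(self, obj: str, location: str) -> None:
        self._seen[obj] = (location, time.monotonic())
        self._seen.move_to_end(obj)
        if len(self._seen) > self.max_items:
            self._seen.popitem(last=False)   # evict the oldest sighting

    def recall(self, obj: str) -> str | None:
        entry = self._seen.get(obj)
        if entry is None or time.monotonic() - entry[1] > self.ttl:
            return None                      # outside the session window
        return entry[0]

memory = SessionMemory()
memory.observe("sunglasses", "on the desk, next to the red apple")
print(memory.recall("sunglasses"))   # -> "on the desk, next to the red apple"
```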

Karissa’s take:

Of all the generative AI advancements, multimodal AI has been the one I’m most intrigued by. As powerful as the latest models are, I have a hard time getting excited for iterative updates to text-based chatbots. But the idea of AI that can recognize and respond to queries about your surroundings in real-time feels like something out of a sci-fi movie. It also gives a much clearer sense of how the latest wave of AI advancements will find their way into new devices like smart glasses.

Google offered a hint of that with Project Astra, which may one day have a glasses component, but for now is mostly experimental (the glasses shown in the demo video during the I/O keynote were apparently a “research prototype”). In person, though, Project Astra didn’t exactly feel like something out of a sci-fi flick.

During a demo at Google I/O, Project Astra was able to remember the position of objects seen by a phone's camera.
Photo by Sam Rutherford/Engadget

It was able to accurately recognize objects that had been placed around the room and respond to nuanced questions about them, like “which of these toys should a 2-year-old play with?” It could recognize what was in my doodle and make up stories about different toys we showed it.

But most of Astra’s capabilities seemed on par with what Meta has already made available with its smart glasses. Meta’s multimodal AI can also recognize your surroundings and do a bit of creative writing on your behalf. And while Meta also bills the features as experimental, they are at least broadly available.

The Astra feature that may set Google’s approach apart is the fact that it has a built-in “memory.” After scanning a bunch of objects, it could still “remember” where specific items were placed. For now, it seems Astra’s memory is limited to a relatively short window of time, but members of the research team told us that it could theoretically be expanded. That would obviously open up even more possibilities for the tech, making Astra seem more like an actual assistant. I don’t need to know where I left my glasses 30 seconds ago, but if it could remember where I left them last night, that would actually feel like sci-fi come to life.

But, like so much of generative AI, the most exciting possibilities are the ones that haven’t quite happened yet. Astra might get there eventually, but right now it feels like Google still has a lot of work to do.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-project-astra-hands-on-full-of-potential-but-its-going-to-be-a-while-235607743.html?src=rss

Google Pixel 8a review: The best midrange Android phone gets flagship AI features

The recipe for Google’s A-series Pixels is incredibly straightforward: Combine top-notch cameras with a vivid display and then cram all that in a tried and tested design for a reasonable price. But with the addition of a Tensor G3 chip, the Pixel 8a now supports the same powerful AI features as Google’s flagship phones. So when you consider that all this comes for just $499, you’re looking at not just the top midrange Android handset on the market but possibly one of the best values of any phone on sale today.

Aside from a new aloe color option – which in my opinion is the best of the bunch – the Pixel 8a is nearly identical to the standard Pixel 8. However, there are a few subtle differences that become more noticeable when the two are viewed side-by-side. The most obvious change is the slightly larger bezels, which also have an impact on the Pixel 8a’s screen size. Instead of a 6.2-inch display like on its pricier sibling, the Pixel 8a tops out at 6.1 inches. That said, you still get a vibrant OLED panel that produces deep blacks and rich colors, plus a slightly faster 120Hz refresh rate compared to the 90Hz on last year’s Pixel 7a.

The phone’s frame is still made out of aluminum, which feels great, while the metal camera bar in the back is actually a millimeter or two thinner, resulting in an ever so slightly sleeker device. Google also switched out the Pixel 8’s rear glass panel for plastic. But thanks to a new matte finish that’s supposed to mimic the texture of cashmere, it definitely doesn’t feel cheap. And while its IP67 rating for dust and water resistance is one step down from what’s on the mainline Pixel 8, that’s still enough to withstand dunks of up to 1 meter for 30 minutes. Not bad.

One of the biggest knocks against Google’s Tensor chips is that they don’t offer the same level of raw performance you get from rival Apple or Qualcomm silicon. And while that’s still true of the G3, when we’re talking about it powering a phone that costs $499, I’m much less bothered. In normal use, the Pixel 8a feels swift and snappy, and even when gaming, titles like Marvel Snap and TMNT: Shredder’s Revenge looked smooth. The only time I noticed significant hiccups or lag was when playing more demanding shooters like Call of Duty: Mobile.

While both sport very similar designs, the Pixel 8a (left) has a slightly smaller 6.1-inch screen with larger bezels than the standard Pixel 8 (right).
Photo by Sam Rutherford/Engadget

Of course, the other part of the performance equation is all the on-device AI features that the Tensor G3 unlocks such as Audio Magic Eraser, Best Take and the Magic Editor, which you can use as much as you want instead of the 10-picture cap that free users are subject to in Google Photos.

The Pixel 8a features the same 64MP main and 13MP ultra-wide sensors used in last year’s Pixel 7a. But that’s OK, because Google’s affordable phones punch way above their weight. So instead of comparing it with a similarly priced rival, I decided to really challenge the Pixel 8a by putting it up against the Samsung Galaxy S24 Ultra. And even then, it still largely kept up.

In bright light, I’d argue the Pixel 8a might be the superior shooter, as it captured more accurate colors and excellent details compared to the warmer tones and often oversaturated hues from Samsung. This was especially noticeable when shooting a single yellow rose. The S24 Ultra made the middle of the flower appear orange and super contrasty, which looks great in a vacuum but doesn’t reflect what I saw in real life.

However, at night the S24 Ultra’s massive 200MP main sensor pulled back in front, producing images that were generally sharper and more well-exposed. That said, thanks to Google’s powerful Night Sight mode, the Pixel 8a wasn’t far behind, an impressive feat for a phone that costs $800 less.

Finally, while the Pixel 8a doesn’t have any other hardware tricks besides a solid 13MP selfie cam, Google’s AI is here to take your photos even further. Best Take allows you to capture multiple group shots and then swap in people’s reactions from various options. It’s easy to use and lets you create a composite where everyone is smiling, which feels like a win-win scenario. Then there’s the Magic Editor, a fun and powerful way to eliminate distracting elements or move subjects around as you please. It’s the kind of thing you might not use every day, but now and then it will salvage a shot you might have otherwise deleted. So even if you don’t care about AI or how it works, Google is finding a way to add value with machine learning.
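Google hasn't published how Best Take works under the hood, but the core idea, picking each person's best expression across a burst of frames and compositing them into one image, can be sketched in a few lines (the scores and structure below are entirely hypothetical):

```python
# Per-frame, per-person "expression scores" a face pipeline might produce.
frames = {
    "frame_1": {"alice": 0.91, "bob": 0.34},
    "frame_2": {"alice": 0.42, "bob": 0.88},
    "frame_3": {"alice": 0.67, "bob": 0.71},
}

# Pick the best-scoring frame for each person; a real pipeline would then
# blend those face regions into a single composite photo.
best: dict[str, tuple[str, float]] = {}
for frame, faces in frames.items():
    for person, score in faces.items():
        if score > best.get(person, ("", -1.0))[1]:
            best[person] = (frame, score)

print(best)   # {'alice': ('frame_1', 0.91), 'bob': ('frame_2', 0.88)}
```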

The Pixel 8a supports up to 18-watt wired charging but drops down to just 7.5 watts when using a Qi wireless pad.
Photo by Sam Rutherford/Engadget

While the Pixel 8a’s 4,492 mAh battery is a touch smaller than what you get on the standard model (4,575 mAh), it actually boasts slightly better battery life, possibly due to its more petite screen. On our video rundown test, the 8a lasted a solid 20 hours and 29 minutes, barely beating the regular Pixel 8’s time of 20:16.

Meanwhile, when it comes to recharging, both wired and Qi wireless speeds have stayed the same. This means you get up to 18 watts when using a cable, but a rather lethargic rate of 7.5 watts if you slap it on an induction pad. That might not be a big deal if you only use wireless charging overnight or to conveniently top up the phone while you’re doing something else. But if you need some juice in a jiffy, you better grab a cord.
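For a rough sense of what those wattages mean in practice, here's a back-of-the-envelope estimate. It assumes a nominal cell voltage of 3.85 V and ignores charging losses and the taper near full, so real-world times will run longer:

```python
CAPACITY_MAH, NOMINAL_V = 4_492, 3.85          # Pixel 8a battery, assumed voltage
energy_wh = CAPACITY_MAH / 1000 * NOMINAL_V    # ~17.3 Wh of stored energy

for label, watts in [("wired (18 W)", 18.0), ("Qi wireless (7.5 W)", 7.5)]:
    hours = energy_wh / watts
    print(f"{label}: ~{hours:.1f} h for an idealized 0-100% charge")
```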

Google isn’t breaking new ground with the Pixel 8a. But the simple formula of class-leading cameras, a great display, strong battery life and a slick design will never go out of style – especially when you get all this for just $499. And with the addition of AI features that were previously only available on Google’s flagship phones, the Pixel 8a is a midrange smartphone that really is smarter than all of its rivals. To top everything off, there’s a configuration with 256GB of storage for the first time on any A-series handset (though only on the Obsidian model), plus even better support with a whopping seven years of Android and security updates.

The new Aloe color for the Pixel 8a is the best-looking of the bunch.
Photo by Sam Rutherford/Engadget

The one wrinkle to this is that the deciding factor comes down to how much its siblings cost. If you go by their default pricing, the $499 Pixel 8a offers incredible savings compared to the standard $799 Pixel 8. However, prior to the 8a’s announcement, we saw deals that brought the Pixel 8 down to as low as $549, at which point you might as well spend an extra $50 to get the full flagship experience.

But for those who don’t feel like waiting for a discount or might not care about details like slower wireless charging speeds, in addition to being the best midrange Android phone, the Pixel 8a is just a damn good deal.

This article originally appeared on Engadget at https://www.engadget.com/google-pixel-8a-review-the-best-midrange-android-phone-gets-flagship-ai-features-140046032.html?src=rss

Alienware m16 R2 review: When less power makes for a better laptop

The Alienware m16 R2 is a rarity among modern laptops. That’s because normally after a major revamp, gadget makers like to keep new models on the market for as long as possible to minimize manufacturing costs. However, after its predecessor launched last year sporting a fresh design, the company reengineered the entire system again for 2024 while also limiting how big of a GPU can fit inside. So what gives? The trick is that by looking at the configurations people actually bought, Alienware was able to rework the m16 into a gaming laptop with a sleeker design, better battery life and a more approachable starting price, which is a great recipe for a well-balanced notebook.

There are so many changes on the m16 R2’s chassis it’s hard to believe it’s from the same line. Not only has Alienware gotten rid of the big bezels and chin from the R1, but the machine is also way more portable now. Weight is down more than 20 percent to 5.75 pounds (from 7.28 pounds) and it’s also significantly more compact with a depth of 9.8 inches (versus 11.4 inches before). For some style points, Alienware added RGB lighting around the perimeter of the touchpad. The result is a major upgrade for anyone who wants to take the laptop on the go. It fundamentally changes the system from something more like a desktop replacement to a portable all-rounder.

Critically, despite being smaller, the m16 R2 still has a great array of connectivity options. On its sides are two USB 3.2 Type-A ports, a microSD card reader, an Ethernet jack and a 3.5mm audio socket. Around back, there are two USB-C slots (one supports Thunderbolt 4 while the other has DisplayPort 1.4), a full-size HDMI 2.1 connector and a proprietary barrel plug for power. Generally, I like this arrangement as moving some ports to the rear of the laptop helps keep clutter down. That said, I wish Alienware had switched the placement of the Ethernet jack and one of the USB-C ports, as I find myself reaching for the latter much more often.

While it doesn't have support for HDR, the 16-inch display on the Alienware m16 R2 does have a speedy 240Hz refresh rate.
Photo by Sam Rutherford/Engadget

The m16 R2 has a single display option: a 16-inch 240Hz panel with a QHD+ resolution (2,560 x 1,600). It’s totally serviceable, and for competitive gamers, that high refresh rate could be valuable during matches where every potential advantage matters. But you don’t get any support for HDR, so colors don’t pop as much as they would on a system with an OLED screen. Furthermore, brightness is just OK at around 300 nits, which might not be a big deal if you prefer gaming at night or in darker environments. But if you plan on lugging this around to a place with big windows or a lot of sunlight, games and movies may look a bit subdued. That said, it’s not a deal breaker; I just wish this model had other display options like the previous one did.

While the m16 R2’s sleeker design is a major plus, the trade-off is less space for a beefy GPU. So unlike its predecessor, the biggest card that fits is an NVIDIA RTX 4070. This may come as a downer for performance enthusiasts, but Alienware said it made this change after seeing only a small fraction of buyers opt for RTX 4080 graphics on the old model. Even so, the R2 can still hold its own when playing AAA titles. In Cyberpunk 2077 at 1080p and ultra graphics, it hit 94 fps, barely behind what we saw from the ASUS ROG G16 (95 fps) with a more powerful 4080. And while the performance gap grew slightly when I turned ray tracing on, the m16 still pumped out a very playable framerate of 62 fps (versus 69 fps for the G16).

One of the biggest benefits of the m16 R2’s redesign is that it allowed Alienware to install a larger 90Wh battery versus the 84Wh pack in its predecessor. When you combine that with components and fans better tailored to the kind of performance this machine delivers, you get improved longevity. On our rundown test, the m16 R2 lasted 7 hours and 51 minutes, which is longer than both the Razer Blade 14 (6:46) and the ASUS ROG Zephyrus G14 (7:29) and just shy of what we got from a similarly specced XPS 16 (8:31). That said, it’s still not as good as the ASUS G16’s time of 9:17. Regardless, the ability to go longer between charges is never a bad thing. Meanwhile, for those who want to pack super light, one of the m16 R2’s USB-C ports in the back supports power input, though you won’t get the full 240 watts like you do with Alienware’s included brick.

As always, the m16 R2 has a light-up version of Alienware's iconic logo on its lid.
Photo by Sam Rutherford/Engadget

For 2024, it would have been so easy for Alienware to give the m16 a basic spec refresh and call it a day. But it didn’t. Instead, the company looked at its customers' preferences and gave it a revamp to match. So despite not having the same top-end performance as before, the R2 is still a very capable gaming laptop with a more compact chassis, improved battery life and a lower starting price of $1,500 with an RTX 4050. Sure, I wish its display was brighter and that there was another panel option, but getting 240Hz standard is pretty nice.

Really, the biggest argument against the m16 R2 is that for higher-specced systems like our $1,850 review unit with an RTX 4070, you can spend another $150 for an ASUS ROG G16 with the same GPU, a brighter and more colorful OLED display and an even lighter design that weighs a full pound less. But for people seeking a well-priced gaming machine that can do a bit of everything, there’s a lot of value in the m16 R2.

This article originally appeared on Engadget at https://www.engadget.com/alienware-m16-r2-review-when-less-power-makes-for-a-better-laptop-174027103.html?src=rss

Google Pixel 8a hands-on: Flagship AI and a 120Hz OLED screen for $499

A new Pixel A-series phone typically gets announced at Google I/O. Unfortunately, that means the affordable handset sometimes gets buried amongst all the other news during the company’s annual developer conference. So for 2024, Google moved things up a touch to give the new Pixel 8a extra attention. And after checking it out in person, I can see why. It combines pretty much everything I like about the regular Pixel 8 but with a lower price of $499.

Right away, you’ll see a very familiar design. Compared to the standard Pixel 8, which has a 6.2-inch screen, the 8a features a slightly smaller 6.1-inch OLED display with noticeably larger bezels. But aside from that, the Pixel 8 and 8a are almost the exact same size. Google says the material covering the display should be pretty durable as it's made out of Gorilla Glass, though it hasn’t specified an exact type (e.g. Gorilla Glass 6, Victus or something else).

Some other changes include a higher 120Hz refresh rate (up from 90Hz on the previous model), a more streamlined camera bar and a new matte finish on its plastic back that Google claims mimics the texture of cashmere. Now, I don’t think I’d go that far, but it did feel surprisingly luxurious. The 8a still offers decent water resistance thanks to an IP67 rating, though that is slightly worse than the IP68 certification on a regular Pixel 8. Its battery is a bit smaller too at 4,492 mAh (instead of 4,575 mAh). That said, Google says that thanks to some power-efficiency improvements, the new model should run longer than its predecessor.

As for brand new features, the most important addition is that alongside the base model with 128GB of storage, Google is offering a version with 256GB. That’s a first for any A-series Pixel. And, following in the footsteps of last year’s flagships, the Pixel 8a is also getting 7 years of software and security updates, which is a big jump from the three years of Android patches and five years of security on last year’s 7a. Finally, the Pixel 8a is getting a partially refreshed selection of colors including bay, porcelain, obsidian and a brand new aloe hue, which is similar to the mint variant of the Pixel 8 earlier this year but even brighter and more saturated. I must say, even though I’ve only played around with it for a bit, it's definitely the best-looking of the bunch.

The Pixel 8a will be available in four colors: Bay, Obsidian, Porcelain and Aloe.
Photo by Sam Rutherford/Engadget

One thing that hasn’t changed, though, is the Pixel 8a’s photography hardware. It uses the same 64MP and 13MP sensors for its main and ultra-wide cameras. However, as the Pixel 7a offered the best image quality of any phone in its price range, it’s hard to get too mad about that. And because this thing is powered by a Tensor G3 chip, it supports pretty much all the AI features Google introduced on the regular Pixel 8 last fall, including Best Take, Audio Magic Eraser, Circle to Search, Live Translate and more. Furthermore, while Google is giving everyone access to its Magic Editor inside Google Photos later this month, free users are limited to 10 saves per month, whereas there’s no cap for people with Pixel 8s and now the 8a.

However, there are a few features available on the flagship Pixels that you don’t get on the 8a. The biggest omission is a lack of pro camera controls, so you can’t manually adjust photo settings like shutter speed, ISO, white balance and more. Google also hasn’t upgraded the 8a’s Qi wireless charging speed, which means you’re limited to just 7.5 watts instead of up to 18 watts. Finally, while the phone does offer a digital zoom, there’s no dedicated telephoto lens like on the Pixel 8 Pro.

But that’s not a bad trade-off to get a device that delivers 90 percent of what you get on Google’s top-tier phones for just $499, which is $200 less than the Pixel 8’s regular starting price. And for anyone who likes the Pixel 8a but might not care as much about AI, the Pixel 7a will still be on sale at a reduced price of $349. Though if you want one of those, you might want to scoop it up soon because there’s no telling how long supplies will last. (Update: The Pixel 7a has returned to its default price of $499). 

The one wrinkle to all this is that at the time of writing, the standard Pixel 8 has been discounted to $549, just $50 more than the Pixel 8a. So unless an extra Ulysses S. Grant is going to make or break your budget, I’d probably go with that. (Update: Google's Pixel 8 discount has ended, so it's back to its regular price of $699). Still, even though the Pixel 8a doesn’t come with a lot of surprises, just like its predecessor, it’s shaping up to once again be the mid-range Android phone to beat.

Pre-orders go live today with official sales starting next week on May 14th.

This article originally appeared on Engadget at https://www.engadget.com/google-pixel-8a-hands-on-flagship-ai-and-a-120hz-oled-screen-for-499-160046236.html?src=rss

Apple’s M4 chip arrives with a big focus on AI

Today at its "Let Loose" event, Apple detailed its new M4 chip featuring a major focus on improved AI and machine learning capabilities. 

Built on a new second-gen 3nm process, Apple's M4 chip features four performance and six efficiency cores along with a 10-core GPU. In terms of power, Apple claims the M4's CPU is 50 percent faster compared to the M2 with a GPU that's four times quicker. Memory bandwidth has also been improved with speeds of up to 120GB/s, and for the first time in the iPad line, Apple is adding support for Dynamic Caching, hardware-accelerated ray tracing and hardware-accelerated mesh shading. On top of that, Apple says it's maintaining class-leading energy efficiency with the M4 capable of delivering the same performance as the M2 but with half the power.

First available on the new iPad Pros, Apple's M4 chip features a 10-core CPU, a 10-core GPU and an improved 16-core neural engine for greatly improved AI performance.
Apple

The M4 also features an upgraded 16-core neural engine that can perform up to 38 trillion operations per second, which is 60 times faster than the company's first NPU in the A11 Bionic. Apple says the M4's neural engine will support and accelerate tasks like real-time Live Captions, subject isolation during videos in Final Cut Pro and automatic musical notation in StaffPad.
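That multiplier roughly checks out against Apple's published A11 figure of 600 billion operations per second:

```python
m4_tops = 38.0    # M4 neural engine, trillions of ops per second
a11_tops = 0.6    # A11 Bionic's neural engine (600 billion ops/sec)
print(f"M4 vs A11 NPU: ~{m4_tops / a11_tops:.0f}x")   # ~63x, in line with "60x"
```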

Some other capabilities of the M4 chip include AV1 hardware acceleration — which is another first for the iPad line — and reduced memory requirements when performing inference-based workloads. 

Apple's M4 chip will be available first on the new 11- and 13-inch iPad Pros, which are available for pre-order today prior to official sales going live on May 15. 

Follow all of the news live from Apple's 'Let Loose' event right here.

This article originally appeared on Engadget at https://www.engadget.com/apples-m4-chip-arrives-with-a-big-focus-on-ai-142448428.html?src=rss

Walmart thinks it’s a good idea to let kids buy IRL items inside Roblox

Walmart's Discovered experience started out last year as a way for kids to buy virtual items for Roblox inside the game. But today, that partnership is testing out an expanded pilot program that will allow teens to buy real-life goods stocked on digital shelves before they're shipped to their door.

Available to children 13 and up in the US, the latest addition to Walmart Discovered is an IRL commerce shop featuring items created by partnered user-generated content creators including MD17_RBLX, Junozy, and Sarabxlla. Customers can browse and try on items inside virtual shops, after which the game will open a browser window to Walmart's online store (displayed on an in-game laptop) in order to view and purchase physical items. 

Furthermore, anyone who buys a real-world item from Discovered will receive a free digital twin so they can have a matching virtual representation of what they've purchased. Some examples of the first products getting the dual IRL and virtual treatment are a crochet bag from No Boundaries, a TAL stainless steel tumbler and Onn Bluetooth headphones.

According to Digiday, during this initial pilot phase (which will take place throughout May), Roblox will not be taking a cut from any of the physical sales made as part of Walmart's Discovered experience as it looks to determine people's level of interest. However, the parameters of the partnership may change going forward as Roblox gathers more data about how people embrace buying real goods inside virtual stores. 

Unfortunately, while Roblox's latest test may feel like an unusually exploitative way to squeeze even more money from teenagers (or, more realistically, their parents' money), this is really just another small step in the company's efforts to turn the game into an all-encompassing online marketplace. Last year, Roblox made a big push into digital marketing when it launched new ways to sell and present ads inside the game before later removing requirements for advertisers to create bespoke virtual experiences for each product.

So in case you needed yet another reason not to save payment info inside a game's virtual store, now instead of wasting money on virtual items, kids can squander cash on junk that will clutter up their rooms too. 

This article originally appeared on Engadget at https://www.engadget.com/walmart-thinks-its-a-good-idea-to-let-kids-buy-irl-items-inside-roblox-180054985.html?src=rss