HP just announced HP Print AI, which is being advertised as “the industry’s first intelligent print” experience. Beyond squeezing in tech’s two favorite letters (AI), the software looks to “simplify and enhance printing from setup to support.” There are several tools here, but the most interesting aspect is something called Perfect Output.
This could actually solve the problem of printing from web pages, which typically produces something just a hair above absolute garbage. The company says the embedded algorithms will reduce all of that unnecessary white space and will get rid of ads.
Image size will also be optimized, so printing from a website should look about as good as something that came from a word processor. HP says everything will “fit perfectly on the page for the first time.” Perfect Output isn’t just for websites, as the company says it’ll also make short work of spreadsheets, which are another frustrating thing to print out.
This feature begins rolling out today, but only to select customers as a beta. HP told Engadget that Perfect Output will work with any of the company’s printers, so long as the correct driver is installed and it’s connected to a Windows 10 or Windows 11 machine. Once some customer feedback comes in, it should go into a wider release.
HP Print AI will also use artificial intelligence to customize support for each user, with the company saying that its “intelligent technology anticipates” the needs of consumers. HP says this will be especially useful when it comes to setup and for remembering user preferences. There’s also a chatbot in there that allows for language-based queries, which runs off of a proprietary LLM the company calls a "print language model." So it's technically a, sigh, PLM.
For now, these tools are tied to driver software. HP says that they’ll be featured prominently in a forthcoming app update scheduled for next year.
This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/hps-print-ai-will-offer-a-better-way-to-print-websites-170523565.html?src=rss
TikTok Music is shutting down following an attempt to translate views on its base app to music streaming. The music arm announced the news that accounts will close by November 28, with all user data and login information deleted.
Subscribers who paid through Google Play and whose subscription runs past November 28 should automatically get a refund, or they can request one through Google Play before TikTok Music shuts down. Apple users, on the other hand, must request a refund through Apple support before the 28th to get one. Anyone who actually uses TikTok Music might want to wait a minute, though, as the premium service will no longer be available once a refund is processed. Speaking of deadlines, anyone who wants to transfer their playlists from TikTok Music to another music streamer has to do so by October 28.
TikTok Music first launched in Indonesia and Brazil in July 2023. It replaced another music platform called Resso from ByteDance (TikTok's parent company). Around the same time, it became available as a closed beta test in Australia, Mexico and Singapore, fully launching in those locations that October. Despite ByteDance filing a "TikTok Music" trademark application in May 2022, the platform never made it to the US.
This article originally appeared on Engadget at https://www.engadget.com/apps/tiktok-music-is-on-its-way-out-143058957.html?src=rss
“It’s the most powerful wearable tracking the most important organ in your body.”
Dr. Ramses Alcaide is explaining the electroencephalography (EEG) technology that his company Neurable uses to track activity with its brain-computer interface (BCI). Alcaide is the CEO and co-founder, and notes that a huge problem with EEG sensors is that they are often affixed to bulky, awkward-looking headsets — not exactly something you want to wear out in public. And to him, that’s why the technology hasn’t yet “created the type of impact that they could [on] the world.” Sure, we’ve seen a variety of headbands over the last decade, but those add an additional device to your bag. Alcaide argues there’s a better way to use EEG tech that’s even less intrusive.
Neurable began at the University of Michigan in 2011, where its technology was initially created. The overall platform is an AI system that uses filtering to boost the signal of brain data. The company spun out in 2015 and has been working to bring its EEG-powered tech to “smaller everyday devices,” as Alcaide describes them.
“[It] took a lot of time, but what we’ve been able to do is take what was traditionally these large systems and bring it down to everyday devices using AI,” he says.
Devices like headphones, earbuds, helmets, AR glasses and more can be equipped with EEG sensors so that they can track neurodegenerative diseases and neurodivergence based on brain activity. For example, the ability to track Alzheimer's or ADHD before a person knows they even have it is part of the plan for Neurable. Right now though, the company’s first step is one of those “everyday wearables” that can track decreases in focus to create what Alcaide calls “good wellness hygiene.”
Billy Steele for Engadget
The company’s first device is the MW75 Neuro: a set of headphones built in collaboration with Master & Dynamic. Based on the existing MW75, this version has dry fabric EEG sensors in the ear pads, sending 12 EEG channels to the Neurable app for the software to do its AI analysis and signal processing. The app then interprets the data “with high confidence” and “lab-level accuracy,” according to the company.
The Neurable app is where all the data is displayed for the MW75 Neuro. First, it essentially gamifies mental hygiene with focus tracking. You earn points for high (3), medium (2) and low (1) focus levels, accumulating points throughout the day. You’re then able to view week-to-week comparisons as well as individual session summaries with attention span graphs. During these periods, the system can prompt you to take a break when focus decreases, which Neurable says should help with burnout to some degree. Of course, “burnout” isn’t something that’s easy to quantify, or even tangibly measure, since there’s more than your focus or attention at play.
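For a sense of how that kind of points system could tally a day, here's a minimal sketch. Neurable hasn't published its scoring internals, so the focus thresholds and the three-tier point values below are illustrative assumptions, not the company's actual implementation.

```python
# Hypothetical sketch of a gamified focus-points tally. The 0-1 "focus score"
# inputs, bucket thresholds and point weights are all assumptions for
# illustration; Neurable's real pipeline is not public.

POINTS = {"high": 3, "medium": 2, "low": 1}

def classify(focus_score: float) -> str:
    """Bucket a normalized 0-1 focus reading into a level (assumed thresholds)."""
    if focus_score >= 0.7:
        return "high"
    if focus_score >= 0.4:
        return "medium"
    return "low"

def daily_points(readings: list[float]) -> int:
    """Accumulate points across a day's focus readings."""
    return sum(POINTS[classify(r)] for r in readings)

# One high, one medium and one low reading: 3 + 2 + 1
print(daily_points([0.9, 0.5, 0.2]))  # 6
```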
The MW75 Neuro isn’t just meant to keep you working. The company says monitoring your focus levels can assist you with gaming, meditation, reading and even decision-making. Noise cancellation can block out distractions during periods when you need to be locked in, which doesn’t only apply to the office. Neurable says no matter the activity, its app provides the data necessary to recognize your performance over time and identify when you need to take breaks or maybe find a different environment in order to be productive.
“This is just scratching the iceberg,” Alcaide explains. “We're not claiming or diagnosing everything, [but] it really shows you a glimpse of the future that these everyday wearables can deliver on.”
Billy Steele for Engadget
Of course, the MW75 Neuro is a set of noise-canceling headphones, which means you’ll get a host of audio features on top of the fancy brain tech. Master & Dynamic CEO Jonathan Levine told me that this version of the headphones has an identical industrial design to the regular MW75. 40mm Beryllium drivers carry M&D’s trademark warm sound profile and four microphones are employed for active noise cancellation (ANC) and calls. There are still a host of sound modes and you can customize the EQ and more inside the M&D Connect app.
Besides the ear pads, there are some other changes on the MW75 Neuro. Neurable’s version supports Adaptive Transparency mode for starters, but the key difference is inside. The electronics were completely redesigned to add EEG processors that power the AI tech, including an ARM Cortex chip. Since the sensor-packed cushions on this model are fabric instead of leather, Levine says the variation does change the sound profile slightly. And during my testing I noticed that they aren’t quite as comfortable as those on the original model either. If you pre-order from Master & Dynamic, the company will throw in non-EEG leather ear pads for free.
There’s a big hit to battery life, too. Neurable says the MW75 Neuro offers 10 hours of EEG tracking on a charge (8 hours with ANC on), compared to up to 28 hours with ANC on the regular version. I don’t think you’re going to use Neurable’s features for more than a few hours at a time, but you should know they do impact longevity.
Once you start a focus session, a timer begins in the app and continues until you turn it off. There’s a button up top if you need to take a break; otherwise, the headphones continue tracking your brainwaves until you tell them to stop. There’s also an indicator on the timer screen to let you know if the sensors are properly connected. A reliable connection ensures optimal EEG signal quality during the session.
Neurable
During my tests, I used the MW75 Neuro to track short focus sessions. It’s nice that the whole system runs in the background without any distractions – other than the break suggestions. Of course, when you look at the graph afterward, you’ll have to think back to what might have caused any dips, but I felt like the app’s prompts to take a break were well-timed and probably overdue. The software can give you voice or push notifications (or both), and the app provides a separate 10-minute timer for the so-called Brain Breaks.
I don’t have any lab-grade tech to thoroughly evaluate what Neurable is doing on these headphones from a tracking standpoint. And I’ll admit that my short time with the MW75 Neuro isn’t enough to fully evaluate their utility. But, I can begin to see how they could help over time, especially for those of us who are incentivized by streaks and daily scores. I found it interesting to see how much time I spent in high and medium focus, as well as trying to recall if a text or Slack message may have caused me to stumble during a session.
Neurable is actually working to help with that common distraction. The company is allowing developers to build apps for the MW75 Neuro, including one in the works that will automatically pause Spotify when you lose focus. To help with messages, the company is working on a chat integration that allows you to respond with head movements while remaining in the productivity zone. Alcaide argues that 90 percent of text messages can be responded to in a simple manner with a response created by ChatGPT, so the headphones’ accelerometer can be used to detect a nod or shake for automatic replies. This goes beyond what Apple is doing with Siri Interactions on AirPods since it helps facilitate an appropriate response.
“When the iPhone came out, a touchscreen was the interface,” he continues. “For [Neurable], it’s going to be the neural interface and the accelerometer. It’s going to enable us to do a lot of the same things we do with our phone with our everyday wearable.”
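The nod-or-shake idea Alcaide describes can be sketched in a few lines: compare how much the head pitches (nodding) against how much it yaws (shaking) over a short window of motion samples. The axis conventions, thresholds and window size below are assumptions for illustration; Neurable hasn't detailed its actual gesture pipeline.

```python
# Illustrative sketch of nod/shake detection from headphone motion data.
# Inputs are per-sample head-angle changes in degrees; all thresholds are
# assumed values, not Neurable's.

def detect_gesture(pitch_deltas: list[float], yaw_deltas: list[float],
                   threshold: float = 15.0) -> str:
    """Classify a short window of head movement.

    A nod shows up as large pitch swings (head tilting up/down);
    a shake as large yaw swings (head turning left/right).
    """
    pitch_energy = sum(abs(d) for d in pitch_deltas)
    yaw_energy = sum(abs(d) for d in yaw_deltas)
    if max(pitch_energy, yaw_energy) < threshold:
        return "none"
    return "nod" if pitch_energy > yaw_energy else "shake"

# Pitch swings with little yaw: read as a nod ("yes, send the suggested reply")
print(detect_gesture([8, -9, 7, -8], [1, -1, 0, 1]))  # nod
```

A real implementation would pull these samples from the headphones' motion sensors and debounce repeated triggers, but the core decision is just this energy comparison.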
The MW75 Neuro is available for pre-order today in the US in silver, onyx, navy and olive color options for $699. Neurable plans to make the headphones available in Europe and the UK in 2025 for €729 / £629. That’s a lot for a set of headphones, but the regular MW75 is $599, so there’s only a $100 premium for Neurable’s tech.
This article originally appeared on Engadget at https://www.engadget.com/audio/headphones/neurables-brainwave-tracking-master--dynamic-headphones-tell-you-when-to-take-a-break-120004736.html?src=rss
Last year, Logitech leaped into the content creator market by acquiring Loupedeck, which makes control surfaces for apps like Adobe Lightroom. Now, the company has unveiled its first Logitech-branded control panel, the MX Creative Console, a $200 device that includes a keypad, dialpad and plugins for popular Adobe apps like Premiere Pro.
Logitech is fighting rivals like the TourBox Elite controller and even its own Loupedeck CT, but its new offering is cheaper than the latter and sleeker than the former. The MX Creative Console features a modern design and a pair of slick control dials, along with dynamic display keys that change depending on the app and page you’re looking at.
I’ve tested a number of control panels going back to the original Loupedeck in 2017. To me, it always comes down to one main thing: Is this easier and faster than just using a keyboard and mouse? After over a week with the MX Creative Console, I found it to be powerful in some cases and too limited in others.
Hardware
The console comes in either pale gray or darker graphite and takes up very little space on your desk (3.8 x 3.1 inches for the keypad and 3.6 x 3.7 inches for the dialpad). A stand that angles the keypad or dialpad about 45 degrees toward you is also included. I prefer it flat on the table for speed, but the stand makes it easier to see the controls. The keypad has nine display keys, with the content changing based on the page and app you’re using. There are two regular buttons below to change the pages and a USB-C port on the bottom.
Steve Dent for Engadget
Meanwhile, the dialpad’s centerpiece is a large “contextual dial,” so named because its function changes depending on the action selected. Plus, there’s a scroll wheel in the top right corner, two buttons at the top left and two more along the bottom, one in each corner. The bottom right button activates the dialpad’s “Actions Ring,” an on-screen circular display that gives you another way to tweak things like colors and text.
The keys require a light touch and have a smooth, clickless feel. The wheel on the dialpad has a nice amount of friction for precise work and lets you easily move frame-by-frame in Premiere Pro, or shuttle quickly through a timeline. It doesn’t have any haptic feedback, though, like the TourBox Elite. There’s a Bluetooth pairing switch on the bottom and a power switch on the back. It can connect to your computer either via Bluetooth or Logitech’s Bolt dongle (not included), which is also used with the company’s mice and keyboards.
Logitech says that the products are made with 72 percent post-consumer recycled plastics, low-carbon aluminum, micro textures instead of paint and FSC-certified responsible packaging. However, the dialpad uses AAA cells, either disposable or rechargeable. They’ll last a couple of months, according to Logitech, but it’s an odd choice for a product meant to be environmentally friendly.
Setup
Steve Dent for Engadget
The MX Creative Console is plug and play for Adobe apps so you can start twiddling the dials out of the box. It’s also customizable, letting you tweak settings within apps, create custom profiles and more. To set it up, I installed the Logi Options+ app on my PC (and Mac; I tested it with both), then connected the keypad via USB-C. I installed the dialpad separately by connecting it to my computers over Bluetooth.
Once the devices are recognized, clicking on “All Actions” installs the Adobe plugins. It also has direct support for apps including VLC media player, Spotify Premium, Capture One and Ableton. You can even use it to control apps without plugins like your browser for system volume, YouTube videos, emojis, screenshots and more. I found this useful just for the system volume alone (hello, terrible Windows 11 audio control).
Changing the default settings is about as easy as it gets. When you open the customization page, it shows the devices to the left (dialpad, keypad and Actions Ring), while all the possible settings are to the right. To change or add a new setting, just grab the one you want from the list and drag it over to the virtual keypad on the left. Keys can be rearranged on the same page, but it’s not easy to move a setting from one page to another.
As a Premiere Pro user, the first thing I did was create a new keypad page and add buttons to switch between the source, program and timelines to avoid a mouse click for those actions. That was relatively easy to do, thanks to the search function and intuitive drag-and-drop interface. If you’d rather not futz around with customization, Logitech has a plugin marketplace in the Logi Options+ app. I wasn’t able to use that ahead of launch, but it’s supposed to allow users to purchase or share plugins, profiles and icon packs.
Operation
Logitech
I primarily work on Lightroom Classic and Premiere Pro while occasionally making use of Photoshop and After Effects. All of those apps are supported natively by the MX Creative Console on Mac and PC.
I started with Premiere Pro, testing it on both Windows and Mac. After some pondering, I placed the keypad to the left of the keyboard and the dialpad on the right between the keyboard and mouse. That worked well visually and let me fine-tune edits and do adjustments with my right hand and press buttons with my left — much as I already do with a keyboard and mouse.
At first, I didn’t think the console would speed up my workflow in editing mode since I’ve memorized most of Premiere’s keyboard shortcuts. I was also worried that I’d be constantly jumping between the dial and the mouse. After playing around a bit, though, I noticed that scrubbing through the timeline with the dial offered finer and faster control than the keyboard and mouse, especially when using the scroll wheel to scale the timeline (I’d like to see faster scrubbing when I’m zoomed out though, Logitech).
Building on that, I added the split function and other click-free mouse tools I hadn’t touched in a while. With that, I could work nearly as quickly as with a keyboard and mouse depending on the task, despite my previous fears. Though I’d be hesitant to use it myself for editing, I could see this being a good workflow for new Premiere Pro users as it visually shows actions so newbies don’t need to memorize shortcuts.
The MX Console is especially useful for color correction in Premiere. With a clip selected, you can click the bottom right dialpad button to activate the Actions Ring, move your mouse to one of the actions (exposure, contrast, whites, saturation, etc.) and turn the dial to adjust that setting. To avoid the mouse, you can also program major color adjustments into the keypad. Then, just hold the button on that setting while turning the dial.
Steve Dent for Engadget
Then it was on to Lightroom Classic. This app makes the most sense for the console, as you’re primarily performing actions (color correction, cropping etc.) on a single image. Quick keys include Develop mode, White Balance Selector, Auto White Balance, Auto upright and rating tools. Once you’ve imported images into your library, you can jump into Develop, shuttle between images using the dial and then tweak colors using the Actions Ring as with Premiere. Again, if you’d rather keep your hands on the MX Console, you can program common functions (temperature, saturation, highlights etc.) into the keypad.
The MX Console also has keys for copying and pasting Develop settings, before and after views, as well as cropping and opening images in Photoshop. A Lightroom power user could add more shortcuts to further boost efficiency. That makes it nearly as fast as the popular Loupedeck+ panel, but jumping between pages in the keypad can slow you down a bit.
Unfortunately, I found the MX Creative Console to be the least useful for Photoshop. Control panels are best for single-purpose tasks like color correction and audio adjustments, but Photoshop is designed for more complex operations. That meant I was forever taking my hands off the keypad and dialpad and putting them on the mouse and keyboard, making me less efficient, if anything. It could have been useful in Photoshop’s Camera RAW utility (which has Lightroom-like controls), but Logitech said that tool has no API and doesn’t support plugins.
Wrap-up
Steve Dent for Engadget
The MX Creative Console’s main competition is the $268 TourBox Elite, which has three dials and ten buttons. Designed to work in concert with your keyboard and mouse, it’s powerful for experienced editors, but looks a bit cheap. By contrast, Logitech’s MX Creative Console is more polished, and the visual interface its keypad provides makes it better for novices. It’s also worth noting that Elgato’s similarly priced Stream Deck+ recently added an Adobe Photoshop plugin, despite mainly being designed for live streaming. It promises easy access to Photoshop tools and adjustments via four dials and eight display keys.
Other options are more expensive, like the $529 Loupedeck CT, $395 DaVinci Resolve Speed Editor, $499 DaVinci Resolve Mini Panel and $595 Blackmagic Design DaVinci Resolve Editor Keyboard. Those are more powerful and look more professional, but will obviously cost you more.
Logitech’s MX Creative Console is a quality device with a fair amount of utility for apps like Premiere Pro and Lightroom Classic. Its usefulness will no doubt increase as Adobe adds more supported apps and the Logi Marketplace grows. However, it simply doesn’t have enough buttons and dials to perform tasks in many Adobe apps without falling back to the keyboard and mouse. If you do use apps where it works well, like Lightroom, it could provide a boost to your productivity and look cool doing it. It ships next month for $200 and Logitech includes a free three-month subscription to Adobe's Creative Cloud.
This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/logitech-mx-creative-console-review-an-affordable-entry-point-into-edit-panels-070101321.html?src=rss
The Biden administration just announced a comprehensive plan to ban Chinese software and some hardware from internet-connected cars in the US. This is being framed as a national security measure, with the administration stating that this software poses “new threats to our national security, including through our supply chains.”
This is the same reasoning behind a recent ban of telecommunications equipment from Chinese companies like Huawei and ZTE. In that case, the claims had teeth, as documents reportedly showed how Huawei was involved in the country’s surveillance efforts. Today’s announcement goes on to say that China “could use critical technologies” from connected vehicles “within our supply chains for surveillance and sabotage to undermine national security.”
The rules announced today go beyond mere software. They would also cover any piece of hardware that connects a vehicle to the outside world, including Bluetooth, cellular, Wi-Fi and satellite components, as well as cameras, sensors and onboard computers. The software ban would go into effect in model year 2027, with the related hardware prohibition starting in model year 2030.
The proposed ban also includes Russian auto software. The country has a fairly robust EV industry, but primarily for domestic use. There’s nothing in Russia that’s globally lusted after like the cheap EVs from Chinese companies like BYD.
This leads us to a major point. While this proposed ban is primarily aimed at internet-connected software, it would effectively block all Chinese auto imports. The software is pretty much baked in, as is the hardware that enables connectivity. It’s already tough to get one of these vehicles stateside, due to the recent tariffs placed on Chinese EVs, but this would make it nearly impossible.
Government officials, however, have held steadfast that this is a move to improve national security, and not to ban cheaper EVs from another market. “Connected vehicles and the technology they use bring new vulnerabilities and threats, especially in the case of vehicles or components developed in the P.R.C. [People's Republic of China] and other countries of concern,” said Jake Sullivan, President Biden’s national security adviser. These remarks were given to reporters over the weekend and were transcribed by The New York Times.
Sullivan went on to reference something called Volt Typhoon, which is an alleged Chinese effort to insert malicious code into American power systems, pipelines and other critical infrastructure. US officials fear that this program could be used to cripple American military bases in the event of a Chinese invasion of Taiwan or a similar military excursion.
Peter Harrell, who was previously the National Security Council’s senior director for international economics during the Biden administration, told The New York Times that “this is likely to be opening the door, over a number of years, to a much broader governmental set of actions” that would “likely see a continuation” no matter who wins the presidential election.
It’s worth noting that the BYD Seagull, as an example, sells for around $10,000. This makes it much cheaper than American EVs, even after getting slapped by that fat 100 percent tariff. A full-featured EV for $20,000 sounds pretty nice right about now. Oh well. It was fun to dream.
This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/biden-administration-seeks-ban-on-auto-software-from-china-154025671.html?src=rss
Apple's macOS updates have been so dull lately, the most interesting part of last year's macOS Sonoma ended up being widgets. Widgets! Thankfully, macOS Sequoia has a lot more going on — or at least it will, once Apple Intelligence rolls out over the next few months. For now, though, Sequoia delivers a few helpful features like iPhone Mirroring, a full-fledged Passwords app and automatic transcription in the Notes app. At the very least, it's got a lot more going on than widgets.
iPhone Mirroring changes everything for Macs
Heading into WWDC earlier this year, I was hoping that Apple would let Vision Pro users mirror their iPhones just as easily as they can mirror their Macs. Well, we didn't get that, but iPhone Mirroring on macOS Sequoia is close to what I'd want on the Vision Pro. Once you've got a Mac (with an Apple Silicon chip, or one of the last Intel models with a T2 security chip) running the new OS, as well as an iPhone running iOS 18, you can easily pair the two using the iPhone Mirroring app.
Once that connection is made, you'll see a complete replication of your phone within the app. It took me a few minutes to get used to navigating iOS with a trackpad and keyboard (there are a few new hotkeys worth learning), but once I did, I had no trouble opening my usual iPhone apps and games. If you're spoiled by the 120Hz ProMotion screen from an iPhone Pro, you'll notice that the mirrored connection doesn't look nearly as smooth, but from my testing it held a steady 60fps throughout games and videos. I didn't notice any annoying audio or video lag either.
Apple
While it's nice to be able to launch my iPhone from my Mac, I was surprised at what ended up being the most useful aspect of this feature: Notifications. Once you've connected your phone, its alerts pop up in your Mac's Notification Center, and it takes just one click to launch the app it's tied to. That's useful for alerts from Instagram, DoorDash and other popular apps that have no real Mac options, aside from launching their websites in a browser.
iPhone Mirroring is also a sneaky way to get in a few rounds of Vampire Survivors during interminably long meetings or classes. (Not that I would ever do such a thing.) While many mobile games have made their way over to the Mac App Store, there are still thousands that haven't, so it's nice to have a way to access them on a larger screen. Not every game works well on Macs — it's just tough to replicate a handheld touchscreen experience with a large trackpad — but mirroring is a decent option for slower-paced titles. I didn't encounter any strange framerate or lagging issues, and sound carried over flawlessly as well.
I almost always have my phone within reach, even when I'm working at a desk. But picking it up would inevitably disrupt my workflow — it's just far too easy to get a notification and find yourself scrolling TikTok or Instagram, with no memory of how you got there. With iPhone Mirroring, I can just keep on working on my Mac without missing any updates from my phone. It's also been useful when my iPhone is connected to a wireless charger and I desperately need more power before I run out of the house.
If you're the sort of person who leaves your phone around your home, I'd bet mirroring would also be helpful. The feature requires having both Bluetooth and Wi-Fi turned on, and the connection range is around 50 feet, or what I'd expect from Bluetooth. Thick walls and other obstructions can also reduce that range significantly. In my testing, I could leave my iPhone in my backyard and still be able to mirror it in my living room 40 feet away. Naturally, the further you get, the choppier the experience.
Sure, Apple isn't the first company to bring smartphone mirroring to PCs. Samsung and other Android phone makers have been offering it for years, and Microsoft also has the "Phone Link" app (formerly Your Phone) for mirroring and file syncing. But those implementations differ dramatically depending on the smartphone you're using, they don't seamlessly integrate notifications and simply put, they would often fail to connect. Once you set up iPhone Mirroring, getting into your phone takes just a few seconds. It just works. And after testing the feature for weeks, I haven't run into any major connection issues.
Photo by Devindra Hardawar/Engadget
Better window tiling, finally!
It's 2024 and Apple has finally made it easier to position Mac windows around your monitor. Now you can drag apps to the sides or corners of your screen, and they'll automatically adjust themselves. It's allowed me to quickly place a browser I'm using for research alongside an Evernote window or Google Doc. Similar to Stage Manager in macOS Ventura, the tiling shortcuts are a significant shift for Mac window management.
And, of course, they're also clearly similar to Windows 10 and 11's snapping feature. Given that much of Apple's UI focus is on iOS, iPadOS and VisionOS these days, it's easy to feel like the Mac has been left behind a bit. I don't blame Apple for cribbing Microsoft's UI innovations, especially when it makes life easier for Mac users.
Photo by Devindra Hardawar/Engadget
Slick video conferencing background replacement
Apple has offered lighting adjustments and portrait background blurring in video chats for years, and now it's using that same machine learning technology to completely replace your backgrounds. Admittedly, this isn't a very new or exciting feature. But it's worth highlighting because it works across every video chat app on your Mac, and since it's relying on Apple's Neural Engine, it looks much better than software-based background replacements.
Apple's technology does a better job of keeping your hair and clothes within focus, but still separated from artificial backgrounds. And best of all, it doesn't look like a cheap green screen effect. You can choose from a few color gradients, shots of Apple Park or your own pictures or videos.
Other highlights of macOS Sequoia
Here are a few other upgrades I appreciated:
The Passwords app does a decent job of collecting your stored passwords, but it's clearly just a first attempt. It's not nearly as smart about plugging my passwords into browser fields as apps like 1Password and LastPass.
The Notes app now lets you record voice notes and automatically transcribes them. You can also continue to jot down text during a voice recording, making it a useful way to keep track of interviews and lectures. I'm hoping future updates add features like multi-speaker detection.
Being able to jot down math equations in Notes is cool, but it's not something I rely on daily. I'm sure it'll be very useful to high school and college kids taking advanced math courses, though.
Messages finally gets rich text formatting and a send later option. Huzzah!
Finally, a macOS update worth getting excited about
You’d be forgiven for completely ignoring the last batch of macOS updates, especially if you haven’t been excited about Stage Manager or, sigh, widgets. But if you’re a Mac and iPhone owner, Sequoia is worth an immediate upgrade. Being able to mirror your iPhone and its notifications is genuinely useful, and it’s stuffed with other helpful features. And of course, if you want to get some Apple Intelligence action next month, you’ll have no choice but to upgrade. (We’ll have further impressions on all of Apple’s AI features as they launch.)
Sure, it’s a bit ironic that Apple’s aging desktop OS is getting a shot of life via its mobile platform, but honestly, the best recent Mac features have been directly lifted from iOS and iPadOS. It’s clear that Apple is prioritizing the devices that get updated far more frequently than laptops and desktops. I can’t blame the company for being realistic – for now, I’m just glad it’s thoughtfully trying to make its devices play nice together. (And seriously, just bring iPhone mirroring to the Vision Pro already.)
This article originally appeared on Engadget at https://www.engadget.com/computing/macos-sequoia-review-iphone-mirroring-is-more-useful-than-you-think-140008463.html?src=rss
You can save big today on the Elgato Stream Deck+ with $30 off the control panel on Amazon. Great for streamers or anyone who wants tactile shortcuts and dials for their workflow, the Stream Deck+ drops from its usual $200 to $170 with a discount and a clickable coupon.
Although the Stream Deck+ sacrifices some buttons compared to the cheaper Stream Deck MK.2, this model makes up for it with four dials and a touch strip. Each dial is customizable and clickable, allowing you to layer different dial shortcuts with each press inward. You can twist them to adjust things like volume, smart lights and in-game settings.
Its eight buttons are backlit and fully customizable. Streamers can use the Stream Deck desktop app to assign functions for things like muting mics, activating effects or triggering transitions. But you don’t need to be a YouTuber or Twitch streamer for it to be helpful. For example, I’m neither and use a Stream Deck daily to toggle preset macOS window arrangements through the third-party app Moom. It’s also handy for text expansion shortcuts or emojis.
The 4.2 x 0.5-inch touch strip displays labels and levels for each knob, giving you a clear visual cue about what you’re controlling with each twist. The touch-sensitive bar also supports custom long presses and page swipes.
Amazon’s sale covers both the black and white Stream Deck+ models. Make sure you click on the $10 coupon box on the product page to bring it down to $170.
This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/elgatos-stream-deck-drops-to-a-record-low-of-170-in-this-early-prime-day-deal-163729012.html?src=rss
The “regular” iPhone has become like a second child. Year after year, this model has gotten the hand-me-downs from the previous version of the iPhone Pro – the older, smarter sibling. The iPhone 15 received the iPhone 14 Pro’s Dynamic Island and A16 Bionic processor, and the iPhone 14 before that got the A15 Bionic chip and a larger Plus variant with the same screen size as the iPhone 13 Pro Max. For the iPhone 16 ($799 & up), there are trickle-down items once more. But this time around, that’s not the entire story for the Apple phone that’s the best option for most people.
Surprisingly, Apple gave some of the most attractive features it has for 2024 to both the regular and Pro iPhones at the same time. This means you won’t have to wait a year to get expanded camera tools and another brand new button. Sure, Apple Intelligence is still in the works, but that’s the case for the iPhone 16 Pro too. The important thing there is that the iPhone 16 is just as ready when the AI features arrive.
So, for perhaps the first time – or at least the first time in years – Apple has closed the gap between the iPhone and iPhone Pro in a significant way. ProRAW stills and ProRes video are still exclusive to the priciest iPhones, and a new “studio-quality” four-microphone setup is reserved for them too. Frustratingly, you’ll still have to spend more for a 120Hz display. But, as far as the fun new tools that will matter to most of us, you won’t have to worry about missing out this time.
New buttons, new bump, old design
Another year has passed and we still don’t have a significant redesign for any iPhone, let alone the base-level model. As such, I’ll spend my time here discussing what’s new. Apple was content to add new colors once again, opting for a lineup of ultramarine (blueish purple), teal, pink, white and black. The colors are bolder than what was available on the iPhone 15, although I’d like to see a blue and perhaps a bright yellow or orange. Additionally, there’s no Product Red option once again — we haven’t seen that hue since the iPhone 14.
The main change in appearance on the iPhone 16 is the addition of two new buttons. Of course, one of those, the reconfigurable action button above the volume rockers, comes from the Pro-grade iPhones. By default, the control does the task of the switch it replaces: activating silent mode. But, you can also set the action button to open the camera, turn on the flashlight, start a Voice Memo, initiate a Shazam query and more. You can even assign a custom shortcut if none of the presets fit your needs.
While Apple undoubtedly expanded the utility of this switch by making it customizable, regular iPhone users will have to get used to the fact that the volume control is no longer the top button on the left. This means that when you reach for the side to change the loudness, you’ll need to remember it’s the middle and bottom buttons. Of course, the action button is smaller than the other two, so with some patience you can differentiate them by touch.
Billy Steele for Engadget
Near the bottom of the right side, there’s a new Camera Control button for quick access to the camera and its tools. A press will open the camera app from any screen, and a long press will jump straight to 4K Dolby Vision video capture at 60 fps. Once you’re there, this button becomes a touch-sensitive slider for things like zoom, exposure and lens selection. With zoom, for example, you can scroll through all of the options with a swipe. Then with a double “light press,” which took a lot of practice to finally master, you can access the other options. Fully pressing the button once will take a photo — you won’t have to lift a finger to tap the onscreen buttons.
Around back, Apple rearranged the cameras so they’re stacked vertically instead of diagonally. It’s certainly cleaner than the previous look, and the company still favors a smaller bump in the top left over something that takes up more space or spans the entire width of the rear panel (Hi Google). The key reason the company reoriented the rear cameras is to allow for spatial photos and videos, since the layout now enables the iPhone 16 to capture stereoscopic info from the Fusion and Ultra Wide cameras.
Photographic stylin’
The iPhone 16 and 16 Plus have a new 48-megapixel Fusion camera that packs a quad-pixel sensor for high resolution and fine detail. Essentially, it’s two cameras in one, combining – or fusing, hence the name – a 48MP frame and a 12MP one that’s fine-tuned for light capture. By default, you’ll get a 24MP image, one that Apple says offers the best mix of detail, low-light performance and an efficient file size. There’s also a new anti-reflective coating on the main (and ultrawide) camera to reduce flares.
The 12MP ultrawide camera got an upgrade too, gaining a faster aperture and larger pixels for better performance in low-light conditions. There’s a new macro mode, unlocked by autofocus and able to capture minute detail. This is one of my favorite features, as sharp images of smaller objects have never been in the base iPhone camera’s arsenal (only the Pros’), and the macro tool has worked well for me so far.
The iPhone 16, like its predecessors, takes decent stills. You’ll consistently get crisp, clean detail in well-lit shots and realistic color reproduction that doesn’t skew too warm or too cool. At a concert, I noticed that the iPhone 16’s low-light performance is noticeably better than the iPhone 15. Where the previous model struggled at times in dimly lit venues, my 2x zoom shots with this new model produced better results. There wasn’t a marked improvement across the board, but most of the images were certainly sharper.
Macro mode on the iPhone 16 camera is excellent.
The most significant update to the camera on the iPhone 16 is Photographic Styles. Apple has more computational image data from years of honing its cameras, so the system has a better understanding of skin tones, color, highlights and shadows. Plus, the phone is able to process all of this in real time, so you can adjust skin undertones and mood styles before you even snap a picture. Of course, you can experiment with them after shooting, and you can also assign styles to a gallery of images simultaneously.
Photographic Styles are massively expanded and way more useful, especially when you use them to preview a shot before you commit. My favorite element of the updated workflow is a new control pad where you can swipe around to adjust tone and color. There’s also a slider under it to alter the color intensity of the style you’ve selected. For me, the new tools in Photographic Styles make me feel like I don’t need to hop over to another app immediately to edit since I have a lot more options available right in the Camera app.
As I’ve already mentioned, Camera Control is handy for getting quick shots, and the touch-sensitivity is helpful with settings, but I have some gripes with the button. Like my colleague Cherlynn Low mentioned in her iPhone 16 Pro review, the placement causes issues depending on how you hold your phone, and may lead to some inadvertent presses. You can adjust the sensitivity of the button, or disable it entirely, which is a customization you might want to explore. What’s more, the touch-enabled sliding controls are more accurately triggered if you hold the phone with your thumbs along the bottom while shooting. So, this means you may need to alter your grip for prime performance.
Like I noted earlier, the new camera layout enables spatial capture of both video and photos on the iPhone 16. This content can then be viewed on Apple Vision Pro, with stills in the HEIC format and footage at 1080p/30fps. It’s great that this isn’t reserved for the iPhone 16 Pro, but the downside (for any iPhone) is file size. When you swipe over to Spatial Mode in the camera app, you’ll get a warning that a minute of spatial video is 130MB and a single spatial photo is 5MB. I don’t have one of Apple’s headsets, so I didn’t spend too much time here since the photos and videos just appear normal on an iPhone screen.
I’d argue the most significant advantage of Spatial Mode is Audio Mix. Here, the iPhone 16 uses the sound input from the spatial capture along with “advanced intelligence” to isolate a person’s voice from background noise. There are four options for Audio Mix, offering different methods for eliminating or incorporating environmental sounds. Like Cherlynn discovered on the iPhone 16 Pro, I found the Studio and Cinematic options work best, with each one taking a different approach to background noise. The former makes it sound like the speaker is in a studio while the latter incorporates environmental noise in surround sound with voices focused in the center – like in a movie. However, like her, I quickly realized I need a lot more time with this tool to get comfortable with it.
iOS 18 is still waiting on Apple Intelligence
Apple proudly proclaimed the iPhone 16 is "built for Apple Intelligence,” but you’ll have to wait a while longer to use it. That means things like AI-driven writing tools, summaries of audio transcripts, a prioritized inbox and more will work on the base iPhone 16 when they arrive, so you won’t need a Pro to use them. Genmoji and the Clean Up photo-editing assist are sure to be popular as well, and I’m confident we’re all ready for a long overdue Siri upgrade. There’s a lot to look forward to, but none of it is ready for the iPhone 16’s debut. The iOS 18.1 public beta arrived this week, so we’re inching closer to a proper debut.
Sure, it would’ve been nice for the excitement around the new iPhones to include the first crack at Apple’s AI. But I’d rather the company fine-tune things before a wider release to make sure Apple Intelligence is fully ready and, more importantly, fully reliable. Google has already debuted some form of AI on its Pixel series, so Apple is a bit behind. Still, I’d rather wait longer for a useful tool than rush a company into shipping buggy software.
What will be available on launch day is iOS 18, which delivers a number of handy updates to the iPhone, many of which deal with customization. For the first time, Apple is allowing users to customize more than the layout on their Home Screen. You can now apply tint and color to icons, resize widgets and apps and lock certain apps to hide sensitive info. Those Lock Screen controls can also be customized for the things you use most often, which is handier now that the iPhone 16 has a dedicated camera button on its frame. There’s a big overhaul to the Photos app too, mostly focused on organization, that provides a welcome bit of automation.
Performance and battery life
The iPhone 16 uses Apple’s new A18 chip with a 6-core CPU and 5-core GPU. There’s also a 16-core Neural Engine, which is the same as both the iPhone 15 and the iPhone 16 Pro. With the A18, the base-level iPhone jumped two generations ahead compared to the A16 Bionic inside the iPhone 15. The new chip provides the necessary horsepower for Apple’s AI and demanding camera features like Photographic Styles and the Camera Control button. I never noticed any lag on the iPhone 15, even with resource-heavy tasks, and those shouldn’t be a problem on the iPhone 16, either. But, we’ll have to wait and see how well the iPhone 16 handles Apple Intelligence this fall.
Of course, the A18 is more efficient than its predecessors, which is a benefit that extends to battery life. Apple promises up to 22 hours of local video playback on the iPhone 16 and up to 27 hours on the 16 Plus. For streaming video, those numbers drop to 18 and 24 hours respectively, and they’re all slight increases over the iPhone 15 and 15 Plus.
Starting at 7AM, I ran my battery test on the iPhone 16 and had 25 percent left at midnight. That’s doing what I’d consider “normal” use: a mix of calls, email, social, music and video. I also have a Dexcom continuous glucose monitor (CGM) that’s running over Bluetooth and I used the AirPods 4 several times during the day. And, of course, I was shooting photos and a few short video clips to test out those new features. While getting through the day with no problem is good, I’d love it if I didn’t have to charge the iPhone every night, or rely on low-power mode to avoid doing so.
On a related note, Apple has increased charging speeds via MagSafe: with a 30W power adapter or higher, you can now charge at 25W and get a 50 percent top-up in around 30 minutes.
Wrap-up
With the iPhone 16, Apple has almost closed the gap between its best phone for most people and the one intended for the most demanding power users. It’s a relief to not pine for what could be coming on the iPhone 17 since a lot of the new features on the iPhone 16 Pro are already here. And while some of them will require time to master, it’s great that they’re on the iPhone 16 at all. There are some Pro features you’ll still have to spend more for, like ProRAW photos, ProRes video, a 120Hz display, a 5x telephoto camera and multi-track recording in Voice Memos. But those are luxuries not everyone needs. For this reason, the regular iPhone will likely suit your needs just fine, since splurging on the high-end model has become more of an indulgence than a necessity.
This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/apple-iphone-16-and-iphone-16-plus-review-closing-the-gap-to-the-pro-120050824.html?src=rss
Apple Intelligence is edging closer to being ready for primetime. Apple has released the public beta of iOS 18.1, which includes some of the major generative AI features that the company has been talking up over the last few months.
We'll have to wait a few more weeks for the public versions of iOS 18.1, iPadOS 18.1 and macOS Sequoia 15.1 to bring Apple Intelligence features to everyone with a compatible device. The public betas should be more stable and less risky to install than the developer betas, but it's still definitely worth backing up your data to your computer and/or iCloud before putting this build of iOS 18.1 on your iPhone.
Right now, the only iPhones that support Apple Intelligence are the iPhone 15 Pro and iPhone 15 Pro Max, but that will change on Friday when Apple ships the iPhone 16 lineup. M-series iPads and Macs will support Apple Intelligence too.
For now, you'll need to have your device and Siri language set to US English to access Apple Intelligence tools. If you want to use Apple Intelligence in a language other than English (or in a localized version of English), you may need to wait until at least December for the public versions of the operating systems that support it.
Apple is gradually rolling out Apple Intelligence tools over the coming months, so not all of them will be available right away. The initial wave of features includes the ability to transcribe phone calls (and audio notes in the Notes app) and get summaries of the key details. Writing tools (rewriting, proofreading and summarizing), email prioritization and smart replies, notification summaries and the Clean Up photo-editing feature are also on the docket. You'll be able to create memories in the revamped Photos app and check out the first incarnation of the redesigned, glowing Siri (including the ability to type requests to the assistant).
You'll need to wait longer for certain other features, including ChatGPT integration, Genmoji, Image Playground (i.e. image generation) and Siri's ability to better understand personal context. Apple will roll those out over the coming months.
How to get the new Apple Intelligence features
On your iPhone, go to Settings > General > Software Update > Beta Updates and select the iOS 18 public beta option. Once the iOS 18.1 public beta is available for your device, you'll be able to see it on the software update page. You might need to free up some space before you can install the beta. To enable Apple Intelligence, go to Settings > Apple Intelligence & Siri > Join the Apple Intelligence waitlist.
The public beta installation process is almost identical on iPad. On your Mac, you'll need to go to System Settings > General > Software Update. Click the info symbol next to the "Beta Updates" option and you should be able to install the macOS Sequoia 15.1 public beta from there when it's available.
This article originally appeared on Engadget at https://www.engadget.com/ai/the-ios-181-public-beta-is-here-bringing-apple-intelligence-almost-to-the-masses-175248580.html?src=rss
Google is rolling out a really useful update for Google Password Manager, allowing users to sync passkeys across their many devices. Up until this point, folks could only save passkeys to Google Password Manager on Android, so the cross-device utility was limited. It was possible to use those passkeys on other devices, but doing so required scanning a QR code.
The update allows for passkey saving via Google Password Manager on Windows, macOS, Linux and, of course, Android. ChromeOS support is currently in beta, so that functionality should arrive sooner rather than later. Google also says that iOS support is “coming soon.”
Once saved, the passkey automatically syncs across other devices using Google Password Manager. The company says this data is end-to-end encrypted, so it’ll be pretty tough for someone to go in and steal credentials.
For the uninitiated, a passkey works a bit differently from a password. It’s a digital credential, tied to your device and backed by cryptography, that lets you sign in to an account without typing a password at all. The company’s been using passkeys across its software suite since last year.
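Under the hood, a passkey is a public/private key pair: the private key stays on your device, the service stores only the public key, and signing in means signing a fresh random challenge from the server. Real passkeys use the WebAuthn standard with signatures like ECDSA P-256 or Ed25519; the sketch below is a deliberately tiny, textbook Schnorr-style stand-in meant only to illustrate the shape of that flow, not anything you'd deploy.

```python
import hashlib
import secrets

# Toy Schnorr-style signature over a tiny group -- illustration only.
# Real passkeys use WebAuthn with standard curves and full-size keys.
P = 2039   # small safe prime: P = 2*Q + 1
Q = 1019   # prime order of the subgroup
G = 4      # quadratic residue mod P, so it generates the order-Q subgroup

def h(*parts: bytes) -> int:
    """Hash arbitrary byte strings down to an exponent mod Q."""
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big") % Q

def register():
    """Device creates a key pair; only the public key goes to the server."""
    x = secrets.randbelow(Q - 1) + 1   # private key, never leaves the device
    y = pow(G, x, P)                   # public key, stored by the server
    return x, y

def sign_challenge(x: int, challenge: bytes):
    """Device signs the server's random challenge with its private key."""
    k = secrets.randbelow(Q - 1) + 1   # fresh nonce per signature
    r = pow(G, k, P)
    e = h(str(r).encode(), challenge)
    s = (k + x * e) % Q
    return e, s

def verify(y: int, challenge: bytes, e: int, s: int) -> bool:
    """Server checks the signature using only the public key."""
    r = (pow(G, s, P) * pow(y, -e, P)) % P   # recovers G^k if signature is valid
    return h(str(r).encode(), challenge) == e
```

In this picture, Google Password Manager's job is syncing the private-key material between your devices (end-to-end encrypted, per the company), so each device can answer the server's challenge directly instead of bouncing through a QR code.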
Today’s update also brings another layer of security to passkeys on Google Password Manager. The company has introduced a six-digit PIN that will be required when using passkeys on a new device. This would likely stop nefarious actors from logging into an account even if they've somehow gotten ahold of the digital credentials. Just don’t leave the PIN lying on a sheet of paper right next to the computer.
Google passkeys can already be used with the company’s productivity software, of course, but also with Amazon, PayPal and WhatsApp. Google Password Manager is built right into Chrome and Android devices.
This article originally appeared on Engadget at https://www.engadget.com/apps/google-passkeys-can-now-sync-across-devices-on-multiple-platforms-160056596.html?src=rss