The latest addition to Proton's workplace suite is a spreadsheet tool called Proton Sheets. It will offer real-time collaboration, and users can control who has access to view and edit files. Proton Sheets can also be accessed on any device, including mobile ones. It supports importing CSV and XLS files, and the spreadsheets also support commonly used formulas for calculations.
A big part of Proton's pitch is privacy, promising that users' information won't be used for training AI. The company also protects user data with end-to-end encryption by default; the press release pointedly notes that products like Google Sheets and Microsoft Excel don't do the same.
The Sheets app will be a part of Proton Drive, which already includes a Docs platform with several features similar to those offered by other productivity tools from big tech brands. Proton also offers a VPN and a Mail app.
This article originally appeared on Engadget at https://www.engadget.com/apps/proton-sheets-joins-the-companys-productivity-suite-110000344.html?src=rss
Alan Dye, Apple's Vice President of Human Interface Design, has been poached by Meta, Bloomberg reports. The designer has played a pivotal role in the look and feel of Apple's products since Jony Ive left the company in 2019, and now he'll be taking his talents to Meta.
Dye will reportedly work under Chief Technology Officer Andrew Bosworth as the head of a new studio that will oversee the design of hardware, software and AI products. The studio will also include former Apple designer Billy Sorrentino, Meta’s interface design lead Joshua To, an industrial design team led by Pete Bristol, and metaverse design and art teams led by Jason Rubin, Meta CEO Mark Zuckerberg announced on Threads.
“The new studio will bring together design, fashion, and technology to define the next generation of our products and experiences,” Zuckerberg shared in the post. “Our idea is to treat intelligence as a new design material and imagine what becomes possible when it is abundant, capable and human-centered.”
Apple, meanwhile, is replacing Dye with Stephen Lemay, Bloomberg reports, a senior designer at the company who's worked on all of its interfaces since 1999. Given Apple's secrecy, it's hard to credit breakthroughs to individual designers, but Dye worked on several of Apple's major new platforms and design changes, including the interface of visionOS and its new Liquid Glass design language.
Meta has had success with its Quest virtual reality headsets and more recently, its Ray-Ban Meta smart glasses, but the company clearly hopes to release many more consumer hardware products with Dye and its new design studio’s help. Those will likely include future versions of the Meta Ray-Ban Display and its Neural Band accessory.
Dye isn't the first designer Apple has lost to a competitor. Evans Hankey, the company's former head of industrial design, left Apple in 2022 to work with Ive. Hankey is now one of several former Apple employees building OpenAI's upcoming hardware device. Dye joining Meta is particularly interesting because Apple is rumored to be working on products that will bring the company into even closer competition with the social media giant. The Vision Pro could be considered a high-end competitor in VR, but Apple is reportedly working on its own pair of smart glasses, too.
Update, December 3, 5:54PM ET: Added information from Mark Zuckerberg’s Threads post on hiring Alan Dye and Meta’s new design studio.
This article originally appeared on Engadget at https://www.engadget.com/big-tech/apple-design-lead-alan-dye-is-heading-to-meta-214449944.html?src=rss
Earlier this year, Apple launched a new tool that makes it easier to read anything on your device’s screen. Designed for people with visual disabilities, Accessibility Reader provides a full-screen view of any on-screen text. (It’s a bit like Safari’s Reader Mode, only for any app.) The feature also lets you listen to your text read aloud.
Accessibility Reader is available for iPhone, iPad, Mac and Vision Pro. Your device will need to be on iOS 26, iPadOS 26, macOS 26 Tahoe or visionOS 26.
It’s a fairly straightforward experience. But since it offers several launch and customization options, here’s a quick breakdown on getting started and tweaking it to your liking.
How to turn on and open Accessibility Reader
The Accessibility Reader settings toggle on macOS.
Activating the feature is the same on any Apple device. Go to Settings > Accessibility > Read & Speak, and turn on Accessibility Reader. (It’s at the very top.) Once you’ve done that, there are several ways to launch the tool.
Accessibility Shortcut (iOS / iPadOS / visionOS)
Triple-click the lock button. That’s the side button on iPhone and the top button on iPad and Vision Pro. (On older iPads, triple-click the Home Button.) This brings up the Accessibility Shortcut, which includes a quick-launch item for Accessibility Reader.
If you don’t need the other items in this menu, you can remove them at Settings > Accessibility > Accessibility Shortcut. Then, using the shortcut will immediately launch Accessibility Reader.
Control Center (iOS / iPadOS)
You can add a Control Center shortcut for the tool. Swipe down from the top-right to launch Control Center. Then, hold your finger on an empty part of the screen. Choose “Add a control” (bottom), and find the Accessibility Reader shortcut. You can now tap that Control Center icon whenever you want to launch it.
Keyboard Shortcut (macOS)
The default Accessibility Reader shortcut on Mac is Cmd-Esc. Or, customize it in Settings > Accessibility > Accessibility Reader by clicking the “i” next to the menu item.
Accessibility Shortcut (macOS)
The tool is also available as part of the Mac’s Accessibility Shortcut. You can launch this menu using a keyboard shortcut (Opt-Cmd-F5), by quickly pressing Touch ID three times or with a Control Center shortcut. (However, the above Cmd-Esc shortcut should be the simplest for most people.)
How to listen to text in Accessibility Reader
The tool also includes a text-to-speech (TTS) option. Once you’ve launched Accessibility Reader, listening is as simple as pressing the play button (▶). You can then use the pause (⏸) shortcut to take a break.
Other options include skipping backward or forward using the rewind or fast-forward symbols. There’s also a speed adjustment, which you can change by choosing the 1x button.
If you want the Reader to speak text automatically when it opens, you can do that, too. That option is found under Settings > Accessibility > Accessibility Reader. (On Mac, select the “i” symbol next to the menu entry to find this option.)
How to customize Accessibility Reader
It’s easy to adjust the font size, color, theme and more. Once you’ve launched Accessibility Reader, tap the customization (AA) button. There, you can change the theme, colors, font, line spacing and much more.
This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/how-to-use-accessibility-reader-on-apple-devices-212231319.html?src=rss
Micron Technology is winding down its consumer-facing Crucial brand to focus on providing RAM and other components to the AI industry, The Wall Street Journal reports. The company plans to continue shipping Crucial RAM and storage through February 2026, and will honor warranty service and support for its existing Crucial products even after it stops selling directly to consumers.
"The AI-driven growth in the data center has led to a surge in demand for memory and storage. Micron has made the difficult decision to exit the Crucial consumer business in order to improve supply and support for our larger, strategic customers in faster-growing segments," Sumit Sadana, Micron Technology's EVP and Chief Business Officer, said in an announcement to investors. Micron Technology didn't share how many jobs could be impacted by shuttering Crucial, but did note that it hoped to soften the blow via "redeployment opportunities into existing open positions within the company."
The majority of generative AI products used today are supported by a growing network of data centers that train and host large language models. The rapid buildout of servers at these data centers has been a boon to chipmakers like NVIDIA, which provides the GPUs used to power them, as well as companies like Micron, which builds the memory components those computers need to run. It's not surprising the company would want to focus on where demand is growing, but it does put considerable strain on the remaining companies that continue to serve both businesses and hobbyist PC builders.
There were next to no true deals on memory or pre-built PCs for Black Friday due to how costly RAM has become now that AI companies are buying it in bulk. PC maker CyberPowerPC even went as far as to say that "global memory (RAM) prices have surged by 500 percent and SSD prices have risen by 100 percent," forcing it to raise prices on its products. Losing another source of RAM like Crucial likely won't make things any better.
This article originally appeared on Engadget at https://www.engadget.com/computing/crucial-is-a-casualty-of-ais-hunger-for-ram-185910113.html?src=rss
India will no longer require smartphone makers to preinstall the Sanchar Saathi "security" app. After blowback from Apple, Samsung and opposition leaders, the Modi government issued a statement saying it "has decided not to make the pre-installation mandatory for mobile manufacturers." The app is still available as a voluntary download.
India's Ministry of Communications framed the U-turn as a result of strong voluntary adoption. The ministry said 14 million users (around 1 percent of the nation's population) have downloaded the app. "The number of users has been increasing rapidly, and the mandate to install the app was meant to accelerate this process and make the app available to less aware citizens easily," the statement read.
In a statement sent to Engadget, the Electronic Frontier Foundation (EFF) celebrated India’s reversal. "This was a terrible and dangerous idea by the Indian government that lasted 24 hours longer than it ever should have," EFF Civil Liberties Director David Greene wrote. "We thank our colleague organizations in India, such as SFLC.in and Internet Freedom Foundation, for promptly opposing it."
The Indian government had previously given smartphone makers 90 days to preinstall the Sanchar Saathi app on all new phones. They were also required to deliver it to existing devices via software updates. India claims its app exists solely for cybersecurity purposes. It includes tools allowing users to report and lock lost or stolen devices.
But privacy advocates warned that it could be used as a government backdoor for mass surveillance. According to the BBC, the app’s privacy policy allows it to make and manage calls and send messages. It can access call and message histories, files, photos and the camera.
Reuters reports that industry experts cited Russia as the only known precedent for such a requirement. In August, Vladimir Putin's regime ordered the messenger app MAX to be preinstalled on all mobile devices in the country. As with India's app, experts warned that it could be used for surveillance.
On Tuesday, Reuters reported that Apple would not comply with India's order, citing privacy and security concerns. Samsung reportedly followed. Opposition leaders in India also joined the fray. Senior Congress leader Randeep Singh Surjewala called on the Modi government to clarify its legal authority for "mandating a non-removable app." Despite India's framing, it seems likely that the two companies' stances, along with domestic political pressure, played no small role in the reversal.
Update, December 3, 2025, 2:50 PM ET: This story has been updated to add a statement from the EFF.
This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/india-will-no-longer-require-smartphone-makers-to-preinstall-its-state-run-cybersecurity-app-171500923.html?src=rss
Sony is bringing another of its long-running game franchises to iOS and Android in the shape of MLB The Show Mobile. This is a free-to-play “standalone experience built from the ground up to deliver realistic baseball gameplay on mobile devices.” San Diego Studio, the developer of every MLB The Show game since the series debuted in 2006, is behind this mobile game as well.
MLB The Show Mobile, which was spotted by Gematsu, doesn’t feature crossplay with console games. For now, it’s only available in the Philippines and it went live there on Wednesday. Sony says it doesn’t have a timeline in place for expanding availability to more territories, but it certainly plans to do that. It’s not uncommon for mobile games to have a soft launch in select regions before they’re made available elsewhere. Sony is doing the same thing with a Ratchet and Clank multiplayer game.
Sony is optimizing MLB The Show Mobile for more recent mobile devices. On the iOS side, that means “iPhone 16 or comparable” devices. As for Android, you’ll get the best experience on Samsung Galaxy S25, Sony Xperia V or a comparable device, according to the game’s website.
MLB The Show Mobile features solo and player-vs-player modes. There are more than 1,100 cards representing baseball players in the game. You’ll be able to build out an all-star roster of MLB players past and present, and upgrade their cards. San Diego Studio appears to be tapping into the Ultimate Team modes of EA Sports games, as you’ll be able to buy and sell cards with other players in a marketplace. Sony also notes that in-game purchases can include random items.
Each of these player cards has a momentum cost. These are stat points you can use strategically to better your chances of winning. The gameplay is skill-based. You’ll need to get the timing right to throw a great pitch or hit the ball out of the park. You’ll have real-time control of runners as well, so you can try to steal bases.
This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/sony-is-bringing-mlb-the-show-to-ios-and-android-163300468.html?src=rss
Video used to be an afterthought for Nikon, but since the company purchased RED last year, content creators are now high on its priority list. A perfect example of that is Nikon’s new $2,200 ZR: a full-frame mirrorless model that stands up against dedicated cinema cameras for a fraction of the price.
It’s the first consumer camera to capture video using RED’s 12-bit RAW format, but unlike RED’s Hollywood cameras, it has a fast and accurate autofocus system. It also comes with a huge display, pro video monitoring tools, in-body stabilization and 32-bit float internal audio recording. After shooting a short film that tested its capabilities, I can confirm that the Nikon ZR offers incredible video quality at this price.
Body and handling
While a bit lighter than Nikon's Z6 III, the 1.19-pound (540-gram) ZR feels solid. It has a boxy design like Sony's FX2 but a much smaller grip because it's designed to be rigged up for cinema shooting with cages and handles. However, unlike the FX2, which has multiple 1/4-inch mounting threads for such rigging, the ZR unfortunately has only one, on the bottom.
Unlike the FX2, the ZR also lacks an electronic viewfinder, but it more than makes up for that with its huge 4-inch display — the largest I've ever seen on a mirrorless camera. At 1,000 nits, it's bright enough to shoot on sunny days, extremely sharp (3.07 million dots) and flips out for vloggers. All of that makes it a perfect primary display for checking the image and controlling the camera.
Nikon has nailed the ZR’s handling, too. While it’s not covered with buttons and dials like some models, it does have two shooting dials to control exposure and a joystick for autofocus. There’s also a camera/video switch, two record buttons, a power switch and five customizable buttons. Many of Nikon’s lenses come with control rings as well, so extra manual control is available.
The menu button is unusual: you press once for the quick menu and hold to see the full menu. Given the large number of settings, I would advise anyone buying this camera to learn all the important adjustments, then customize the controls to avoid wading through dense menus while shooting.
Another unique feature is in the battery compartment. There’s a single fast CFexpress slot to handle RAW video, plus a microSD slot for proxies. The lack of a second CFexpress slot or fast SD card slot for backup isn’t ideal for a professional camera, though.
Finally, the ZR runs on the same EN-EL15c batteries as other Nikon mirrorless cameras. They allow 90 minutes of HD shooting on a charge, or 390 photos per CIPA standards. That's mediocre, so if you're planning long shoots, stock up on spares.
Video
Steve Dent for Engadget
The Nikon ZR has the largest selection of RAW video settings I've seen. The centerpiece is RED's R3D NE light codec (designed by RED for Nikon) with RED's Log3G10 log format. It also supports Nikon's N-RAW, ProRes/ProRes RAW and H.265, with resolutions ranging from 6K at up to 60 fps to 4K at 120 fps and 1080p at 240 fps. Despite the smallish body, it can capture 6K RAW video continuously for 125 minutes without overheating.
The 24MP sensor uses a dual ISO system with native 800 and 6,400 ISOs, providing a nice range for indoor and outdoor shooting. The company claims 15+ stops of dynamic range, which is more than just about any other mirrorless camera. Other key video features include five-axis in-body stabilization with seven stops of shake reduction, waveform and vectorscope monitoring and a false color display for manual focus.
To test the camera’s features and video quality, I shot a short film in low indoor light, outdoor daylight and transitions between the two. I also shot handheld (including running with the camera) to test the stabilization. I primarily captured in R3D RAW, as well as Nikon’s N-RAW, at the native 800 and 6,400 ISOs to maximize dynamic range. (You can take 24MP photos with this camera, but I’m focusing on video as it’s mainly designed for that.)
To avoid seeing a flat log profile while shooting, you’ll need to apply a look-up table (LUT) designed for RED cameras, like "Achromic," "Bleach" or "Caustic." Those are only for in-camera previews and aren’t baked into the video, but you can apply the same LUTs later in Adobe Premiere or DaVinci Resolve to get an identical look.
Steve Dent for Engadget
With such a high native ISO, I was able to shoot inside with a single studio light. Video quality was outstanding with little noise in shadow regions, even after boosting black levels in post. Meanwhile, the RED R3D codec and Log3G10 gave me extra latitude to reveal shadow detail and dial down highlights when I shot the subject against a bright window.
When you use the R3D codec, exposure is strictly manual with no ability to set auto shutter speed (shutter angle) or f-stop. So, for a scene with varying light, I used Nikon’s N-RAW to see if it would give me the correct exposure at the beginning and end of the scene. It did a good job, with no noticeable jumps during the shot.
Video in sunlight at ISO 800 was also sharp with accurate colors after downscaling to 4K from 6K in DaVinci Resolve. ISO 800 is a relatively high native setting, though, and the ZR doesn’t have a built-in ND filter to reduce exposure. That means you’ll need to buy ND filters for outside shooting or the high shutter speeds will result in choppy video.
Cinema cameras from Blackmagic Design, Arri or RED are manual-focus only. But the ZR is a Nikon camera, and it has the best AF system I’ve seen on any of the company’s models, consistently nailing focus even with moving subjects. You can also automatically track vehicles, birds and other animals. At the same time, the ZR handles manual focus well. That’s thanks to a built-in display that’s big enough to check focus accurately and Nikon’s focus peaking setting with three levels of sensitivity.
Steve Dent for Engadget
In-body stabilization on the ZR wasn’t up to par with Panasonic’s S1 II, however. Video was smooth for handheld shooting if I panned the camera gently, but all my running and walking shots showed noticeable camera shake. That said, the ZR at least has in-body stabilization, unlike most cinema cameras, and most filmmakers will use a gimbal for running shots, regardless of which camera they use. (Note that the rattling you hear when the ZR is turned off is the sensor, which floats by design.)
Finally, I was able to capture good audio quality via an external microphone without any clipping worries thanks to the Nikon ZR’s 32-bit float internal audio capture. The company also touts directional capture using its built-in mics, but as with any such system, audio quality isn’t high enough for production use.
Wrap-up
With the ZR, Nikon has shown that it’s finally catching up to and even surpassing its rivals for content creation. Whether you’re doing social media, YouTube, documentaries or even film production, this camera is versatile and powerful with few compromises. Its video quality and ease of use even beat models that are double or triple the price.
The ZR’s primary competition comes from low-end cinema cameras, particularly Sony’s $2,998 FX2 and the $3,899 Canon R5C. While more expensive, both come with an electronic viewfinder that the ZR lacks, and the R5C can shoot up to 8K video. Another option is Blackmagic Design’s Pyxis 6K camera, but it only offers basic autofocus capabilities and lacks in-body stabilization.
Compared to those options, Nikon’s ZR delivers better dynamic range thanks to the inclusion of RED’s R3D RAW codec. It also comes with an excellent autofocus system and decent in-body stabilization. If you’re a creator looking to get the best video quality for the money without losing those niceties, I’d highly recommend the ZR.
This article originally appeared on Engadget at https://www.engadget.com/cameras/nikon-zr-review-a-highly-capable-cinema-camera-at-a-reasonable-price-152634311.html?src=rss
Superhuman, the AI-powered mail app, is heading in a more agentic direction with its latest update. Its "write with AI" feature, which you could previously activate when drafting an email, now works across your inbox, calendar, and the web. This means it can now pull in information from other emails or research a topic online. The AI will think for as long as it needs before responding to a prompt and will open its Ask AI tool if it needs clarification.
Ask AI now lives in a left sidebar when you’re on desktop, so it’s always accessible should you need to draft a note, ask a question or quickly schedule a meeting without digging around in your emails. You can also now check your Ask AI history on iOS and desktop for previous conversations. Write with AI is also now available on Android, which will soon gain the other new features too.
Superhuman
Superhuman was acquired by Grammarly earlier this year, with the latter recently rebranding so that all of its AI apps now sit under the Superhuman umbrella. The mail service seems targeted primarily at businesses rather than consumers, with the most advanced versions of Write with AI and Ask AI included in the Business and Enterprise plans. The standard version of Write with AI is rolled into the Starter plan for desktop and mobile.
Superhuman is promising further agentic updates in the near future.
This article originally appeared on Engadget at https://www.engadget.com/ai/superhuman-formerly-grammarly-has-some-ai-updates-for-its-superhuman-mail-app-140017716.html?src=rss
One of the iPhone’s many accessibility features is something Apple calls "Magnifier," which uses the smartphone's cameras to magnify and identify objects in the world around you. For Global Accessibility Awareness Day in May this year, Apple brought Magnifier to the Mac, opening up even more places the assistive tool can be used, like classroom or work environments where you might already have a MacBook pulled out.
Magnifier requires macOS 26 Tahoe and can work with a built-in webcam, a connected third-party camera or an iPhone via Apple's Continuity feature. Provided your MacBook can run Apple’s latest software update, it’s a natural fit for zooming in on a whiteboard at the back of a large lecture hall or getting a closer look at documents on a desk in front of you. You can use the app to capture an individual image you want to refer to later, or to analyze text in a live video feed. But where to begin? Here’s how to set up and use Magnifier on your Mac.
How to use Magnifier to identify and display text
A MacBook using Magnifier and a connected iPhone to identify and format text from a book.
Apple
Magnifier's most powerful feature uses the MacBook's machine learning capabilities to identify, display and format text that your camera captures. This works with text your camera can see in the room around you, and things it captures via macOS' Desk View feature. For example, to view documents on your desk:
Open Magnifier.
Click on the Camera section in Magnifier's menu bar and then select your Desk View camera from the dropdown menu.
Click on the Reader icon (a simple illustration of a document) near the top-right of your Magnifier window.
Click on the sidebar menu icon to access settings to format text.
Apple gives you options to change the color, font and background of text Magnifier identifies, among other customization options. If you'd prefer to capture faraway text, you can position a webcam or iPhone camera facing away from you and swap to it via the Camera section in Magnifier's menu bar.
You can also listen to any text Magnifier has identified by clicking on the Play button in the top-right corner of Magnifier's reader mode. Clicking the Pause button pauses playback, clicking the Skip Forward or Skip Backward buttons skips through lines of text, and if you want to adjust playback speed, you can click on the 1x button and pick a speed from the dropdown menu.
How to use Magnifier to zoom in on yourself
Magnifier can identify text, but it also works as a way to get a zoomed in view of your own face.
Ian Carlos Campbell for Engadget
By default, Magnifier uses your MacBook's built-in webcam, which means you'll see a view of yourself and whatever's behind you if you don't have another camera selected. This might not be useful for seeing faraway text, but it is handy if you're applying makeup, putting in contacts or doing anything else where you need a detailed view of your face.
In my tests, using Magnifier worked the best with my MacBook's built-in webcam or an iPhone. When I tried using a third-party webcam from Logitech, my live camera feed was noticeably laggy. Your mileage may vary, but if you experience any issues with your own webcam, it's worth trying your built-in webcam to see if that helps. You can swap between cameras and zoom in to your camera feed inside the Magnifier app:
Open Magnifier.
In the top menu bar, select Camera and then click on the camera you'd like to use in the dropdown menu.
Use the slider in the top center of the Magnifier window to zoom in on yourself.
You can see a live feed of your zoomed in view in Magnifier's main window. If you click on the Camera button in the bottom-left corner of the app, you can also snap a photo to review later. Any photos you capture will appear in Magnifier's left sidebar menu. Clicking on them lets you view them, zoom in on them and adjust their visual appearance (Brightness, Contrast and other visual settings) via the Image section in Magnifier's menu bar.
This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/how-to-use-magnifier-on-a-macbook-to-zoom-in-on-faraway-text-080100677.html?src=rss
ExpressVPN, one of the best VPNs, is launching two brand-new features that sound confusingly like things it already does. Users on Android, Mac and iOS (but apparently not Windows, Linux or smart TVs) can now use Fastest Location to automatically pick the VPN server with the fastest download speed and lowest latency. Mac users are also getting an overhauled ExpressVPN app designed to work natively with macOS.
If you've used ExpressVPN before, your first reaction probably went something like "Wait, didn't it already have a Fastest Location button and a Mac app?" You're not wrong, but there's still a meaningful difference with these new features. In the past, ExpressVPN didn't technically pick the fastest location, but the Smart Location, which picks the best available server using "metrics such as download speed, latency, and distance" (emphasis mine). Those are the same metrics as the new feature, but the "such as" makes me think there are, or were, other ingredients in the Smart Location algorithm.
My guess is that ExpressVPN is rebranding "smart" to "fastest" in response to customer complaints that "smart" was picking sub-optimal server locations. That's not a behavior I noticed when I last reviewed ExpressVPN — the smart location was always plenty fast for me — but I'm just one user. Only testing can show whether ExpressVPN actually changed the algorithm or just the name.
The new Mac app is a more straightforward upgrade. While ExpressVPN has always had a client for Mac, it's thus far been a port of an app originally developed for iPad. This makes its otherwise-excellent interface feel a bit like, well, a phone app you use on your desktop. In contrast, the new app was built using Project Catalyst, which lets Mac developers turn their iOS apps into desktop-native software. The new interface looks a lot richer, using the screen space a lot like Proton VPN does. And being more like Proton VPN is rarely a bad thing.
This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/vpn/expressvpn-adds-a-fastest-location-button-and-launches-a-new-native-mac-app-205837728.html?src=rss