Sony is bringing another of its long-running game franchises to iOS and Android in the shape of MLB The Show Mobile. This is a free-to-play “standalone experience built from the ground up to deliver realistic baseball gameplay on mobile devices.” San Diego Studio, the developer of every MLB The Show game since the series debuted in 2006, is behind this mobile game as well.
MLB The Show Mobile, which was spotted by Gematsu, doesn’t feature crossplay with the console games. For now, it’s only available in the Philippines, where it went live on Wednesday. Sony says it doesn’t have a timeline for expanding availability to more territories, but it certainly plans to do so. It’s not uncommon for mobile games to soft launch in select regions before they’re made available elsewhere; Sony is doing the same thing with a Ratchet & Clank multiplayer game.
Sony is optimizing MLB The Show Mobile for more recent mobile devices. On the iOS side, that means “iPhone 16 or comparable” devices. As for Android, you’ll get the best experience on Samsung Galaxy S25, Sony Xperia V or a comparable device, according to the game’s website.
MLB The Show Mobile features solo and player-vs-player modes. There are more than 1,100 cards representing baseball players in the game. You’ll be able to build out an all-star roster of MLB players past and present, and upgrade their cards. San Diego Studio appears to be tapping into the Ultimate Team modes of EA Sports games, as you’ll be able to buy and sell cards with other players in a marketplace. Sony also notes that in-game purchases can include random items.
Each of these player cards has a momentum cost, with momentum being stat points you can spend strategically to better your chances of winning. The gameplay is skill-based: you’ll need to get the timing right to throw a great pitch or hit the ball out of the park. You’ll have real-time control of runners as well, so you can try to steal bases.
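Sony hasn’t detailed how momentum actually works under the hood, but as a purely illustrative sketch (every name, field and number below is invented), a card’s cost might gate when you can put it on the field:

```python
# Purely illustrative: Sony hasn't published how momentum works, so the
# fields, numbers and names here are invented for the sake of example.
from dataclasses import dataclass

@dataclass
class PlayerCard:
    name: str
    overall: int        # card rating
    momentum_cost: int  # stat points required to field this card

def can_field(card: PlayerCard, momentum_available: int) -> bool:
    """Stronger cards presumably cost more momentum, forcing trade-offs."""
    return card.momentum_cost <= momentum_available

star = PlayerCard("Ken Griffey Jr.", overall=99, momentum_cost=12)
print(can_field(star, momentum_available=10))  # False: save up first
```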
This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/sony-is-bringing-mlb-the-show-to-ios-and-android-163300468.html?src=rss
Waymo is adding four new US cities to the gradual rollout of its robotaxi service. As reported by TechCrunch, the company said it has already started trialling self-driving cars in Philadelphia, albeit with a human safety monitor, and that it will now commence similar manual tests in Baltimore, St. Louis and Pittsburgh.
After the initial supervisory and data-collecting stage, the plan is to deploy fully autonomous vehicles, as Waymo recently did in Miami, ahead of launching in five new cities across Texas and Florida in 2026. Waymo's taxis currently accept passengers in Los Angeles, Phoenix, Atlanta, Austin and the San Francisco Bay Area, and it recently announced that San Diego, Las Vegas and Detroit would soon be joining them.
Also key to the company’s aggressive nationwide expansion is New York City, even if a fully functioning robotaxi service is likely still some way off. New York state law currently prohibits the operation of vehicles without a driver behind the wheel, but back in August, Waymo was temporarily granted the permit it needed to test autonomous vehicles in parts of Manhattan and Downtown Brooklyn. The testing phase ran until late September, marking the first time a permit for the "testing deployment" of AVs in the city had been signed off.
Waymo has international ambitions too. Next year it will partner with Moove to launch a robotaxi service in London, which will be its first major expansion outside the US. Fully driverless cars are currently banned in the UK, but new legislation will begin a long regulatory process that starts with government-approved robotaxi pilots in the spring.
This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/waymos-testing-avs-in-four-more-cities-including-philly-161709279.html?src=rss
Video used to be an afterthought for Nikon, but since the company purchased RED last year, content creators are now high on its priority list. A perfect example of that is Nikon’s new $2,200 ZR: a full-frame mirrorless model that stands up against dedicated cinema cameras for a fraction of the price.
It’s the first consumer camera to capture video using RED’s 12-bit RAW format, but unlike RED’s Hollywood cameras, it has a fast and accurate autofocus system. It also comes with a huge display, pro video monitoring tools, in-body stabilization and 32-bit float internal audio recording. After shooting a short film that tested its capabilities, I can confirm that the Nikon ZR offers incredible video quality at this price.
Body and handling
While a bit lighter than Nikon’s Z6 III, the 1.19-pound (540-gram) ZR feels solid. It has a boxy design like Sony’s FX2 but a much smaller grip, because it’s designed to be rigged up for cinema shooting with cages and handles. However, unlike the FX2, which has multiple 1/4-inch mounting threads for exactly that kind of rigging, the ZR unfortunately has only one, on the bottom.
Unlike the FX2, the ZR also lacks an electronic viewfinder, but it more than makes up for that with its huge 4-inch display — the largest I’ve ever seen on a mirrorless camera. At 1,000 nits, it’s bright enough to shoot on sunny days, extremely sharp (3.07 million dots) and flips out for vloggers. All of that makes it a perfect primary display for checking the image and controlling the camera.
Nikon has nailed the ZR’s handling, too. While it’s not covered with buttons and dials like some models, it does have two shooting dials to control exposure and a joystick for autofocus. There’s also a camera/video switch, two record buttons, a power switch and five customizable buttons. Many of Nikon’s lenses come with control rings as well, so extra manual control is available.
The menu button is unusual: you press once for the quick menu and hold to see the full menu. Given the large number of settings, I would advise anyone buying this camera to learn all the important adjustments, then customize the controls to avoid wading through dense menus while shooting.
Another unique feature is in the battery compartment. There’s a single fast CFexpress slot to handle RAW video, plus a microSD slot for proxies. The lack of a second CFexpress slot or fast SD card slot for backup isn’t ideal for a professional camera, though.
Finally, the ZR runs on the same EN-EL15c batteries as other Nikon mirrorless cameras. They allow 90 minutes of HD shooting on a charge, or 390 photos per CIPA standards. That’s mediocre, so if you’re planning long shoots, stock up on batteries.
Video
The Nikon ZR has the largest selection of RAW video settings I’ve seen. The centerpiece is RED’s lightweight R3D NE codec (a RAW format RED designed for Nikon) paired with RED’s Log3G10 log profile. It also supports Nikon’s N-RAW, ProRes/ProRes RAW and H.265, at resolutions ranging from 6K at up to 60 fps to 4K at 120 fps and 1080p at 240 fps. Despite the smallish body, it can capture 6K RAW video continuously for 125 minutes without overheating.
The 24MP sensor uses a dual ISO system with native 800 and 6,400 ISOs, providing a nice range for indoor and outdoor shooting. The company claims 15+ stops of dynamic range, which is more than just about any other mirrorless camera. Other key video features include five-axis in-body stabilization with seven stops of shake reduction, waveform and vectorscope monitoring, and a false color display for judging exposure.
To test the camera’s features and video quality, I shot a short film in indoor low light, outdoor daylight and mixed conditions between the two. I also shot handheld (including running with the camera) to test the stabilization. I primarily captured R3D RAW, along with Nikon’s N-RAW, at the native 800 and 6,400 ISOs to maximize dynamic range. (You can take 24MP photos with this camera, but I’m focusing on video since that’s what it’s mainly designed for.)
To avoid seeing a flat log image while shooting, you’ll need to apply a look-up table (LUT) designed for RED cameras, like "Achromic," "Bleach" or "Caustic." Those are only for in-camera previews and aren’t baked into the video, but you can apply the same LUTs later in Adobe Premiere or DaVinci Resolve to get an identical look.
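That round trip (preview LUT in camera, same LUT applied in post) can also be reproduced outside an NLE. Here’s a minimal sketch, assuming the open-source colour-science Python package and a .cube copy of one of those LUTs; the file names are placeholders, and this is not Nikon’s or RED’s own tooling:

```python
# Minimal sketch: apply a .cube LUT to a log frame in post, mirroring
# the in-camera preview. Assumes `pip install colour-science` and uses
# placeholder file names.
import colour

# Load a frame of Log3G10 footage as floating-point RGB.
frame = colour.read_image("log_frame.exr")

# Read the same 3D LUT the camera used for its preview.
lut = colour.read_LUT("Achromic.cube")

# Apply the LUT to bake the look into the frame.
graded = lut.apply(frame)

colour.write_image(graded, "graded_frame.exr")
```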
With such a high native ISO, I was able to shoot inside with a single studio light. Video quality was outstanding with little noise in shadow regions, even after boosting black levels in post. Meanwhile, the RED R3D codec and Log3G10 gave me extra latitude to reveal shadow detail and dial down highlights when I shot the subject against a bright window.
When you use the R3D codec, exposure is strictly manual, with no way to have the camera automatically set shutter speed (shutter angle) or f-stop. So for a scene with varying light, I switched to Nikon’s N-RAW to see if the camera would expose correctly at both the beginning and end of the scene. It did a good job, with no noticeable jumps during the shot.
Video in sunlight at ISO 800 was also sharp, with accurate colors after downscaling from 6K to 4K in DaVinci Resolve. ISO 800 is a relatively high native setting, though, and the ZR doesn’t have a built-in ND filter to cut the light. That means you’ll need to buy ND filters for outdoor shooting, or the high shutter speeds you’re forced into will make motion look choppy.
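The arithmetic behind that is straightforward: filmmakers typically follow the 180-degree shutter rule (a roughly 1/48 s shutter at 24 fps) for natural motion blur, and each stop of ND halves the incoming light. Here’s a quick sketch; the metering numbers are invented for illustration:

```python
# Rough sketch of the ND filter math; the metering numbers are invented.
import math

def nd_stops_needed(metered_shutter: float, target_shutter: float) -> float:
    """Each stop of ND halves the light, letting the shutter stay open
    twice as long for the same exposure."""
    return math.log2(target_shutter / metered_shutter)

# Say bright sun at ISO 800 meters around 1/1600 s at your chosen
# aperture, while the 180-degree rule at 24 fps wants roughly 1/48 s.
stops = nd_stops_needed(metered_shutter=1 / 1600, target_shutter=1 / 48)
print(f"{stops:.1f} stops of ND")  # ~5.1 stops, so reach for an ND32
```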
Cinema cameras from Blackmagic Design, Arri and RED are, for the most part, manual-focus affairs. But the ZR is a Nikon camera, and it has the best AF system I’ve seen on any of the company’s models, consistently nailing focus even with moving subjects. It can also automatically track vehicles, birds and other animals. At the same time, the ZR handles manual focus well, thanks to a built-in display that’s big enough to check focus accurately and Nikon’s focus peaking setting with three levels of sensitivity.
In-body stabilization on the ZR didn’t match Panasonic’s S1 II, however. Video was smooth for handheld shooting if I panned the camera gently, but all my running and walking shots showed noticeable camera shake. That said, the ZR at least has in-body stabilization, unlike most cinema cameras, and most filmmakers will use a gimbal for running shots regardless of which camera they use. (Note that the rattling you hear when the ZR is turned off is the sensor, which floats by design.)
Finally, I was able to capture good audio quality via an external microphone without any clipping worries thanks to the Nikon ZR’s 32-bit float internal audio capture. The company also touts directional capture using its built-in mics, but as with any such system, audio quality isn’t high enough for production use.
Wrap-up
With the ZR, Nikon has shown that it’s finally catching up to, and even surpassing, its rivals for content creation. Whether you’re doing social media, YouTube, documentaries or even film production, this camera is versatile and powerful with few compromises. Video quality and ease of use even beat models that are double or triple the price.
The ZR’s primary competition comes from low-end cinema cameras, particularly Sony’s $2,998 FX2 and the $3,899 Canon R5C. While more expensive, both come with the electronic viewfinder the ZR lacks, and the R5C can shoot up to 8K video. Another option is Blackmagic Design’s Pyxis 6K, but it only offers basic autofocus capabilities and lacks in-body stabilization.
Compared to those options, Nikon’s ZR delivers better dynamic range thanks to the inclusion of RED’s R3D RAW codec. It also comes with an excellent autofocus system and decent in-body stabilization. If you’re a creator looking to get the best video quality for the money without losing those niceties, I’d highly recommend the ZR.
This article originally appeared on Engadget at https://www.engadget.com/cameras/nikon-zr-review-a-highly-capable-cinema-camera-at-a-reasonable-price-152634311.html?src=rss
Amazon is rolling out a new Alexa+ feature on Fire TV that can take you to a specific moment in a given movie on Prime Video based on a natural language voice command. The company says that, when you describe a certain scene, quote or character action, Alexa+ can start playing that part of the film. The company previewed this feature at its Devices and Services event in September.
According to Amazon, you can say something like “Jump to the card scene in Love Actually" or “Jump to the Ozdust ballroom scene in Wicked with Glinda,” to quickly get to that moment. Alexa+ can apparently figure out which movie you're referring to if you don't say the title. So if you say, for instance, “Jump to the scene when John McClane says ‘come out to the coast, we’ll get together, have a few laughs,’” Prime Video will start playing that bit in Die Hard where McClane is in an air duct.
To make this work, Alexa+ uses "visual understanding" and captions to determine what's happening in each scene so it can take you to the one you're looking for. It's all processed through Prime Video's X-Ray feature. Like Alexa+ more broadly, the feature is built on Amazon Bedrock and harnesses large language models such as Amazon Nova and Anthropic's Claude.
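Amazon hasn’t shared the pipeline’s internals, but the general idea (a large language model grounded in per-scene captions) can be sketched with the real Bedrock Runtime Converse API in boto3. The model choice, prompt and scene index below are illustrative assumptions, not Amazon’s actual implementation:

```python
# Hypothetical sketch only: Amazon hasn't published how this feature
# works. The boto3 Converse API call is real, but the model choice,
# prompt and scene index are invented for illustration.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Imagined X-Ray-style index: per-scene captions with start times (seconds).
scenes = [
    {"start": 512, "caption": "McClane greets Holly at the Nakatomi party."},
    {"start": 3421, "caption": "Crawling through an air duct, McClane "
                               "mutters 'come out to the coast...'"},
]

request = "the scene where John McClane says 'come out to the coast'"
prompt = (
    f"Here are captioned scenes as JSON:\n{json.dumps(scenes)}\n"
    f"Which scene matches this request: {request}\n"
    "Reply with the start time only."
)

response = bedrock.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed; any Bedrock LLM would do
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])  # e.g. "3421"
```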
Alexa+ has indexed tens of thousands of scenes across thousands of movies on Prime Video so far, including many that you can purchase or rent. Amazon plans to expand this feature to more films and scenes, as well as TV shows, in the near future.
While it’s pretty interesting from a tech perspective to see how Amazon is able to make this work, I’d be interested to know how many people actually end up using it. This isn’t how most people who genuinely love cinema watch movies — why not just start at the beginning of a film and take it from there? Besides, if you really want to watch a specific scene, YouTube exists.
This article originally appeared on Engadget at https://www.engadget.com/ai/amazon-rolls-out-a-find-a-scene-alexa-feature-for-prime-video-150557530.html?src=rss
Amazon appears to have quietly removed its terrible AI-generated English dubs for several anime shows currently streaming on Prime Video, following widespread ridicule from viewers and industry professionals. AI dubs were recently added to Banana Fish, No Game No Life and Vinland Saga, where they were labeled "AI beta" in the Languages section of the app.
Since those shows previously offered only English subtitles, the option of a dub for those who prefer one could have been seen as a win for Amazon. But it quickly became clear that the dubs were really quite bad, completely devoid of emotion or convincing intonation in dramatic moments. Particularly awful clips of the AI English dub for Banana Fish soon started circulating on social media, and the National Association of Voice Actors released a statement branding the dubs "AI slop."
In his own statement, voice actor Daman Mills called the AI-generated dub for Banana Fish a "massive insult to us as performers." In a post on X, which at the time of writing has been liked 14,000 times, he said: "Voice Actors deserve the same level of respect as on camera performers. Anime already pays talent very little. Dub production costs shouldn’t make a dent in these companies’ pocket books. Using AI for a dub of a show that released nearly 8 YEARS AGO AND HAD NO RUSHED SCHEDULE just spits in our faces, has infuriated the consumer, and completely destroys the art."
Decision-makers at Amazon apparently took note of the backlash: the English dub options no longer show up, as Mills and others acknowledged yesterday. An AI-generated Spanish dub for Vinland Saga appears to have survived the silent cull, but otherwise it’s back to Japanese audio and subtitles for the affected shows.
The company is clearly committed to introducing more AI to Prime Video — along with its various other services — despite this latest public shaming. It launched an "AI-aided" dubbing program earlier this year, piloting English and Latin American Spanish dubs on 12 licensed series and movies on Prime Video in March. Last month, it also introduced video recaps that summarize shows’ "most pertinent plot points" using generative AI. That feature is currently in beta for select English-language Prime Original shows in the US.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/amazon-halts-ai-anime-dub-beta-after-widespread-ridicule-141501051.html?src=rss
Superhuman, the AI-powered mail app, is heading in a more agentic direction with its latest update. Its "write with AI" feature, which you could previously activate when drafting an email, now works across your inbox, calendar, and the web. This means it can now pull in information from other emails or research a topic online. The AI will think for as long as it needs before responding to a prompt and will open its Ask AI tool if it needs clarification.
Ask AI now lives in a left sidebar when you’re on desktop, so it’s always accessible should you need to draft a note, ask a question or quickly schedule a meeting without digging around in your emails. You can also now check your Ask AI history on iOS and desktop for previous conversations. Write with AI is also now available on Android, which will soon gain the other new features too.
Superhuman was acquired by Grammarly earlier this year, and the latter recently rebranded so that all of its AI apps now sit under the Superhuman umbrella. The mail service seems primarily targeted at businesses rather than consumers: the most advanced versions of Write with AI and Ask AI are included in the Business and Enterprise plans, while the more basic standard version of Write with AI is rolled into the Starter plan for desktop and mobile.
Superhuman is promising further agentic updates in the near future.
This article originally appeared on Engadget at https://www.engadget.com/ai/superhuman-formerly-grammarly-has-some-ai-updates-for-its-superhuman-mail-app-140017716.html?src=rss
It's that time of year again, when all of our favorite streaming platforms start dropping personalized lists of what we've been consuming. Spotify Wrapped is perhaps the biggest of the bunch and it's available for perusal right now.
As always, users can access Wrapped to find their most listened-to genres, artists, songs, albums and podcasts from the past year. This information is shareable via social media, if you want random bald eagle avatars to comment on your music taste. There’s also a new interactive feature called Wrapped Party.
This is a game of sorts. Spotify says it "turns your listening data into a live competition." Wrapped Party hands out awards for things like listening to smaller artists or obsessing over a particular one, in addition to total minutes streamed. Finally, friends can settle the age-old debate of "who listens to music more."
Spotify Wrapped is also about the platform itself, so we have plenty of little tidbits from the global user base. Bad Bunny was named the most streamed artist in the world, just ahead of his Super Bowl performance that internet bozos have turned into a controversy for some reason. This is the fourth time he's come out on top in the past five years. He also had the most popular album of the year.
The global top song is something of a surprise, as it's not Bad Bunny or even Taylor Swift. It's the Lady Gaga and Bruno Mars duet "Die With a Smile." The top podcast is, as always, The Joe Rogan Experience. At least Spotify is getting what it paid for with Rogan.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/spotify-wrapped-2025-is-here-and-now-its-a-competition-130052418.html?src=rss
Uber has made a big push this year to offer robotaxis through its rideshare services in more markets. Starting today, the company is offering autonomous vehicles as an option for customers in Dallas, through a partnership with Avride.
At the start, the AVs providing rides will have a person in the front seat, but Uber plans to move to fully driverless operation "in the future." The company will begin with a small fleet of Avride's Hyundai Ioniq 5 vehicles, but it plans to eventually have hundreds of these AVs working in Dallas. Riders can set their preferences in the Uber app to increase their chances of being paired with a robotaxi, and anyone assigned an AV for their ride will have the option to switch to a traditional rideshare driver.
Uber started a partnership with Avride in October 2024, but the rideshare company has cast a wide net for collaborators. It has also worked to bring robotaxis to markets with Waymo in Austin and Atlanta, with Lucid in the Bay Area, with WeRide in Abu Dhabi, and with Momenta in Europe.
This article originally appeared on Engadget at https://www.engadget.com/transportation/uber-is-launching-robotaxis-in-dallas-120000411.html?src=rss
It's time for the annual Rockefeller Christmas tree lighting! The Christmas in Rockefeller Center tree lighting special will air tonight, Dec. 3, from 8-10 p.m. ET, though coverage will start an hour prior, at 7 p.m. ET. The Voice and Happy's Place star Reba McEntire will host and perform at the Rockefeller Tree lighting, which will also feature performances from Halle Bailey, Michael Bublé, Kristin Chenoweth, Laufey, the Radio City Rockettes and more. Here’s how to tune into the 2025 Rockefeller Tree lighting.
When is the 2025 Rockefeller Tree lighting?
The Rockefeller Center Christmas tree will be lit on Wednesday, Dec. 3, 2025.
2025 Rockefeller Tree lighting time:
Coverage of the 2025 Rockefeller Tree lighting will start at 7 p.m. ET. The official Christmas in Rockefeller Center tree lighting special will air from 8-10 p.m. ET.
2025 Rockefeller Tree lighting channel:
The Christmas in Rockefeller Center special will air on NBC and stream on Peacock.
How to watch the 2025 Rockefeller Tree lighting without cable:
The special will stream live on Peacock, so cord-cutters can watch it there without a cable subscription.
Who is hosting the 2025 Rockefeller Tree lighting?
Reba McEntire will host NBC's annual holiday special and perform throughout the evening.
2025 Rockefeller Tree Lighting performers:
Alongside Reba McEntire, the tree lighting ceremony special will feature performances from Marc Anthony, Halle Bailey, Michael Bublé, Kristin Chenoweth, Laufey, New Edition, Brad Paisley, Carly Pearce, Gwen Stefani and the Radio City Rockettes, who are celebrating their 100th anniversary this year.
Ways to watch the Rockefeller Tree lighting for free:
Since the ceremony airs on NBC, anyone with a TV antenna can watch the broadcast for free over the air.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/how-to-watch-the-christmas-in-rockefeller-center-tree-lighting-special-tonight-111504725.html?src=rss
One of the iPhone’s many accessibility features is something Apple calls "Magnifier," which uses the smartphone's cameras to magnify and identify objects in the world around you. For Global Accessibility Awareness Day in May this year, Apple brought Magnifier to the Mac, opening up even more places the assistive tool can be used, like classroom or work environments where you might already have a MacBook pulled out.
Magnifier requires macOS 26 Tahoe and can work with a built-in webcam, a connected third-party camera or an iPhone via Apple's Continuity feature. Provided your MacBook can run Apple’s latest software update, it’s a natural fit for zooming in on a whiteboard at the back of a large lecture hall or getting a closer look at documents on a desk in front of you. You can use the app either to capture an individual image you want to refer to later or to analyze text in a live video feed. But where to begin? Here’s how to set up and use Magnifier on your Mac.
How to use Magnifier to identify and display text
A MacBook using Magnifier and a connected iPhone to identify and format text from a book. (Image: Apple)
Magnifier's most powerful feature uses the MacBook's machine learning capabilities to identify, display and format text that your camera captures. This works with text your camera can see in the room around you, and things it captures via macOS' Desk View feature. For example, to view documents on your desk:
Open Magnifier.
Click on the Camera section in Magnifier's menu bar and then select your Desk View camera from the dropdown menu.
Click on the Reader icon (a simple illustration of a document) near the top-right of your Magnifier window.
Click on the sidebar menu icon to access settings to format text.
Apple gives you options to change the color, font and background of text Magnifier identifies, among other customization options. If you'd prefer to capture faraway text, you can position a webcam or iPhone camera facing away from you and swap to it via the Camera section in Magnifier's menu bar.
You can also listen to any text Magnifier has identified by clicking on the Play button in the top-right corner of Magnifier's reader mode. Clicking the Pause button pauses playback, clicking the Skip Forward or Skip Backward buttons skips through lines of text, and if you want to adjust playback speed, you can click on the 1x button and pick a speed from the dropdown menu.
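Under the hood, reader mode is essentially live optical character recognition paired with text-to-speech. For a rough sense of the OCR half, here’s a sketch using the open-source pytesseract package on a captured frame; this is an analogue, not Apple’s implementation, and the file name is a placeholder:

```python
# Rough analogue of Magnifier's reader mode, not Apple's implementation.
# Assumes `pip install pytesseract pillow` plus a local Tesseract install,
# and uses a placeholder file name for a captured frame.
from PIL import Image
import pytesseract

# Recognize text in the frame, much as reader mode does on a live feed.
text = pytesseract.image_to_string(Image.open("desk_view_frame.png"))
print(text)
```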
How to use Magnifier to zoom in on yourself
Magnifier can identify text, but it also works as a way to get a zoomed-in view of your own face. (Image: Ian Carlos Campbell for Engadget)
By default, Magnifier uses your MacBook's built-in webcam, which means you'll see a view of yourself and whatever's behind you if you don't have another camera selected. That's not much use for reading faraway text, but it is handy if you're applying makeup, putting in contacts or doing anything else that calls for a detailed view of your face.
In my tests, Magnifier worked best with my MacBook's built-in webcam or an iPhone. When I tried a third-party webcam from Logitech, the live camera feed was noticeably laggy. Your mileage may vary, but if you run into issues with an external webcam, it's worth switching to the built-in one to see if that helps. You can swap between cameras and zoom in on your camera feed inside the Magnifier app:
Open Magnifier.
In the top menu bar, select Camera and then click on the camera you'd like to use in the dropdown menu.
Use the slider in the top center of the Magnifier window to zoom in on yourself.
You can see a live feed of your zoomed-in view in Magnifier's main window. If you click on the Camera button in the bottom-left corner of the app, you can also snap a photo to review later. Any photos you capture will appear in Magnifier's left sidebar menu; clicking on them lets you view them, zoom in and adjust their appearance (Brightness, Contrast and other visual settings) via the Image section in Magnifier's menu bar.
This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/how-to-use-magnifier-on-a-macbook-to-zoom-in-on-faraway-text-080100677.html?src=rss