NVIDIA- and Uber-backed Nuro is testing autonomous vehicles in Tokyo

US self-driving startup Nuro, which is backed by the likes of NVIDIA, Toyota and Uber, has started testing its autonomous vehicles on Tokyo's challenging streets, Bloomberg reported. The company, which plans to launch a robotaxi service with Uber and Lucid in San Francisco this year, will be testing a "handful" of vehicles in the city. Human safety drivers will be at the wheel, as is required by Japanese law. 

Tokyo presents a challenge for autonomous vehicles, given its narrow, crowded streets and driving on the left side of the road. "Testing the capability of the autonomy system in such an interesting market with some international complexity really is a good pressure test of what the system is capable of," said CEO Andrew Chapin. The company's ultimate goal is to achieve Level 4 autonomy, which allows full self-driving under limited conditions.

Waymo is the other major robotaxi operator testing vehicles in Tokyo, working with Japanese taxi operator Nihon Kotsu and the country's leading taxi app, Go. It has been operating in the country since April 2025 and also has a collaboration with Toyota.

Nuro has yet to announce which operators or vehicle manufacturers it will be partnering with, but Chapin said it may not limit itself to autonomous rides. "A universal autonomy platform that can be extended to a lot of different applications and form factors is a bit different than the approach Waymo is taking," he told Bloomberg. The company previously teamed with 7-Eleven on autonomous deliveries in Mountain View, California. 

Uber plans to have up to 100,000 autonomous vehicles, including 20,000 robotaxis powered by Lucid and Nuro, with a rollout starting in 2027. It introduced its new vehicle design recently at CES 2026. Uber is also collaborating with Nissan and Wayve with the aim of introducing pilot cars in Tokyo by late 2026.

This article originally appeared on Engadget at https://www.engadget.com/transportation/nvidia--and-uber-backed-nuro-is-testing-autonomous-vehicles-in-tokyo-081200366.html?src=rss

GeForce Now adds GOG syncing and 90fps game streaming in VR headsets

NVIDIA's GeForce Now game streaming platform has added a few minor but useful updates, especially for GOG and VR headset users, the company announced at the Game Developers Conference (GDC). The biggest technical improvement is for virtual reality headsets that support GeForce Now, like the Apple Vision Pro and Meta Quest. Starting next week (March 19), those devices will be able to stream at 90 fps for Ultimate members (up from 60 fps) for improved smoothness, responsiveness and realism.

Another helpful update is in-app labels coming "soon" to GeForce Now. Once you connect an Xbox or Ubisoft account, you'll see clear labels directly on game art inside the GeForce Now app showing exactly what's available to play from your subscription services. NVIDIA is also expanding account linking, adding GOG to the roster of services on top of the Gaijin single sign-on support announced at CES.

GeForce Now is also expanding its Install-to-Play library with select Xbox titles, including Brutal Legend from Double Fine Productions and Compulsion Games' Contrast. Several anticipated games will also arrive on the cloud service at launch, namely Remedy's Control Resonant and Samson: A Tyndalston Story from Liquid Swords.

As a reminder, NVIDIA's GeForce Now is one of the better cloud gaming services out there, particularly since it added GeForce RTX 5080-powered servers that Engadget's Devindra Hardawar called "indistinguishable from a powerful rig." The service recently came to Fire TV sticks and is available on Windows and Mac PCs, NVIDIA's Shield, Android TV, smartphones and many other devices. 

This article originally appeared on Engadget at https://www.engadget.com/gaming/geforce-now-adds-gog-syncing-and-90fps-game-streaming-in-vr-headsets-130656731.html?src=rss

FAA opens up real world testing for air taxi startups

US regulators have approved eight pilot programs across 26 states that will allow Archer, Joby and other eVTOL companies to finally start testing aircraft this summer, according to a US Department of Transportation (DoT) press release. That will allow those manufacturers to run trials for use cases like urban air taxi services, regional passenger transportation, cargo, emergency medical operations and autonomous flight technology. 

The new projects were made possible by the White House's Advanced Air Mobility and eVTOL Integration Pilot Program (e-IPP), approved last year to get certification for such aircraft moving after being stuck in the mud for years. "By safely testing the deployment of these futuristic air taxis and other AAM vehicles, we can fundamentally improve how the traveling public and products move," US Transportation Secretary Sean Duffy said at the time.

Other FAA aircraft partners include Beta, Electra, Elroy Air, Wisk, Ampaire and Reliable Robotics. Key pilot programs were approved for the Texas, Utah, Pennsylvania, Louisiana and North Carolina Departments of Transportation, along with the Port Authority of New York and New Jersey and the City of Albuquerque. We've already glimpsed some of the ideas, like Archer's plan to use air taxis between New York's major airports and city heliports.

A number of eVTOL startups have launched in recent years, but so far none of the aircraft have received "type certificates" for carrying passengers or other commercial purposes. Archer and Joby are the farthest along in that process, having been granted the FAA's final airworthiness criteria, the last step before full approval.

The delays are mostly about safety and working eVTOL planes into existing aviation flows. "The gap isn't technical capability anymore. It's regulatory synchronization," the FAA's Kalea Texeira said last year on LinkedIn. "[That includes factors like] vertiports. Energy supply chains. Part 135 [commercial] integration. Pilot training frameworks that match the aircraft timeline." In the same post, Texeira added that Joby wouldn't certify until mid-2027 at the earliest, with Archer following in 2028.

The new program could help accelerate plane-makers' plans. In a YouTube video, Beta CEO Kyle Clark said selection for the program will help his company start operations a year earlier than it previously expected. Archer, meanwhile, compared the program to robotaxi testing and said it will help build trust with the public for its Midnight aircraft. "This is the clearest sign yet... that bringing air taxis to market in the United States is a real priority," said Archer CEO Adam Goldstein.

This article originally appeared on Engadget at https://www.engadget.com/transportation/faa-opens-up-real-world-testing-for-air-taxi-startups-112219316.html?src=rss

Qualcomm’s new Arduino Ventuno Q is an AI-focused computer designed for robotics

Qualcomm, which purchased microcontroller board manufacturer Arduino last year, just announced a new single-board computer that marries AI with robotics. Called the Arduino Ventuno Q, it uses Qualcomm's Dragonwing IQ8 processor along with a dedicated STM32H5 low-latency microcontroller (MCU). "Ventuno Q is engineered specifically for systems that move, manipulate and respond to the physical world with precision and reliability," the company wrote on the product page.

The Ventuno Q is more sophisticated (and expensive) than Arduino's usual AIO boards, thanks to the Dragonwing IQ8 processor that includes an 8-core Arm Cortex CPU, an Adreno 623 GPU and a Hexagon Tensor NPU that can hit up to 40 TOPS. It also comes with 16GB of LPDDR5 RAM, along with 64GB of eMMC storage and an M.2 NVMe Gen 4 slot to expand that. Other features include Wi-Fi 6, Bluetooth 5.3, 2.5Gbps Ethernet and USB camera support.

The Ventuno Q includes Arduino App Lab, with pre-trained AI models including LLMs, VLMs, ASR, gesture recognition, pose estimation and object tracking, all running offline. It's designed for AI systems that run entirely offline, like smart kiosks, healthcare assistants and traffic flow analysis, along with edge AI vision and sensing systems. It also supports a full robotics stack, including vision processing combined with deterministic motor control for precise vision and manipulation. It's also ideal for education and research in areas like computer vision, generative AI and prototyping at the edge, according to Arduino.
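
Arduino hasn't detailed App Lab's programming interface here, but as a rough illustration of the kind of fully offline edge-vision workload a board like this targets, here's a minimal Python sketch using generic open-source tools (OpenCV and ONNX Runtime). The "model.onnx" file, input size and normalization are assumptions for illustration; this is not Arduino's actual tooling.

```python
# Minimal sketch of an on-device (no cloud) vision pipeline. Assumes a local
# ONNX classification model at "model.onnx" and a USB camera; preprocessing
# details vary by model.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")  # hypothetical local model file
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)  # USB camera, as supported by the board
ret, frame = cap.read()
if ret:
    # Resize and normalize the frame to the model's (assumed) input shape
    blob = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    blob = np.transpose(blob, (2, 0, 1))[np.newaxis, ...]  # HWC -> NCHW
    scores = session.run(None, {input_name: blob})[0]
    print("Top class index:", int(np.argmax(scores)))
cap.release()
```

Everything in this loop runs locally, which is the point of pairing an NPU-equipped application processor with a real-time MCU for the motor-control side.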

"With Ventuno Q, AI can finally move from the cloud into the physical world," Qualcomm wrote. "This platform enables building machines that perceive, decide, and act — all on a single board. Our goal is to make advanced robotics and edge AI accessible to every developer, educator, and innovator." The Arduino Ventuno Q will be available in Q2 2026 from the Arduino Store and elsewhere and is expected to cost under $300. 

This article originally appeared on Engadget at https://www.engadget.com/ai/qualcomms-new-arduino-ventuno-q-is-an-ai-focused-computer-designed-for-robotics-113047697.html?src=rss

UK government delays AI copyright rules amid artist outcry

The UK government is working on a controversial data bill that would allow AI companies like Google and OpenAI to train their models on copyrighted materials without consent. However, following a two-month consultation, it looks like passage of the law will be delayed. "Copyright is going to be kicked down the road," a person with knowledge of the matter told The Financial Times.

Responses by stakeholders during the consultation period weren't favorable to any of the government's proposed ideas for use of copyrighted materials, the FT's sources said. There's no expectation now that an AI bill will be part of the King's Speech set for May this year. 

As a result, Ministers have decided to go back to the drawing board and spend more time exploring other options. The House of Lords Communications and Digital Committee called on the government to develop a licensing-first regime "underpinned by robust transparency that safeguards creators' livelihoods while supporting sustainable AI growth."

The UK government's preferred position on the bill (also argued by tech giants like Google) has been that copyright holders need to formally opt out if they don't want their materials used to train AI models. However, publishers, filmmakers, musicians and others have said that this would be impractical and an existential threat to the UK's creative industries.

The House of Lords took the side of artists and introduced an amendment that would require tech companies to disclose which copyright-protected works were used to train AI models. That addition, however, was blocked by the UK's House of Commons in May last year.

The UK's majority Labour government, already under fire for its handling of the economy, has taken hits from publishers, musicians, authors and other creative groups over the proposed law. Elton John called the government "absolute losers," while Paul McCartney said that AI has its uses but "it shouldn't rip creative people off." McCartney and other artists were part of a "silent album" meant to show the impact of IP theft by AI.

Baroness Beeban Kidron from the House of Lords has also ripped the government over the AI bill. "Creators do not deny the creative and economic value of AI, but we do deny the assertion that we should have to build AI for free with our work, and then rent it back from those who stole it," she said last year. "It's astonishing that a Labour government would abandon the labor force of an entire section."

This article originally appeared on Engadget at https://www.engadget.com/ai/uk-government-delays-ai-copyright-rules-amid-artist-outcry-113937154.html?src=rss

Apple Music can now flag AI content, but only if distributors elect to label it

While music streaming apps like Bandcamp, Spotify and Deezer have taken steps to inform users about AI-generated content, we haven't heard much from Apple Music in that regard. However, Apple Music has now introduced "Transparency Tags" designed to show listeners if any elements were generated in whole or part by AI. The catch is that Apple is leaving it up to labels and distributors to create those tags, according to an Apple newsletter to industry partners seen by Music Business Worldwide.

"Proper tagging of content is the first step in giving the music industry the data and tools needed to develop thoughtful policies around AI, and we believe labels and distributors must take an active role in reporting when the content they deliver is created using AI," Apple wrote, calling it a concrete first step toward transparency around artificial intelligence.

Streaming platforms already use metadata tags for things like song and album titles, genre and the name of the artist. The new tags will now identify any artwork, tracks, compositions and music videos created in whole or in part by AI. 
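
Apple hasn't published a technical schema for these tags, but as a purely illustrative sketch of how per-track delivery metadata could carry such flags, here is a small Python example. The field names and values below are invented for illustration and do not reflect Apple's actual specification.

```python
# Hypothetical per-track delivery metadata extended with AI transparency
# flags, as a label or distributor might supply it. Field names are invented
# for illustration only.
track_metadata = {
    "title": "Example Song",
    "artist": "Example Artist",
    "genre": "Electronic",
    "ai_generated": {          # flags set by the label or distributor
        "audio": "partial",    # e.g. "none", "partial" or "full"
        "composition": "none",
        "artwork": "full",
        "music_video": "none",
    },
}

# A client could then surface a label whenever any element is AI-assisted.
if any(v != "none" for v in track_metadata["ai_generated"].values()):
    print("Some elements of this track were created using AI.")
```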

However, Apple's new system requires labels and distributors to opt in and manually flag their use of AI, a system that's similar to what Spotify is doing. On top of that, Apple has no apparent enforcement mechanism for AI content. 

By contrast, other music platforms including Deezer and Bandcamp are using in-house AI-detection tools to flag content whether the distributor opts in or not. Deezer disclosed in January 2026 that it receives over 60,000 fully AI-generated tracks every day, double the number it saw in September 2025. Synthetic content, also called "AI slop," has accounted for 13.4 million tracks on its platform, Deezer added.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/apple-music-can-now-flag-ai-content-but-only-if-distributors-elect-to-label-it-121521873.html?src=rss

Apple’s new Studio Display XDR monitor has limited functionality on older Silicon Macs

If you're looking to pre-order Apple's new Studio Display XDR monitor today but have an older Mac, beware of some potential issues. According to the compatibility list spotted by AppleInsider, the new display will only work at 60Hz, not its full 120Hz refresh rate, on some older and less powerful Apple Silicon models. Moreover, support for older Intel Macs isn't mentioned at all for either the Studio Display XDR or the cheaper Studio Display.

All Apple Silicon Macs will work with both monitors, including those with the oldest M1 chips, according to the support pages. However, the compatibility list for the Studio Display XDR includes this nugget: "Mac models with M1, M1 Pro, M1 Max, M1 Ultra, M2, and M3 support Studio Display XDR at up to 60Hz. All other Studio Display XDR features are supported." So even if you have a hotrod M1 Ultra-based Mac, the Studio Display XDR's refresh rate is capped at 60Hz — despite the fact that the chip can drive third-party monitors at 120Hz. 

Similarly, only the iPad Pro M5 supports the Studio Display XDR at 120Hz, with all other compatible models (in the iPad Pro and iPad Air family) limited to 60Hz. 

Intel Mac support isn't mentioned at all in the compatibility list for either display, though they may function in some limited manner when connected. Intel Macs just received their last new OS update with macOS Tahoe (and only three more years of security updates), but it's still surprising that they're not compatible with Apple's latest monitors. 

This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/apples-new-studio-display-xdr-monitor-has-limited-functionality-on-older-silicon-macs-082212069.html?src=rss

Apple reveals its new 5K mini-LED Studio Display XDR

Apple continues its gradual unveiling of new products this week with the launch of a new Studio Display and an all-new 27-inch Studio Display XDR. The latter is a higher-end model aimed at content creators, with a 27-inch 5K Retina XDR panel featuring mini-LED backlighting with 2,000-plus dimming zones, up to 2,000 nits of peak HDR brightness and a wider color gamut for improved accuracy. It looks like a replacement for the expensive, nearly seven-year-old 32-inch 6K Pro Display XDR, which is no longer for sale on Apple's website.

The Studio Display XDR also has a 120Hz refresh rate, addressing complaints about the relatively anemic 60Hz refresh rate of previous models. At the same time, it comes standard with a new tilt- and height-adjustable stand, with a height range of 105mm.

Apple calls the Studio Display XDR the "world's best pro display" for things like HDR video editing and medical displays. Brightness levels are certainly outstanding at 1,000 nits SDR and 2,000 nits HDR, and the 1,000,000:1 contrast ratio and 80 percent Rec.2020 coverage are also top-notch. The new model should even be fine for some light gaming thanks to the 120Hz refresh rate and Adaptive Sync support, though many buyers may want a 32-inch or larger display like the now-discontinued Pro Display XDR.

Other features include a 12MP Center Stage camera with Desk View support and Thunderbolt 5 connectivity with a second port for downstream high-speed accessories or additional daisy-chained displays. It can also act as a Thunderbolt hub, while offering up to 140W of charging power through the included Thunderbolt 5 Pro cable, enough to fast-charge a 16-inch MacBook Pro. 

Along with the Display XDR, Apple also announced a new version of the standard Studio Display. As before, it comes with a 27-inch 5K Retina display with up to 600 nits of brightness and P3 wide color, with either standard or optional nano-texture glass (a $300 option). However, it now includes an improved 12MP Center Stage camera along with Desk View to show your face and an overhead view of your desk at the same time. You also get a studio-quality three-microphone array and a six-speaker sound system with Spatial Audio.

That display now supports Thunderbolt 5 connectivity as well, providing higher-speed connections for accessories and the ability to daisy-chain displays. However, max charging power on this model is limited to 96W, still enough to fast-charge a 14-inch MacBook Pro. The Studio Display comes standard with a tilt-adjustable stand, but you can get it with a tilt- and height-adjustable stand for $400 more, as before.

The Studio Display XDR will be available tomorrow for pre-order starting at $3,299, while the new Studio Display also goes on pre-order on March 4 starting at $1,599 without the nano-texture glass or height-adjustable stand.

This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/apple-reveals-its-new-5k-mini-led-studio-display-xdr-141515587.html?src=rss

Meta’s AI display glasses reportedly share intimate videos with human moderators

Users of Meta's AI smart glasses in Europe may be unknowingly sharing intimate video and sensitive financial information with moderators outside of the bloc, according to a report from Sweden's Svenska Dagbladet released last week. Employees in Kenya doing AI "annotation" told the journalists that they've seen people nude, using the toilet and engaging in sexual activity, along with credit card numbers and other sensitive information. 

With Meta's Ray-Ban Display and other glasses with AI capabilities, users can record what they're looking at or get answers to questions via a Meta AI assistant. If a wearer wants to make use of that AI, though, they must agree to Meta's terms of service that allow any data captured to be reviewed by humans. That's because Meta's large language models (LLMs) often require people to annotate visual data so that it can be understood and used to train the models.

This data can end up in places like Nairobi, Kenya, often moderated by underpaid workers. Such actions are subject to Europe's GDPR rules that require transparency about how personal data is processed, according to a data protection lawyer cited in the report. 

However, Svenska Dagbladet's reporters said they needed to jump through some hoops to see Meta's privacy policy for its wearable products. That policy states that either humans or automated systems may review sensitive data, and puts the onus on the user to not share sensitive information.

Meta declined to comment directly on the story, and simply said that "when live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy." To find out more, check out Svenska Dagbladet's detailed reporting on the subject. 

This article originally appeared on Engadget at https://www.engadget.com/ai/metas-ai-display-glasses-reportedly-share-intimate-videos-with-human-moderators-135939855.html?src=rss

Google Home’s latest feature is Gemini-powered ‘Live Search’ for cameras

Google Home has some significant new quality-of-life updates and a new AI-powered feature, the division's head honcho Anish Kattukaran announced on X. Many of them, including a function called "Live Search," are powered by the company's Gemini for Home service launched in October 2025 as the official replacement for Google Assistant on smart devices.

"We launched Gemini for Home in Early Access specifically to learn from real-world usage," Katturkaran said. "With millions of you now testing and shaping this experience every day, we're pushing regular voice improvements to address your feedback."

The Live Search feature does just what it says, letting you query Gemini about the current state of your home based on what the cameras see. For instance, you can ask things like "Hey Google, is there a car in the driveway?" However, the feature is only available to Google Home Premium Advanced subscribers, who pay a $20 per month ($200 per year) fee.

Gemini for Home now uses updated models to improve the quality and accuracy of answers, and it will more reliably play newly released songs. Other key updates include better targeting of smart home devices by room, house and device, reduced instances of cutting off a speaker prematurely, better reliability for user-created automations by voice and more. To see all those changes, check out Google Home's latest changelog.

Finally, Google Home announced "enhanced support" for the Nest x Yale lock, including comprehensive passcode management (including for guests), a more robust activity history, real-time notifications for lock events and enhanced lock settings like single-touch locking.

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/googles-homes-latest-feature-is-gemini-powered-live-search-for-cameras-112216551.html?src=rss