The Super Mario Galaxy Movie is nearly upon us, as the hotly-anticipated sequel arrives in theaters on April 1. Nintendo recently dropped the final trailer for the film, which is filled with quick visual gags and nods to the source material.
There aren't too many actual reveals in this footage, as it covers a lot of the same ground as previous trailers. However, it does show that fan favorite Lumalee is returning as a prison guard of some sort, reversing the storyline from the original film in which the cheerfully nihilistic creature was trapped in a cage.
Nintendo also released a larger presentation that featured the aforementioned trailer, along with interviews with actors and franchise creator Shigeru Miyamoto. That video did deliver some actual news.
It was revealed that the long-tongued dinosaur Yoshi will be voiced by Donald Glover. So it's likely the dino will be saying a lot more than "Yoshi" over and over. Actor Luis Guzman will also be playing Wart, the primary antagonist from Super Mario Bros. 2. Issa Rae will be on hand to voice Honey Queen, the gigantic bee character from the Super Mario Galaxy games.
It was even confirmed by lead actors Chris Pratt and Charlie Day that Luigi would be on hand for the entire adventure this time, and not confined to a cage-based subplot. I didn't realize Luigi's role in the first film was enough of a controversy to warrant this kind of mention, but here we are.
Illumination CEO Chris Meledandri also appeared in the video, assuring viewers that there are still "some big surprises" waiting in the actual film. To that end, there's been a rumor floating around that Fox McCloud from the Starfox franchise would be showing up. Is this the start of a Nintendo cinematic universe that will culminate in 10 years with a Super Smash Bros. movie? Stranger things have happened.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/tv-movies/heres-the-final-trailer-for-the-super-mario-galaxy-movie-181819593.html?src=rss
OpenAI is rolling out new interactive responses in ChatGPT it says are designed to make the chatbot more useful for learners. Starting today, ChatGPT will generate dynamic visuals when you ask it to explain select scientific and mathematical concepts, including the Pythagorean theorem, Coulomb's law and lens equations. When ChatGPT responds with an interactive visual, you'll be able to tweak any variables and the equation itself, allowing you to see how those changes affect the solution.
With today's release, OpenAI says ChatGPT will respond with interactive visuals when asked about more than 70 concepts, with support for additional topics to come down the line. The visuals are available to all ChatGPT users, regardless of subscription status. However, OpenAI notes high school- and college-aged students are likely to get the most out of the new feature.
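To give a sense of the kind of relationship these visuals let students poke at, here's a minimal Python sketch of the thin lens equation, one of the concepts OpenAI names. This is purely illustrative and has nothing to do with OpenAI's actual implementation; it just shows how changing one variable shifts the solution, which is what the interactive responses let you do visually.

```python
# Thin lens equation: 1/f = 1/d_o + 1/d_i
# Solving for the image distance d_i given a focal length and object distance.
# Hypothetical illustration only — not OpenAI code.

def image_distance(focal_length: float, object_distance: float) -> float:
    """Solve the thin lens equation for the image distance d_i."""
    return 1 / (1 / focal_length - 1 / object_distance)

# Moving the object closer to a 10 cm lens pushes the image farther away:
for d_o in (50.0, 30.0, 15.0):
    print(f"d_o = {d_o:.0f} cm -> d_i = {image_distance(10.0, d_o):.1f} cm")
```

In ChatGPT's interactive responses, dragging a slider for the object distance would update the image distance (and the accompanying diagram) in real time rather than requiring you to re-run a calculation.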
ChatGPT explains Ohm's law.
OpenAI
The more interactive responses from ChatGPT follow the release of Study Mode last summer. Released in response to the sheer number of students using chatbots to complete their coursework, that feature guides the user toward finding an answer themselves, rather than providing an outright solution. "This is just the beginning," OpenAI says of its latest feature. "Over time, we plan to expand interactive learning with additional subjects and continue building tools that strengthen learning with ChatGPT."
This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpt-will-now-generate-interactive-visuals-to-help-you-with-math-and-science-concepts-170000520.html?src=rss
Nintendo's next platform adventure, Yoshi and the Mysterious Book, will be released for Switch 2 on May 21. The company announced the release date as part of its annual Mar10 Day celebration. This is a made-up holiday that exists because the date spelled out like that sort of looks like the word Mario.
In any event, there's a new trailer for the perpetually hungry dinosaur's latest adventure. It looks super cute. It sort of resembles a children's picture book come to life. Yoshi games typically boast unique graphical styles, with past entries featuring entire worlds made of yarn, cardboard and more. Even the very first Yoshi platformer, Super Mario World 2: Yoshi's Island, featured a kind of hand-drawn aesthetic.
The gameplay looks to be somewhat unique, with a reduced emphasis on chucking eggs. Many of the game's creatures grant Yoshi special abilities when they hop on the dino for a ride. This reminds me of another Nintendo-branded glutton, Kirby.
Today's trailer also shows Yoshi gobbling up an enemy and encountering a foul and bitter taste, giving the little cutie a momentary stomach ache. I guess Yoshi's palate has become more refined since the last game.
This has already been a big week for the anthropomorphic dinosaur. Nintendo recently dropped another trailer for The Super Mario Galaxy Movie and it was revealed that Donald Glover will be voicing Yoshi. That film hits theaters on April 1, which is just a few weeks away.
This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/yoshi-and-the-mysterious-book-will-be-released-for-switch-2-on-may-21-164753150.html?src=rss
Google is rolling out Gemini AI agents to the Department of Defense's more than 3 million civilian and military employees, according to Bloomberg. The agents will initially operate on unclassified networks, with talks underway to expand them to classified and top-secret systems, according to Emil Michael, the Under Secretary of Defense for Research and Engineering.
Eight pre-built agents will automate tasks like summarizing meeting notes, building budgets and checking proposed actions against the national defense strategy. Google Vice President Jim Kelly said in a blog post on Tuesday that Defense Department personnel can also create custom agents using natural language.
Google's AI chatbot, accessible through the Pentagon's GenAI.mil portal, has been used by 1.2 million Defense Department employees for unclassified work since December, with personnel running 40 million unique prompts and uploading more than 4 million documents. Training has reportedly not kept pace with adoption: only 26,000 people have completed AI training since December, though future sessions are fully booked, which suggests more employees are getting on board.
The expansion comes as the Pentagon rapidly broadens its AI partnerships after its standoff with Anthropic, which refused to remove guardrails against domestic surveillance and autonomous weapons from its technology. The Pentagon has since classified the American AI company as a "supply chain risk," which Anthropic will fight in court. Roughly 900 Google and 100 OpenAI employees have since signed an open letter urging their employers to hold firm on the same guardrails. Google quietly altered its "AI Principles" regarding these exact uses in early February.
The Department of Defense has since struck deals with OpenAI and xAI for restricted networks. Google itself faced internal backlash over Pentagon work in 2018 when thousands of employees protested Project Maven, a program that used AI to analyze drone video feeds. It did not renew that contract but has since loosened its restrictions on military work.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-to-provide-pentagon-with-gemini-powered-ai-agents-161037444.html?src=rss
NVIDIA is reportedly working on its own open-source AI agent platform, according to Wired. The chipmaker has been pitching the product to enterprise software companies. Reporting indicates it's going to be called NemoClaw, suggesting that the entire industry is going to embrace this whole "claw" naming convention moving forward.
Just like OpenClaw, this will be a platform in which users dispatch AI agents to perform a variety of tasks. However, NVIDIA's effort looks to have an enterprise focus for now. To that end, reporting indicates that companies will be able to access this platform even if their products don't run on NVIDIA chips.
NVIDIA is currently preparing for its annual developer conference next week and Wired has suggested that the company has already reached out to entities like Salesforce, Cisco and Google to strike partnerships for its platform. It's not clear if these discussions have led to anything official, as none of these companies have provided statements.
This could be a steep climb for NVIDIA, as usage of these multi-purpose agents in the enterprise space is relatively controversial. Some tech companies have asked employees to refrain from using OpenClaw and related tools on their work computers, as the agents can be unpredictable and cause all manner of mayhem. A Meta employee recently shared a story about an AI agent going rogue and mass deleting emails.
This poses a serious security risk to enterprise customers. It's one thing if the claw is trapped on a personal computer, but another thing if it has access to an entire enterprise network. NVIDIA is reportedly beefing up NemoClaw with additional layers of security for AI agents, which is likely an effort to attract those business customers.
Why is this a big deal? Unlike traditional chatbots that typically require hand-holding from the user every step of the way, claws are designed to run autonomously on computers and perform complex, multi-pronged tasks without too much human supervision.
This all started with software originally called Clawdbot, which is now called OpenClaw. The creator of OpenClaw, Peter Steinberger, recently joined OpenAI to help "drive the next generation of personal agents."
This article originally appeared on Engadget at https://www.engadget.com/ai/nvidia-is-reportedly-working-on-its-own-open-source-ai-agent-platform-153203397.html?src=rss
NVIDIA's GeForce Now game streaming platform has added a few minor but useful updates, especially for GOG and VR headset users, the company announced at the Game Developers Conference (GDC). The biggest technical improvement is for virtual reality headsets that support GeForce Now, like the Apple Vision Pro and Meta Quest. Starting next week (March 19), those devices will be able to stream at 90 fps for Ultimate members (up from 60 fps) for improved smoothness, responsiveness and realism.
Another helpful update is in-app labels coming "soon" to GeForce Now. Once you connect an Xbox or Ubisoft account, you'll see clear labels directly on game art inside the GeForce Now app showing exactly what's available to play from your subscription services. NVIDIA is also expanding account linking, adding GOG to the roster of services on top of the Gaijin single sign-on announced at CES.
GeForce Now is also expanding its Install-to-Play library with select Xbox titles including Brutal Legend from Double Fine Productions and Compulsion Games' Contrast. The service will also see several anticipated games directly on the cloud service at launch, namely Remedy's Control Resonant and Samson: A Tyndalston Story from Liquid Swords.
As a reminder, NVIDIA's GeForce Now is one of the better cloud gaming services out there, particularly since it added GeForce RTX 5080-powered servers that Engadget's Devindra Hardawar called "indistinguishable from a powerful rig." The service recently came to Fire TV sticks and is available on Windows and Mac PCs, NVIDIA's Shield, Android TV, smartphones and many other devices.
This article originally appeared on Engadget at https://www.engadget.com/gaming/geforce-now-adds-gog-syncing-and-90fps-game-streaming-in-vr-headsets-130656731.html?src=rss
Meta is snapping up Moltbook, a Reddit-like social network for AI agents that has been around since January and remains completely ridiculous. The company hasn't disclosed the terms of the deal.
Moltbook and its creators Matt Schlicht and Ben Parr will be joining Meta Superintelligence Labs (MSL) when the deal closes. That's expected to happen in the coming days, according to Axios.
“The Moltbook team joining MSL opens up new ways for AI agents to work for people and businesses,” a Meta spokesperson told TechCrunch. “Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space, and we look forward to working together to bring innovative, secure agentic experiences to everyone.”
It seems current Moltbook users will be able to continue interacting with the platform for the time being. Moltbook was built on the back of OpenClaw, a tool that enables people to whip up AI agents that can interact with dozens of different apps. (OpenAI hired the creator of OpenClaw last month.)
Schlicht used OpenClaw to create a bot named “Clawd Clawderberg” and asked it to create a social network for AI agents. And that's how Moltbook came to be.
For what it's worth, Clawd Clawderberg is a play on "Mark Zuckerberg" and Moltbook is a clear riff on "Facebook," so it’s somewhat fitting that Schlicht vibe-coded his way to a job at Meta. It also emerged that it was relatively easy for humans to pose as AI agents and post on Moltbook. Again, all of this is deeply, deeply absurd.
This article originally appeared on Engadget at https://www.engadget.com/ai/meta-is-buying-moltbook-the-ridiculous-social-network-populated-by-ai-bots-152732453.html?src=rss
Google is rolling out a batch of Gemini updates across its Workspace apps that give the AI assistant the ability to generate first drafts in Docs, build entire spreadsheets in Sheets, design presentations in Slides and answer questions about files stored in Drive. The features started rolling out on March 10 in beta for Google AI Ultra and Pro subscribers and Gemini Alpha business customers, in English only.
In Docs, a new "Help me create" tool produces a formatted first draft by pulling context from Drive, Gmail, Chat and the web based on a user's prompt. Gemini can also match the writing style or formatting of a reference document. Google says more than a third of new Docs are created from copies of existing files, so the formatting tool is meant to cut down on that manual work. In Sheets, Gemini can now construct an entire spreadsheet from a natural language prompt, drawing data from a user's files and emails, as well as Google Chat and the web.
A "Fill with Gemini" feature auto-populates table cells, which Google says is nine times faster than manual entry based on a 95-person study (this sounds profoundly unscientific, so take these claims with a grain of salt). Sheets also gained optimization tools powered by Google DeepMind and Google Research that can solve problems like employee scheduling through written prompts. In Slides, Gemini can generate individual slides that match an existing deck's theme, with full presentation generation from a single prompt coming later.
Google Drive is getting AI Overviews in search results, similar to a feature the company recently added to Gmail, along with a new "Ask Gemini" tool that lets users query their files, emails and calendar. The Drive features will be released first only for customers in the US, unlike the rest of these updates.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-brings-gemini-powered-content-creation-tools-to-docs-sheets-slides-and-drive-144705622.html?src=rss
It takes around 30 hours to experience everything Resident Evil Requiem has to offer. If you've already enjoyed all the thrills and spills and you're itching for more, there's some positive news. Capcom has some updates on the way. The biggest of those is a story expansion, which is now in development. Just don't expect it to arrive imminently.
"In this story, we will delve deeper into the world of Requiem," game director Koshi Nakanishi said in a short video message. "We’re hard at work on it now. It will take some time, so we ask for your patience and hope you’ll look forward to it."
Nakanishi noted that on top of the story expansion and fixing bugs and performance issues, the development team is cooking up some other features. A photo mode is on the way to help you capture all the horrors that Grace and Leon encounter. There's also a "surprise coming around May," Nakanishi said. "We’re planning to add a mini-game."
This article originally appeared on Engadget at https://www.engadget.com/gaming/a-resident-evil-requiem-story-expansion-is-in-the-works-140512827.html?src=rss
Sonos has just announced its first new products since 2024, when the company’s plans went sideways after a disastrous update to its app. First up is the Sonos Play, the company’s latest portable speaker. Long-time Sonos watchers will recognize the name from the old Play:1, Play:3 and Play:5 speakers, but this new model has little to do with those products of the past. The $299 Play is a Wi-Fi and Bluetooth speaker that sits between the $179 Roam 2 and $499 Move 2 and could be the “Goldilocks” speaker in the company’s portable lineup, at least based on what I know so far.
The closest comparison for the Play is the excellent Era 100, which Sonos released back in 2023. At 7.6” tall, 4.4” wide and 3” deep, it’s much thinner than the Era 100, which is over 5 inches deep. And compared to the Move 2 (9.5” x 6.3” x 5”) it’s much more portable. That goes for weight, too — the Play is less than 3 pounds, compared to over 6.5 pounds for the Move 2. It’s not the kind of speaker you’ll throw in your bag and forget about, like the tiny Roam 2, but it’s far more portable than the Move 2. Finally, the Play is IP67 rated, just like the Roam 2. That means it can be submerged in up to a meter of water for up to 30 minutes; it’s also dustproof.
The grab handle on the back of the Sonos Play.
Sonos
From a speaker component perspective, it’s again quite similar to the Era 100. It has two tweeters positioned at a 90-degree angle for stereo separation paired with one midwoofer; it also has two additional passive radiators to increase the bass response in its relatively small case. The Era 100 lacks those passive radiators but is otherwise identical. Obviously, we’ll have to listen to the Play before saying how closely it compares to the Era 100, but this speaker should significantly outperform the Roam 2 simply due to the increased size of its components. The Move 2, on the other hand, is extremely loud and will likely still be the best choice for people who want a speaker to cover a large outdoor space.
You’ll find familiar controls on the Sonos Play, which comes in black or white. (Fingers crossed for future color options like the lovely trio that Sonos offers on the Roam.) On the top surface are buttons for play/pause, volume up and down and a microphone toggle. On the back is a power button, a Bluetooth button and a physical switch that disconnects the microphone for increased security. Finally, there’s a new feature here: a removable plastic grab loop.
Sonos was keen to note that the Play is a full-featured member of the Sonos ecosystem. That means you’ll see it alongside all your other Sonos speakers in the app and can group them as you see fit, or have different music playing on different speakers throughout the house. You can also pair two of these in stereo, though if you remove one from your network (say you’re outside and away from Wi-Fi), you’ll need to re-pair them. In addition to controlling playback via the Sonos app (which, in my testing, is functioning fine and has recovered from the 2024 debacle), you can stream music via AirPlay 2 or Spotify Connect. The Sonos Voice Assistant as well as Amazon Alexa are also on board here for anyone who likes to shout at their speakers.
The Sonos Play on its wireless charging base.
Sonos
There’s a new trick here for both the Play and Move 2, as well. For the first time, you can group Sonos speakers together through Bluetooth. After pairing a Play to your phone via Bluetooth, you can press and hold the play/pause button on three more Play or Move 2 speakers to add them to the group. If you want to cover a larger outdoor space with multiple speakers, this sounds like a pretty handy way to do so.
The Play also has line-in via its USB-C port, and you can use it for Ethernet as well; both features require a separate adapter. You can even use the USB-C port to top up your phone if you’re so inclined. And while you can also charge via the USB-C port, the Play comes with a wireless charging dock which makes for a nice home base for the speaker’s primary location. Annoyingly, Sonos did not include a charger, so you’ll need to provide your own USB-C brick.
A pair of Sonos Era 100 SL speakers with a turntable.
Sonos
Sonos is also adding a second, much simpler speaker to its lineup today: the Era 100 SL. Like the One SL before it, the Era 100 SL is identical to the Era 100 with one key difference: there are no microphones on it at all. As such, the Era 100 SL is also a bit cheaper, coming in at $189 compared to $219 for the standard model.
Otherwise, there are no differences in acoustic architecture or feature set here. As its most affordable speaker besides the portable Roam 2, Sonos is positioning the Era 100 SL as the ideal entry point into its products. I can’t really argue with that, as the Era 100 still sounds outstanding and is also quite flexible with features like line-in and Bluetooth as well as all the standard streaming options. Both versions of the Era 100 are compatible with each other, too — so if you get an SL and then decide you want a stereo pair, a standard Era 100 with a mic will work there and bring voice control to your system as well.
Both the $299 Play and $189 Era 100 SL are up for pre-order now, and Sonos says they’ll be shipping on March 31.
This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/the-sonos-play-puts-the-best-parts-of-the-era-100-in-a-portable-speaker-133000129.html?src=rss