The next Metal Gear Solid remaster collection arrives this summer

Volume two of the Metal Gear Solid: Master Collection will arrive on August 27, publisher Konami announced today during Sony’s latest State of Play presentation. The bundle will feature 2008’s Metal Gear Solid 4: Guns of the Patriots, the HD remaster of 2010’s Metal Gear Solid: Peace Walker and a selection of bonus content, including Metal Gear: Ghost Babel, which was originally released for Game Boy Color in 2000. All told, that’s a smaller selection of games than Konami made available with Vol. 1 of the Master Collection, but Metal Gear fans will be excited nonetheless, if only because it will mark the first time MGS4 is officially playable on a platform other than the PlayStation 3.

That it has taken Konami nearly two decades to release the conclusion of Solid Snake’s story on more systems has to do with the nature of the game as a PS3 exclusive. MGS4 took extensive advantage of the console’s unique Cell architecture, a fact that made it a difficult (and expensive) proposition to port to more recent x86-based systems. In recent years, it’s been possible to emulate the game on a powerful PC, but not everyone has that kind of hardware.

Metal Gear Solid: Master Collection Vol.2 will be available on PS5, Xbox Series X/S, PC, Nintendo Switch and Nintendo Switch 2.

Update, February 12, 6:30PM ET: This story was updated after publication to add details about Metal Gear Solid: Master Collection Vol.2’s launch platforms.


How to buy a GPU in 2026

One of the toughest parts of any new computer build or upgrade is finding the right video card. In a gaming PC, the GPU is easily the most important part, and you can limit your experience by going with the wrong model. The buying process can be frustrating, especially right now with memory shortages leading to higher prices. In this guide, we'll help you navigate the market and find the right GPU for your needs.

The first question to ask yourself is what kind of games you want to play. Competitive shooters like Valorant, Overwatch and Marvel Rivals were designed to run on older hardware. As such, even entry-level GPUs like the GeForce RTX 5060 can push those games at 120 frames per second and above at 1080p (more on why that's important in a moment).

By contrast, if you want to play modern, single-player games with ray tracing and other graphical extras, you'll need a more powerful GPU. Just how much more powerful will depend on the resolution of your monitor.

A 1440p or QHD monitor has 78 percent more pixels than a 1080p screen, and a 4K display has more than twice as many pixels as a QHD panel. In short, running a game at 4K, especially at anything above 60 frames per second, is demanding, and most GPUs will need to use upscaling techniques like NVIDIA's Deep Learning Super Sampling (DLSS) and AMD's FidelityFX Super Resolution (FSR) to push new games at high refresh rates.
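If you want to sanity-check that pixel math yourself, it only takes a few lines. Here's a quick Python sketch (the resolution figures are the standard ones; nothing here is specific to any particular GPU):

    # Back-of-the-envelope pixel math for common gaming resolutions.
    resolutions = {
        "1080p": (1920, 1080),
        "1440p/QHD": (2560, 1440),
        "4K/UHD": (3840, 2160),
    }

    pixels = {name: w * h for name, (w, h) in resolutions.items()}

    print(pixels["1440p/QHD"] / pixels["1080p"])   # ~1.78, i.e. 78 percent more pixels
    print(pixels["4K/UHD"] / pixels["1440p/QHD"])  # 2.25, i.e. more than twice QHD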

On the subject of resolution, it doesn't make sense to spend a lot of money on a 4K monitor only to pair it with an inexpensive GPU. That's a recipe for a bad experience. As you're shopping for a new video card, you should think about the resolution and frame rate you want to play your games. If you're in the market for both a GPU and display, be sure to check out our guide to the best gaming monitors.

If your budget allows, a good bet is to buy a midrange card that can comfortably render all but the most demanding games at 1440p and at least 144 frames per second. Put another way, you want a GPU that can saturate a monitor at its native resolution and refresh rate in as many games as possible. That will give you the smoothest possible experience in terms of motion clarity, and allow you to dabble in both competitive shooters and the latest single-player games as the mood strikes you.

Intel Arc B580 (Photo by Devindra Hardawar/Engadget)

One of the confusing aspects of the GPU industry is the number of companies involved. What you need to know is that there are three main players: AMD, Intel and NVIDIA. They design the cards you can buy, but delegate the manufacturing of them to so-called add-in board (AIB) partners like ASUS, XFX, Gigabyte and others.

As you can imagine, this creates some headaches, the most annoying of which is that AMD, Intel and NVIDIA will often set recommended prices for their graphics cards, only for their partners to sell their versions of those GPUs for more than the manufacturer's suggested retail price (MSRP). For example, NVIDIA's website lists the RTX 5070 with a starting price of $549. On Newegg, there are no new 5070s listed at that price. The only models anywhere close to $549 are refurbished and open box specials. If you want one that comes sealed, that will cost you at least $630.

As for what company you should buy your new GPU from, before 2025, NVIDIA was the undisputed king of the market. Specific GeForce cards may not have offered the best rasterization performance in their price range, but between their performance in games with ray tracing and the fact that NVIDIA was ahead on features like DLSS, an RTX GPU was a safe bet.

However, with this year's RTX 50 series release (and excluding models like the RTX 5080 and 5090 where there's no competition), it's safe to say NVIDIA missed the mark this generation. If you're in the market for an entry- or mid-level GPU, AMD and Intel offer better value, with cards that come with enough VRAM for now and into the future. That said, there are still a few reasons you might consider an NVIDIA GPU, starting with ray tracing.

For decades, developers have used rasterization techniques to approximate how light behaves in the real world, and the results have been commendable. But if you know what to look for, it's easy to see where the illusion falls apart. For that reason, real-time ray tracing has been a goal of the industry for years, and in 2018 it became a reality with NVIDIA's first RTX cards.

In some games, effects like ray-traced reflections and global illumination are transformational. Unfortunately, those features are expensive to run, often incurring a significant frame rate drop without upscaling. Since ray tracing was optional in many games before 2025, you could save money by buying an AMD GPU. For example, even if the RX 7800 XT was worse at ray tracing than the RTX 4070, the former was often cheaper to buy, had more onboard VRAM and offered as good or better rasterization performance in many games.

However, you can't ignore ray tracing performance anymore. We're starting to see releases like Doom: The Dark Ages where the tech is an integral part of a game's rendering pipeline, and more are likely to follow in the future. Thankfully, AMD's newest cards are much better in that regard, though you'll still get an edge running an NVIDIA model. For that reason, if ray tracing is important to you, NVIDIA cards are still the way to go.

If you're new to the world of PC gaming, it can be tricky to wrap your head around refresh rates. In short, the higher the refresh rate of a monitor, the more times it can update the image it displays on screen every second, thereby producing a smoother moving picture.

For example, moving elements on a monitor with a 240Hz refresh rate will look better than on one with a 120Hz refresh rate. However, that's dependent on your GPU being able to consistently render a game at the appropriate frame rates. In the case of a 120Hz monitor, you want a GPU with enough headroom to drive most games at 120 fps. Realistically, most video cards won't be able to achieve that in every game, but it's a good baseline to aim for when shopping for a new GPU.
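To make that headroom idea concrete, it helps to convert a refresh rate into a per-frame rendering budget. This is simple arithmetic, shown here as a minimal Python sketch:

    def frame_budget_ms(refresh_hz: int) -> float:
        """Time the GPU has to render each frame to keep a display fully fed."""
        return 1000.0 / refresh_hz

    for hz in (60, 120, 144, 240):
        print(f"{hz}Hz -> {frame_budget_ms(hz):.2f} ms per frame")
    # 60Hz -> 16.67, 120Hz -> 8.33, 144Hz -> 6.94, 240Hz -> 4.17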

Since the release of NVIDIA's RTX 40-series GPUs, the company has offered a feature called frame generation. As the name suggests, it allows NVIDIA's latest video cards to generate an additional frame for every frame they render normally. With the 50-series, NVIDIA has since begun offering multi-frame generation, which gives those GPUs the ability to generate up to three additional frames for every rendered frame. AMD has its own take on the tech, as does Intel, though NVIDIA's offering is considered superior to both due to how it handles frame pacing.

Frame generation is nice to have, but it's not the silver bullet it might seem. Enabling it will increase system latency, reducing how responsive your games feel. Somewhat unintuitively, high-end GPUs also benefit more from the tech than their entry-level counterparts since they can naturally render more frames. For that reason, it's best to think of frame generation as a way to get the most out of a high refresh rate display.
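The arithmetic behind frame generation is straightforward. Here's a rough Python sketch with a hypothetical base frame rate; the key point is that responsiveness still tracks the rendered frames, not the displayed ones:

    def displayed_fps(rendered_fps: float, generated_per_rendered: int) -> float:
        """Frames shown per second: each rendered frame plus N generated ones."""
        return rendered_fps * (1 + generated_per_rendered)

    base = 60  # hypothetical: a card natively rendering 60 fps
    print(displayed_fps(base, 1))  # 120 fps with standard frame generation
    print(displayed_fps(base, 3))  # 240 fps with 4x multi-frame generation
    # Input latency still follows the 60 rendered frames, not the 240 displayed.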

I've mentioned DLSS a few times already. Alongside FSR and Intel XeSS, DLSS is an example of what's known as an image reconstruction technology. More and more, native rendering is going out of fashion in game design. With ray tracing and other modern effects enabled, even the most powerful GPUs can struggle to render a game at 1440p or 4K and a playable framerate. That’s why many developers will turn to DLSS, FSR or XeSS to eke out additional performance by upscaling a lower resolution image to QHD or UHD.
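As a rough illustration, here's a Python sketch of what those upscalers do under the hood. The per-axis scale factors below are in the ballpark of what DLSS and FSR quality modes commonly use, but the exact values vary by version and game, so treat them as assumptions:

    # Approximate per-axis render scales for common upscaler quality modes.
    modes = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.50}

    def internal_resolution(out_w: int, out_h: int, scale: float) -> tuple[int, int]:
        """Resolution the GPU actually renders before reconstruction."""
        return round(out_w * scale), round(out_h * scale)

    for mode, s in modes.items():
        print(mode, internal_resolution(3840, 2160, s))
    # Quality (2561, 1441), Balanced (2227, 1253), Performance (1920, 1080)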

Upscaling in games is nothing new. For example, the PS4 Pro used a checkerboard technique to output games in 4K. What’s different now is how modern GPUs go about it. With DLSS, NVIDIA pioneered an approach that uses machine learning to recreate an image at a higher resolution, and in the process, addressed some of the pitfalls of past upscaling methods. If you're sensitive to these sorts of things, there's still blur and shimmer with DLSS, FSR and XeSS, but it's much less pronounced and can lead to significant performance gains.

To DLSS, NVIDIA later added single and multi-frame generation. DLSS is only available on NVIDIA cards and, following the recent release of DLSS 4.5, is widely considered to offer the best image quality. That's another reason why you might choose an NVIDIA card over one of its competitors.

However, if you decide to go with an AMD GPU, don't feel like you're missing out. The company recently released FSR 4. While it's not quite on par with DLSS 4 and 4.5 in terms of support and image quality, it's a major leap over FSR 3 and FSR 2.

While on the subject of DLSS, I'll also mention NVIDIA Reflex. It's a latency-reducing technology NVIDIA introduced in 2020. AMD has its own version called Radeon Anti-Lag, but here again Team Green has a slight edge. If you're serious about competitive games, Reflex can significantly reduce input lag, which will make it easier to nail your shots in Counter-Strike 2, Valorant and other shooters.

Previously, one of the reasons to pick an NVIDIA GPU over the competition was the company's solid track record of driver support. With one of the company's video cards, you were less likely to run into stability issues and games failing to launch. At the start of 2025, NVIDIA's drivers were abysmal, but the company has since corrected course.

As you're comparing different GPUs, especially those in the same tier, pay close attention to the amount of VRAM they offer. Many modern games will eat up as much VRAM as a GPU can offer, and if your card has a low amount, such as 8GB, you're likely to run into a performance bottleneck.

If your budget allows for it, always go for the model with more VRAM. Consider, for instance, the difference between the $379 RTX 5060 Ti 8GB and $429 RTX 5060 Ti 16GB. Spending an extra $50 is going to be a lot for some people, but it's the difference between a card that is only adequate for many recent releases and one that will last you for a few years. In many cases, more VRAM is better.

A slight caveat applies when comparing models with different memory bandwidths. A GPU that can access its memory faster can outperform a card that has more VRAM outright. Here, you'll want to read reviews of the models you're comparing to see how they perform in different games.
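Peak memory bandwidth itself comes from a simple formula: the per-pin data rate multiplied by the bus width, divided by eight to get bytes. A Python sketch with hypothetical cards (these numbers are illustrative, not the specs of any particular model):

    def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
        """Peak memory bandwidth in GB/s from data rate and bus width."""
        return data_rate_gbps * bus_width_bits / 8

    # More VRAM doesn't guarantee faster access to it.
    print(bandwidth_gbs(18, 128))  # hypothetical 16GB card, 128-bit bus: 288 GB/s
    print(bandwidth_gbs(21, 192))  # hypothetical 12GB card, 192-bit bus: 504 GB/s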

Modern GPUs are big. Most new cards will take up at least two PCI slots on the back of your motherboard. They can also vary dramatically in length, depending on the number of fans the AIB has added to cool the PCB. To be safe, be sure to check the length of the card you want to buy against the maximum clearance listed by your case manufacturer. If you have a radiator at the front of your case, you will also need to factor its size into your measurements. The last thing you want is to buy a card that doesn't fit in your case.

Lastly, be sure to check the recommended power supply for the card you want. As a rule of thumb, unless you know what you're doing, it's best to just stick with the manufacturer's recommendation. For instance, NVIDIA suggests pairing the RTX 5070 Ti with a 750 watt PSU. So if you're currently running a 650 watt unit, you'll need to factor in the price of a PSU upgrade with your new GPU.
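If you're curious how enthusiasts arrive at figures like that, a common rule of thumb is to add up the power draw of the major components and leave roughly 30 percent of headroom. The Python sketch below encodes that heuristic with hypothetical wattages; it isn't any manufacturer's method, and the official recommendation should still win:

    import math

    def rough_psu_estimate(gpu_watts: int, cpu_watts: int, overhead_watts: int = 150) -> int:
        """Estimated draw plus ~30 percent headroom, rounded up to a 50W tier."""
        load = gpu_watts + cpu_watts + overhead_watts
        return math.ceil(load * 1.3 / 50) * 50

    # Hypothetical build: 300W GPU, 125W CPU, plus drives, fans and RAM.
    print(rough_psu_estimate(300, 125))  # 750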

NVIDIA RTX 5060 Ti (Devindra Hardawar for Engadget)

Should you buy a used or last-gen GPU?

It depends. If you can find a deal on an old RTX 40 series GPU, then yes. NVIDIA's RTX 50 series cards don't offer greatly improved performance over their predecessors, and with most models selling for more than their suggested retail price, it's not the best time to buy a new NVIDIA card.

That said, I suspect finding a good deal on a used GPU will be difficult. Most people will know the value of what they have, and considering the current market, will probably try to get as much as they can for their old card.

You may find better deals on older AMD and Intel GPUs, but I think you're better off spending more now on a new model from one of those companies since the generational gains offered by their latest cards are much more impressive. Simply put, the 9070 XT and B580 are two of the best cards you can buy right now.

Anything older than a card from NVIDIA's 40 series or AMD's RX 6000 family is not worth considering. Unless your budget is extremely tight or you mostly play older games, you're much better off spending more to buy a new card that will last you longer.

If you've read up to this point, you're probably wondering if it's even worth buying a GPU right now. The answer is (unsurprisingly) complicated. There are a handful of great cards like the Radeon RX 9060 XT and 9070 that are absolutely worth it. The problem is finding any GPU at a price approaching those set by AMD, Intel or NVIDIA.

The AI boom, and in particular actions by OpenAI, has led to memory shortages. In turn, those shortages have caused the price of consumer GPUs, SSDs and RAM kits to skyrocket in recent months. As of our latest update to this guide, some models like the GeForce RTX 5070 Ti are selling for hundreds of dollars above MSRP.

As such, if you own a relatively recent GPU, you're probably best off trying to hold onto your current card until things settle down. But if your GPU isn't cutting it anymore, you face a difficult decision: overpay now, or wait and potentially pay even more later.

To make that decision easier, I've been maintaining a separate guide that lists a selection of GPU models you can buy close to MSRP. My goal is to update that article at least once per month, so be sure to check back often.

Entry-level (1080p)

As we mentioned above, if you're only aiming to play basic competitive shooters like Valorant and Overwatch 2 at 1080p, an entry-level GPU may be all you need. While 1080p isn't an ideal resolution when it comes to sharpness, many gamers prefer it since it's easier to reach higher framerates. It also helps that 1080p gaming monitors, like the AOC 24G15N 24-inch we recommend, tend to offer speedy refresh rates for between $100 and $200. When you're zipping through matches, you likely won't have time to take a breath and appreciate the detail from higher resolutions.

Here are our recommendations for entry-level video cards:

  • AMD Radeon RX 9060 XT 8GB: Surprisingly enough, you can actually find this AMD GPU for $300. While you'll have to live with 8GB of VRAM, that's more than enough for 1080p gaming, and the card also has the benefit of FSR 4 upscaling.

  • AMD Radeon RX 7600: While it's a last-gen card, the RX 7600 is still powerful enough to handle basic shooters.

Midrange (1440p)

While entry-level cards can dabble with 1440p gaming, it's worth stepping up to something a bit more powerful if you actually want to achieve higher refresh rates. For most gamers, 1440p is the best balance between sharpness and high frame rates. It looks noticeably better than 1080p, and doesn't require the horsepower overhead of 4K. (And there's a good chance you won't really see a visual difference with the jump to 4K.)

Here are our recommendations for midrange GPUs:

  • NVIDIA RTX 5060 Ti: Forget the disappointing RTX 5070: the 5060 Ti delivers excellent 1080p and 1440p performance. And best of all, you can still find it for under $500. (Read our NVIDIA RTX 5060 Ti review.)

  • AMD Radeon RX 9060 XT 16GB: A step up from the 8GB model we recommend above. The 16GB 9060 XT offers excellent performance across many of the latest games, and is less expensive than the 5060 Ti.

  • AMD Radeon RX 9070: AMD surprised us all with the Radeon RX 9070 and 9070 XT, two midrange cards that offered similar power to NVIDIA's more expensive cards, along with more VRAM. While you won't see the RX 9070 for its $550 launch price today, you can still snag one for a slight premium at $650. (Check out our AMD Radeon RX 9070 and 9070 XT review.)

High-end (4K)

If you want the most out of what modern PC games have to offer, including 4K and all of the benefits of ray tracing, then be ready to spend big bucks on a high-end GPU. If you're going this route, though, be sure you're also gaming on a high-end monitor that befits these powerful GPUs.

Here are our recommendations for premium GPUs:

  • NVIDIA RTX 5070 Ti: The RTX 5070 Ti surprised me with excellent 4K gaming performance for a launch price that was well below the RTX 5080. It's the best overall NVIDIA card if you want to play in 4K at 120Hz or beyond, but it's also the hardest to find at MSRP. (Check out our NVIDIA RTX 5070 Ti review.)

  • AMD Radeon RX 9070 XT: I already mentioned the RX 9070 XT. With shortages of the 5070 Ti, it's the best GPU you can buy now without paying a ridiculous premium. (Check out our AMD Radeon RX 9070 and 9070 XT review.)

  • NVIDIA RTX 5080: If the RTX 5070 Ti isn't enough for you, the RTX 5080's additional power and 16GB of VRAM should suit your fancy. Just be prepared to pay around $1,500 for it, a 50 percent jump from its $999 launch price.


Apple just made Xcode better for vibe coding

Apple has just released Xcode 26.3, and it's a big step forward for the company's support of coding agents. The new release expands on the AI features the company introduced with Xcode 26 at WWDC 2025 to give systems like Claude and ChatGPT more robust access to its in-house IDE.

With the update, Apple says Claude and OpenAI's Codex "can search documentation, explore file structures, update project settings, and verify their work visually by capturing Xcode Previews and iterating through builds and fixes." This is in contrast to earlier releases of Xcode 26, where those same agents were limited in what they could see of a developer's Xcode environment, restricting their utility. According to Apple, the change gives developers tools to streamline their workflows and work more efficiently than before.

Developers can add Claude and Codex to their Xcode terminal from the Intelligence section of the app's settings menu. Once a provider is selected, the interface allows users to also pick their preferred model. So if you like the outputs of, say, GPT-5.1 over GPT-5.2, you can use the older system.

The tighter integration with Claude and Codex was made possible by Model Context Protocol (MCP) servers Apple has deployed. MCP is a technology Anthropic debuted in fall 2024 to make it easier for large language models like Claude to share data with third-party tools and systems. Since its introduction, MCP has become an industry standard — with OpenAI, for instance, adopting the protocol last year to facilitate its own set of connections. 
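To give a sense of the plumbing, MCP messages follow the JSON-RPC 2.0 format. Here's a minimal Python sketch of the kinds of requests a client sends to a server; the tool name and arguments are invented for illustration, since Apple hasn't documented Xcode's actual tool surface:

    import json

    # Listing a server's tools is a standard MCP request.
    list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

    # Invoking one looks like this; the name and arguments are hypothetical.
    call_tool = {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {
            "name": "build_and_capture_preview",  # hypothetical tool
            "arguments": {"scheme": "MyApp"},     # hypothetical arguments
        },
    }

    print(json.dumps(call_tool, indent=2))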

Apple says it worked directly with Anthropic and OpenAI to optimize token usage through Xcode, but the company’s adoption of MCP means developers will be able to add any coding agent that supports the protocol to their terminal in the future. Xcode 26.3 is available to download for all members of the Apple Developer Program starting today, with Mac App Store availability “coming soon.”


OpenAI brings its Codex coding app to Mac, with new multi-agent abilities included

Since last spring, OpenAI has offered Codex. What started life as the company's response to Claude Code is becoming something more sophisticated with the release of a new dedicated macOS app. In its most basic form, Codex is a programming agent capable of writing code for users, but now it can also manage multiple AI assistants that work together to complete more complex tasks.

OpenAI gives an example of how this could work in practice. The company used Codex to create a Mario Kart-like racing game, complete with a selection of different playable cars, eight tracks and a collection of powerups players can use against the competition. For a single AI agent, generating a game from scratch, with all the needed visual assets, would be a tough ask, but Codex was able to complete the task because it could delegate the work of making the game to different models with complementary capabilities. 

For example, it turned to GPT Image for the visual assets, while a separate model simultaneously coded the web game. "It took on the roles of designer, game developer and QA tester to validate its work by actually playing the game," OpenAI says of the process. 

If that sounds complicated, OpenAI has tried to make it more approachable with a section of the app titled Skills. The feature bundles “instructions, resources, and scripts so Codex can reliably connect to tools, run workflows, and complete tasks according to your team’s preferences," the company explains. "The Codex app includes a dedicated interface to create and manage skills. You can explicitly ask Codex to use specific skills, or let it automatically use them based on the task at hand."

As you might imagine, Codex can also automate repetitive tasks. A dedicated Automations section of the app allows you to schedule tasks, which the software will complete in the background. "At OpenAI, we’ve been using Automations to handle the repetitive but important tasks, like daily issue triage, finding and summarizing CI failures, generating daily release briefs, checking for bugs, and more," the company said. 

The release of the Codex macOS app comes as AI startups explore what a group of AI agents working in parallel can accomplish. At the start of the year, Anysphere, the company behind Cursor, found it was possible to build a working web browser from scratch using such an approach, though it did encounter problems along the way. 

For a limited time, OpenAI is making Codex available to ChatGPT Free and Go users so they can see what's possible with this new software. At the same time, the company is doubling rate limits for Plus and Pro subscribers.


NASA used Claude to plot a route for its Perseverance rover on Mars

Since 2021, NASA's Perseverance rover has achieved a number of historic milestones, including sending back the first audio recordings from Mars. Now, nearly five years after landing on the Red Planet, it just achieved another feat. This past December, Perseverance successfully completed a route through a section of the Jezero crater plotted by Anthropic's Claude chatbot, marking the first time NASA has used a large language model to pilot the car-sized robot.    

Between December 8 and 10, Perseverance drove approximately 400 meters (about 437 yards) through a field of rocks on the Martian surface mapped out by Claude. As you might imagine, using an AI model to plot a course for Perseverance wasn't as simple as inputting a single prompt. 

As NASA explains, routing Perseverance is no easy task, even for a human. "Every rover drive needs to be carefully planned, lest the machine slide, tip, spin its wheels, or get beached," NASA said. "So ever since the rover landed, its human operators have painstakingly laid out waypoints — they call it a 'breadcrumb trail' — for it to follow, using a combination of images taken from space and the rover’s onboard cameras." 

To get Claude to complete the task, NASA had to first provide Claude Code, Anthropic's programming agent, with the "years" of contextual data from the rover before the model could begin writing a route for Perseverance. Claude then went about the mapping process methodically, stringing together waypoints from ten-meter segments it would later critique and iterate on.  
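NASA hasn't published the details of that loop, but the pattern it describes (propose a short segment, critique it, revise, move on) is easy to picture. Below is a deliberately simplified Python sketch of that propose-critique-iterate structure; every function in it is a hypothetical stand-in, not the agency's or Anthropic's actual code:

    # Toy sketch of a propose-critique-iterate planner. Waypoints are just
    # positions along a line; critique_segment() stands in for the model
    # reviewing its own work. None of this is NASA's real pipeline.
    def propose_segment(position: int, goal: int, step: int) -> int:
        return min(position + step, goal)

    def critique_segment(segment: int) -> bool:
        return False  # the toy critic never finds an issue

    def plan_route(start: int, goal: int, step: int = 10) -> list[int]:
        waypoints = [start]
        while waypoints[-1] < goal:
            segment = propose_segment(waypoints[-1], goal, step)
            while critique_segment(segment):  # revise until the critique passes
                segment = propose_segment(waypoints[-1], goal, step)
            waypoints.append(segment)
        return waypoints

    print(plan_route(0, 40))  # [0, 10, 20, 30, 40]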

This being NASA we're talking about, engineers from the agency's Jet Propulsion Laboratory (JPL) made sure to double check the model's work before sending it to Perseverance. The JPL team ran Claude's waypoints through a simulation they use every day to confirm the accuracy of commands sent to the rover. In the end, NASA says it only had to make "minor changes" to Claude's route, with one tweak coming as a result of the fact the team had access to ground-level images Claude hadn't seen in its planning process.  

"The engineers estimate that using Claude in this way will cut the route-planning time in half, and make the journeys more consistent," NASA said. "Less time spent doing tedious manual planning — and less time spent training — allows the rover’s operators to fit in even more drives, collect even more scientific data, and do even more analysis. It means, in short, that we’ll learn much more about Mars."

While the productivity gains offered by AI are often overstated, in the case of NASA, any tool that could allow its scientists to be more efficient is sure to be welcome. Over the summer, the agency lost about 4,000 employees – accounting for about 20 percent of its workforce – due to Trump administration cuts. Going into 2026, the president had proposed gutting the agency's science budget by nearly half before Congress ultimately rejected that plan in early January. Still, even with its funding preserved just below 2025 levels, the agency has a tough road ahead. It's being asked to return to the Moon with less than half the workforce it had during the height of the Apollo program.     

For Anthropic, meanwhile, this is a major feat. You may recall last spring Claude couldn't even beat Pokémon Red. In less than a year, the company's models have gone from struggling to navigate a simple 8-bit Game Boy game to successfully plotting a course for a rover on a distant planet. NASA is excited about the possibility of future collaborations, saying "autonomous AI systems could help probes explore ever more distant parts of the solar system."


Google’s Project Genie lets you create your own 3D interactive worlds

This past summer, Google DeepMind debuted Genie 3. It’s what’s known as a world model, an AI system capable of generating images and reacting as the user moves through the environment the software is simulating. At the time, DeepMind positioned Genie 3 as a tool for training AI agents. Now, it’s making the model available to people outside of Google to try with Project Genie.

To start, you’ll need Google’s $250 per month AI Ultra plan to check out Project Genie. You’ll also need to live in the US and be 18 years or older. At launch, Project Genie offers three different modes of interaction: World Sketching, exploration and remixing. The first sees Google’s Nano Banana Pro model generating the source image Genie 3 will use to create the world you will later explore. At this stage, you can describe your character, define the camera perspective — be it first-person, third-person or isometric — and how you want to explore the world Genie 3 is about to generate. Before you can jump into the model’s creation, Nano Banana Pro will “sketch” what you’re about to see so you can make tweaks. It’s also possible to write your own prompts for worlds others have used Genie to generate.

One thing to keep in mind is that Genie 3 is not a game engine. While its outputs can look game-like, and it can simulate physical interactions, there aren’t traditional game mechanics here. Generations are also limited to 60 seconds, and the output is capped at 24 frames per second at 720p. Still, if you’re an AI Ultra subscriber, this is a cool opportunity to see the bleeding edge of what DeepMind has been working on over the past couple of years.


Google brings its Nano Banana image generator to Chrome

Following its recent AI makeover of Gmail, Google is bringing more Gemini-powered tools to Chrome. Starting today, a host of new features are rolling out for the browser, with more to come over the next few months. 

The first of the new features is a sidebar. Available to all Gemini in Chrome users, the interface allows you to chat with Gemini and keep a conversation going across multiple tabs. Google suggests the sidebar is useful for multitaskers. "Our testers have been using it for all sorts of things: comparing options across too-many-tabs, summarizing product reviews across different sites, and helping find time for events in even the most chaotic of calendars," the company writes. 

Now you can access Nano Banana, Google's in-house image generator, directly from Chrome. No need to go to the Gemini app. (Image: Google)

The sidebar is also where you access the second new feature Google is adding to Chrome. Following its successful rollout within the Gemini app, Nano Banana, Google's in-house image generator, is available directly inside of the browser. With the addition, you won't need to open a new tab when you want Gemini to make you an AI image. You also won't need to download and upload a file when you want Gemini to edit an existing image for you. Instead, you can complete both of those tasks from any of your open tabs, thanks to the new sidebar.    

Looking forward, Google plans to bring Personal Intelligence, which debuted inside of the Gemini app at the start of January, to Chrome in the coming months. Once the feature arrives, it will allow the browser to remember past conversations you've had with Gemini. In turn, Google says this will lead to a more personalized Chrome. "Personal Intelligence in Chrome transforms the browsing experience from a general purpose tool into a trusted partner that understands you and provides relevant, proactive, and context-aware assistance," the company said.

In the meantime, Gemini in Chrome already supports Google's Connected Apps feature, which allows the assistant to pull information from the company's other services, including Gmail and Calendar. During a press briefing, a Google employee demoed this feature by asking Gemini to pull up the dates when their children would be on March break. Without telling the assistant where to look, Gemini sourced the correct time frame from the employee's email inbox.

A new sidebar interface allows Chrome users to access Gemini from any of their open tabs. (Image: Google)

Last but not least, Google is previewing a new auto browse feature inside of Chrome. In the demo the company showed, an employee asked Gemini to find and buy them the same winter jacket they bought a few seasons ago. The assistant first drafted a plan outlining how best to tackle the request. It reasoned the best place to start was with a search of the employee's email inbox to determine the correct model and size of jacket. It then went shopping.

While Gemini was working on this task, the employee was free to continue browsing in Chrome. At several points in the process, the assistant would stop before continuing to obtain the employee's permission to move forward. For instance, it paused when it needed login credentials, and again when it needed a credit card number to complete the purchase. 

Judging from the demo, it will probably take you less time to do your online shopping and other browser tasks on your own. Google suggests the feature will appeal to those who are creatures of habit. If you order the same produce from a grocery delivery service every week, say, Gemini can automate the ordering. Plus, the feature is in preview, so early testers probably won't be too put off by Gemini's slow pace. In any case, Google AI Pro and Ultra subscribers in the US can try auto browse starting today.


OpenAI releases Prism, a Claude Code-like app for scientific research

OpenAI is releasing a new app called Prism today, and it hopes the app will do for science what coding agents like Claude Code and its own Codex have done for programming.

Prism builds on Crixet, a cloud-based LaTeX platform that OpenAI announced today it has acquired. For the uninitiated, LaTeX is a typesetting system for formatting scientific documents and journals. Nearly the entire scientific community relies on LaTeX, but it can make some tasks, such as drawing diagrams with TikZ commands, time-consuming. Beyond that, LaTeX is just one of the software tools a scientist might turn to when preparing to publish their research.
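For a flavor of why diagramming is tedious, here's what a trivial two-node diagram takes in raw LaTeX with TikZ (a generic example, not one drawn from Prism):

    \documentclass{article}
    \usepackage{tikz}
    \begin{document}
    \begin{tikzpicture}
      % Even two labeled nodes and an arrow require several explicit commands
      \node[draw, circle] (a) at (0, 0) {$x$};
      \node[draw, circle] (b) at (3, 0) {$f(x)$};
      \draw[->] (a) -- node[above] {apply $f$} (b);
    \end{tikzpicture}
    \end{document}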

That's where Prism comes into the picture. Like Crixet before it, the app offers robust LaTeX editing and a built-in AI assistant. Where previously it was Crixet's own Chirp agent, now it's GPT-5.2 Thinking. OpenAI's model can help with more than just formatting journals — in a press demo, an OpenAI employee used it to find and incorporate scientific literature that was relevant to the paper they were working on, with GPT-5.2 automating the process of writing the bibliography.

"None of this absolves the scientist of the responsibility to verify that their references are correct, but it can certainly speed up the process," said Kevin Weil, vice president of science for OpenAI, when asked during the demo about the possibility of ChatGPT generating fake citations.

"We're conscious that, as AI becomes more capable, there are concerns around volume, quality and trust in the scientific community," he later added. "Our view is that the right response is not to keep AI at arm's length or let it operate invisibly in the background; it's to integrate it directly into scientific workflows in ways that preserve accountability and keep researchers in control." 

Later in the same demo, the OpenAI employee used Prism to generate a lesson plan for a graduate course on general relativity, as well as a set of problems for students to solve. OpenAI envisions these features helping scientists and professors spend less time on the more tedious tasks in their professions. 

Prism is available to anyone with a personal ChatGPT account. It includes support for unlimited projects and collaborators. OpenAI plans to bring the software to organizations on ChatGPT Business, Team, Enterprise and Education plans soon. Crixet won’t be offered separately.


Gemini 3 is now Google’s default model for AI Overviews

Google has begun rolling out two upgrades for Search. Starting today, Gemini 3 is the default model powering AI Overviews. When the company debuted its new family of AI systems last November, it first deployed Gemini 3 in AI Overviews through a router that was programmed to direct the most difficult questions to the new system. Now Google is making Gemini 3 the standard for all users globally. In practice, Gemini 3 should generate more credible and relevant summaries.

As for that second upgrade, you can now jump into an AI Mode conversation directly from an AI Overview. Google first previewed this feature late last year.

"In our testing, we’ve found that people prefer an experience that flows naturally into a conversation — and that asking follow-up questions while keeping the context from AI Overviews makes Search more helpful," said Robby Stein, vice president of product for Google Search. "It’s one fluid experience with prominent links to continue exploring: a quick snapshot when you need it, and deeper conversation when you want it."

If you're using Google Search on a mobile device, you can jump directly into an AI Mode conversation from an AI Overview starting today. 


Claude now offers deeper integrations with apps like Canva and Slack

Anthropic has been building out support for third-party apps inside of Claude. As of today, the chatbot can connect to platforms like Slack and Canva, fetching files from inside those apps or performing tasks within them on a user's behalf.

For instance, when connected to Box, Claude can now search for files, preview documents inline and answer questions about the content in front of you. Meanwhile, with a connection to Asana, it can now turn chats into projects, tasks and timelines your co-workers can then find and interact with on the project management app. 

Box and Asana are just two of the platforms adding deeper integrations with Claude today. In total, there are nine launch partners, with some of the more notable ones including Canva, Figma and Slack.   

As with Anthropic's past integrations, the new functionality is powered by Model Context Protocol (MCP) servers. MCP is a technology Anthropic released in fall 2024 to make it easier for third-party platforms to connect their systems to Claude. Since then, the protocol has become an industry standard. OpenAI, for instance, adopted MCP last year and has been building additional support since then. At the end of last year, Anthropic donated the protocol to the Linux Foundation. The company says other AI platforms will be able to bring similar integrations to their own products, since the new integrations are built on an open extension Anthropic designed.
