How to de-Gemini your Google apps

Over the past couple of years, Google has found ways to stuff Gemini into nearly every app and service it offers. Whether it's Gmail with its AI inbox or Chrome with its chat sidebar, Gemini is now inescapable across Google's products. I don't know about you, but I don't need an AI to tell me how to write a =SUM formula in Sheets or an outline for a first draft. Most of the time, I find Gemini is a distraction. If you feel the same way, this how-to is for you.

From the "General" tab of Gmail's settings menu, look for the Smart features checkbox.
Igor Bonifacic for Engadget

To turn Gemini off, you will need to disable two separate sets of options. The first covers features, including Smart Compose, that are shared across Gmail, Chat and Meet, so turning them off in one app disables them in all three. The whole process is easiest through Gmail's web client.

  1. In Gmail, tap the cog icon.

  2. Select See all settings.

  3. Under the General tab, scroll down to find Smart features.

  4. Disable Turn on smart features in Gmail, Chat, and Meet.  

If you're in Japan, Switzerland, the United Kingdom or the European Economic Area, smart features are turned off by default.

Next, turn your attention to Workspace. 

  1. In Gmail, tap the cog icon.

  2. Select See all settings.

  3. Under the General tab, scroll down and click Manage Workspace smart feature settings.

  4. Toggle off Smart features in Google Workspace and Smart features in other Google products.

A word of warning: completely disabling smart features in Google Workspace turns off not only Gemini integration but also basic capabilities like spelling and grammar corrections. You'll also lose features that have been Google staples for years. In Gmail, for example, the app will stop sorting incoming emails by priority, with a notification at the top of the screen informing you that smart features are required for inbox categorization. Whatever Google's motivation for this state of affairs, it's a design decision that actively discourages users from disabling Gemini integration.

Disabling Gemini in Google Workspace will also turn off other features.
Igor Bonifacic for Engadget

If you want to rid yourself of Gemini but would still like to use some of the other features the company offers through Gmail and its other apps, I recommend leaving the first set of smart features on while disabling the Workspace-specific ones. You can also opt to turn off some of the features included in the first group, while leaving others on. Below is a list of those features, with a brief overview of the less self-explanatory ones.    

  • Grammar

  • Spelling

  • Autocorrect

  • Smart Compose — as you write an email, Gmail will generate predictive writing suggestions  

  • Smart Compose personalization — as you write, Gmail will tailor Smart Compose suggestions to your writing style 

  • Nudges — Gmail will generate notifications prompting you to respond or follow up on unanswered emails   

  • Smart Reply — Gmail will generate suggestions on how to respond to an email

  • Package tracking — Google will display shipping updates inside of Gmail

  • Desktop notifications — Yes, for some reason you need the power of AI to get notifications on your PC 

Unfortunately, Google doesn't offer the same level of granular control over the smart features inside Workspace. There's no way, for instance, to turn off Gemini in Docs alone: disable Workspace smart features and Calendar will also stop automatically displaying events from Gmail. Again, Google really wants to dissuade you from disabling Gemini.

If your workplace uses Google Workspace, all of the above options should be present in Gmail's settings menu, and you can follow the same steps to turn off most of the smart features Google offers. Unfortunately, the second part of the process does nothing. You will still see Gemini in Docs, Sheets and elsewhere, even with smart features in Workspace turned off. Only your admin can completely turn off Gemini for you.

This article originally appeared on Engadget at https://www.engadget.com/ai/how-to-de-gemini-your-google-apps-170000462.html?src=rss

How to watch the Artemis II landing

After its history-making trip around the Moon, NASA's Artemis II mission is set to return to Earth later today. The Orion spacecraft carrying astronauts Reid Wiseman, Christina Koch, Victor Glover and Jeremy Hansen is scheduled to splash down off the coast of San Diego at approximately 8:07PM ET. NASA will stream the landing on YouTube and its NASA+ website, as will Netflix and HBO Max. The official broadcast will begin at 6:30PM ET.  

After leaving Earth on NASA's super heavy-lift SLS rocket and spending nine days in space, the crew now faces the most dangerous part of the Artemis II mission. It will take approximately 13 minutes for the Orion spacecraft to complete re-entry. During that time, it will be subject to temperatures of up to 5,000 degrees Fahrenheit (2,760 degrees Celsius).

Re-entry is dangerous for any crewed spacecraft, but it's of particular concern here because of a "skip re-entry" during the Artemis I mission. At that time, the Orion crew vessel briefly used its own lift to "skip" back out of Earth's upper atmosphere before re-entering for the final descent, suffering excess charring in the process. NASA spent months investigating and determined the craft was safe to fly, but Artemis II will take a more gradual approach back to Earth in hopes of reducing its exposure to excess heat.

Still, this is the first time in 53 years that NASA will need to guide a human crew back from the Moon. Once all is said and done, however, the Artemis II crew will have traveled 695,081 miles (1,118,624 km), captured amazing images along the way and reminded the world what’s possible when nations work together. 

This article originally appeared on Engadget at https://www.engadget.com/science/space/how-to-watch-the-artemis-ii-landing-145344873.html?src=rss

Meta’s Muse Spark model brings reasoning capabilities to the Meta AI app

Following the icy reception to Llama 4, Meta is releasing the first in a new family of AI systems built by its recently formed Superintelligence team. The company is kicking off its new Muse era with Spark, a lightweight model geared toward consumer use. In the future, Meta plans to offer more capable versions of Muse, but for now, it's clear the company wants to nail the basics. 

To that point, many of Spark's capabilities are table stakes for a new model in 2026. For instance, it offers both "Instant" and "Thinking" modes. With the latter engaged, the model takes an extra few moments to reason through a prompt. Other consumer-facing AI systems have had this kind of flexibility for a while; Anthropic, for example, was one of the first AI labs to offer a "hybrid reasoning model" when it released Claude 3.7 Sonnet at the start of last year. That said, Meta plans to add an even more powerful "Contemplating" mode down the road.

A GIF demonstrating Muse Spark's multi-agent capabilities.
Meta

Muse Spark can also coordinate multiple AI subagents to tackle a request. Meta suggests users will see this capability in action when they ask for help with tasks like family trip planning. In such a scenario, one agent might compile an itinerary while another finds kid-friendly activities everyone can enjoy. Meta has also built Spark to be natively multimodal, meaning the model can process images, video and audio. As with Google Lens, that gives you the option to snap a photo with your phone and ask Meta AI questions about what you see.
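The subagent coordination described above maps onto a familiar fan-out-and-merge pattern. Here is a generic sketch of that pattern; the agent names and stub functions are hypothetical stand-ins for model calls, not Meta's actual API:

```python
# Generic sketch of a multi-agent fan-out for a trip-planning request.
# Each "agent" is a stub function standing in for a model call.

def itinerary_agent(request: str) -> str:
    # Would normally prompt a model to draft a day-by-day plan.
    return f"Draft itinerary for {request}"

def activities_agent(request: str) -> str:
    # Would normally prompt a model to surface kid-friendly options.
    return f"Kid-friendly activities for {request}"

def orchestrate(request: str) -> dict:
    """Fan the request out to specialized subagents and merge the results."""
    subagents = {
        "itinerary": itinerary_agent,
        "activities": activities_agent,
    }
    return {name: agent(request) for name, agent in subagents.items()}

plan = orchestrate("a family trip to Lisbon")
for section, result in plan.items():
    print(f"{section}: {result}")
```

A production system would run the subagent calls concurrently and have a final model pass merge the sections into one answer.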

Of course, it wouldn't be a 2026 AI release if Muse Spark didn't include a built-in shopping assistant. Like ChatGPT, Spark can compare different items for you, listing the pros and cons of each, with links to make it easy to buy the product that appeals to you.

Muse Spark is available today in the Meta AI app and on the meta.ai website everywhere the company offers those services. Meta will begin rolling out the new features the model powers in the US. In the coming weeks, the company plans to bring Muse Spark to more countries and to the other places where people can access Meta AI, including Facebook, Instagram and WhatsApp.

Additionally, Meta says it "hopes to open source future versions of the model." We'll see if the company ends up doing that; last year, Meta CEO Mark Zuckerberg appeared to flip-flop on the company's open source stance, saying it would need to be more "rigorous" about such decisions moving forward.

This article originally appeared on Engadget at https://www.engadget.com/ai/metas-muse-spark-model-brings-reasoning-capabilities-to-the-meta-ai-app-161456684.html?src=rss

Chrome finally adds support for vertical tabs

Google has started rolling out a small but significant update to Chrome on desktop. Starting today, users will begin seeing an option to organize their tabs vertically. To use the new feature, right-click on any Chrome window and select "Show Tabs Vertically."

Google is late to the game here. Before today, every major browser but Chrome offered vertical tabs, though the quality of implementation varies widely. Firefox, for instance, has supported vertical tabs since its version 136 update in March of last year and, in my experience, has one of the best interfaces for managing dozens of tabs. Apple's Safari is another browser with the option to stack tabs vertically, though things can quickly get confusing due to all the different ways you can group webpages.

Separately, Google is rolling out an enhanced reading mode with a new full-page interface. To use the feature, right-click on a page and select "Open in reading mode." As you might imagine, reading mode is designed to make busy webpages easier to get through without distraction. As with most Chrome upgrades, it may take a few days before today's update reaches your device, so be patient if you don't see it right away.

This article originally appeared on Engadget at https://www.engadget.com/computing/chrome-finally-adds-support-for-vertical-tabs-170000081.html?src=rss

How to watch the historic Artemis II lunar flyby

NASA's Artemis II mission is about to make history. After a successful April 1 launch and a trip of 39,000 miles through space, astronauts Reid Wiseman, Christina Koch, Victor Glover and Jeremy Hansen are about to travel farther from Earth than any human beings have before, and you can watch it all unfold online. NASA will stream the entire flyby on YouTube and its own NASA+ website, with coverage beginning at 1PM ET. You can also watch NASA+ through Netflix.

It's going to take some time for things to get underway, so if you're working or have plans this evening but don't want to miss seeing history being made, your best bet is to try and catch a handful of key moments. At approximately 1:56PM ET, Artemis II will fly farther than any crewed mission has before, breaking the previous record set by Apollo 13 in 1970. Then, the Orion spacecraft will begin its flyby of the Moon at 2:45PM ET, with the craft expected to make its closest approach to the lunar surface at approximately 7:02PM ET. A few short minutes later, the spacecraft will reach its maximum distance from Earth at about 7:07PM ET. 

A little more than an hour later, at 8:35PM ET, the Artemis II crew will get a chance to see a total solar eclipse from the far side of the Moon, something that won't be visible from Earth. If you can only catch one part of the broadcast, this is the one to watch.

This article originally appeared on Engadget at https://www.engadget.com/science/space/how-to-watch-the-historic-artemis-ii-lunar-flyby-155114417.html?src=rss

OpenAI brings ChatGPT’s Voice mode to CarPlay

In a surprise release, OpenAI has made ChatGPT's Voice mode available through Apple CarPlay. If you're running the latest version of both iOS and the ChatGPT app, and own a CarPlay-compatible vehicle, you can check out the experience. To get started, download all the necessary software, connect your iPhone to CarPlay and select "New voice chat" from ChatGPT. When the in-app text indicates ChatGPT is "listening," you can start a conversation.         

There are some notable limitations to using ChatGPT Voice with CarPlay. For one, OpenAI's chatbot can't control car functions. If you want to adjust the cabin temperature or skip tracks, you'll still need Siri for those tasks. Due to Apple's restrictions, you also can't invoke ChatGPT with a wake word like you can with Siri. For example, to resume a previous conversation, you need to open the ChatGPT app from CarPlay and tap a recent or pinned chat.

With those limitations in mind, OpenAI suggests you can use Voice mode to get how-to advice, brainstorm ideas and practice languages. Personally, I like to listen to podcasts and music when I'm driving, but if talking with ChatGPT is your thing, you do you.    

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-brings-chatgpts-voice-mode-to-carplay-191422297.html?src=rss

Google releases Gemma 4, a family of open models built off of Gemini 3

When Google released Gemini 3 Pro at the end of last year, it was a significant step forward for the company's proprietary large language models. Now, the company is bringing some of the same technology and research that made those models possible to the open source community with the release of its new family of Gemma 4 open-weight models.

Google is offering four different versions of Gemma 4, differentiated by parameter count. For edge devices, including smartphones, the company has the 2-billion and 4-billion "Effective" models. For more powerful machines, there's the 26-billion "Mixture of Experts" and 31-billion "Dense" systems. For the unfamiliar, parameters are the settings a large language model can tweak to generate an output. Typically, models with more parameters will deliver better answers than ones with fewer, but running them also requires more powerful hardware.
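For a rough sense of what those parameter counts mean in hardware terms, here is a back-of-the-envelope sketch. Only the parameter counts come from Google's announcement; the bytes-per-parameter rule of thumb and the precision options are my assumptions, and real memory use runs higher once activations and runtime overhead are included:

```python
# Back-of-the-envelope weight memory for the four Gemma 4 variants.
# Rule of thumb: each parameter costs a fixed number of bytes at a
# given precision; activations and runtime overhead are ignored.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

# Parameter counts from the announcement.
VARIANTS = {
    "Effective 2B": 2e9,
    "Effective 4B": 4e9,
    "Mixture of Experts 26B": 26e9,
    "Dense 31B": 31e9,
}

def approx_gb(params: float, precision: str = "fp16") -> float:
    """Approximate weight memory in GB at the given precision."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for name, params in VARIANTS.items():
    print(f"{name}: ~{approx_gb(params):.0f} GB at fp16, "
          f"~{approx_gb(params, 'int4'):.1f} GB at int4")
```

By this rough math, the 2-billion model fits on a phone once quantized, while the 31-billion variant needs a discrete GPU or a machine with plenty of unified memory.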

With Gemma 4, Google claims it's managed to engineer systems with "an unprecedented level of intelligence-per-parameter." To back up this claim, the company points to the performance of Gemma 4's 31-billion and 26-billion variants, which claimed the third and sixth spots respectively on Arena AI's text leaderboard, beating out models 20 times their size.     

All of the models can process video and images, making them ideal for tasks like optical character recognition. The two smaller models are also capable of processing audio inputs and understanding speech. Separately, Google says the Gemma 4 family can generate code offline, meaning you could use the models for vibe coding without an internet connection. Google has also trained the models in more than 140 languages.

Google is releasing the Gemma 4 family under an Apache 2.0 license; previous Gemma models were available only under Google's own Gemma license. The move gives people a great deal more freedom to modify the new systems to their needs.

"This open-source license provides a foundation for complete developer flexibility and digital sovereignty; granting you complete control over your data, infrastructure and models," Google said. "It allows you to build freely and deploy securely across any environment, whether on-premises or in the cloud."

If you want to give one of the systems a try for yourself, the model weights are available through Hugging Face, Kaggle and Ollama. 

This article originally appeared on Engadget at https://www.engadget.com/ai/google-releases-gemma-4-a-family-of-open-models-built-off-of-gemini-3-160000332.html?src=rss

Claude Code leak suggests Anthropic is working on a ‘Proactive’ mode for its coding tool

What should have been a routine release has revealed some of the features Anthropic has been working on for Claude Code. As reported by Ars Technica, The Verge and others, after the company released Claude Code's 2.1.88 update on Tuesday, users found it contained a file that exposed the app's source code. Before Anthropic took action to plug the leak, the codebase was uploaded to a public GitHub repository, where it was subsequently copied more than 50,000 times. All told, the entire internet (and Anthropic's competitors) got a chance to examine more than 512,000 lines of code and 2,000 TypeScript files. 

In the aftermath, some people claim to have found evidence of upcoming features Anthropic is working to develop. Over on X, Alex Finn, the founder of AI startup Creator Buddy, says he found a flag for a feature called Proactive mode, which would see Claude Code work even when the user hasn't prompted it to do something. Finn claims he also found evidence of a crypto-based payment system that could potentially allow AI agents to make autonomous payments. In a Reddit post spotted by The Verge, another person found evidence that Anthropic might have been working on a Tamagotchi-like virtual companion that "reacts to your coding" as a kind of April Fools' joke.

"A Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson told BleepingComputer. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

As with any other leak, it's worth remembering plans can and often do change. Just because a company has written the code to support a feature doesn't mean it will eventually ship said feature. 

This article originally appeared on Engadget at https://www.engadget.com/ai/claude-code-leak-suggests-anthropic-is-working-on-a-proactive-mode-for-its-coding-tool-150107049.html?src=rss

The RAM crisis is Apple’s best chance in decades to capture the PC market

In the current RAM crisis, no company is better positioned than Apple to not only weather the storm but turn it to its advantage. It proved as much when it released the MacBook Neo in early March. Despite including only 8GB of RAM, the Neo doesn't feel compromised, a testament to the company's silicon and software engineering. For Apple, it may be tempting to treat its latest MacBook as a one-off. That would be a mistake, because at this moment, the business decisions that made the Neo possible represent a once-in-a-generation opportunity to become a bigger player in the PC market.

If you read Engadget, there's a good chance you know the contours of the global memory shortage, but it's worth repeating just how bad things have become in recent months. Just three companies — SK Hynix, Samsung and Micron — produce more than 90 percent of the world's memory chips. At the end of last year, Micron announced it would end its consumer-facing business to focus on providing RAM and other components to AI customers.  

Citing data from TrendForce, The Wall Street Journal reported in January that data centers would consume 70 percent of the high-end memory produced in 2026. As the Big Three shift more of their production to meet enterprise demand, they're allocating fewer wafers for consumer products, leading to dramatic price increases in that market segment. According to data from Counterpoint Research, the price of memory (including consumer RAM kits and SSDs, as well as LPDDR5X memory for smartphones) increased by 50 percent during the final quarter of 2025. Before the end of the current quarter, the firm predicts prices will increase by another 40 to 50 percent, and the CEO of SK Hynix recently warned shortages could last until 2030.

Since nearly all consumer electronics need some amount of RAM and storage, the trickle-down effects have come fast and hard. In December, before the situation got as bad as it is now, TrendForce warned that most of the major PC manufacturers were either considering, if not already planning, price hikes. This month, the firm warned laptop prices could increase by as much as 40 percent if manufacturers and retailers moved to protect their margins. Such a scenario would send the cost of a $900 model to about $1,260.
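The arithmetic behind those projections is simple compounding. This sketch uses only the figures cited above; treating the successive increases as multiplicative is my assumption:

```python
# Compounding the reported and predicted memory price increases.
q4_2025_increase = 0.50                             # Counterpoint: +50% in Q4 2025
next_increase_low, next_increase_high = 0.40, 0.50  # predicted further range

base = 100.0                        # index price before Q4 2025
after_q4 = base * (1 + q4_2025_increase)
low = after_q4 * (1 + next_increase_low)
high = after_q4 * (1 + next_increase_high)
print(f"Memory price index: {base:.0f} -> {after_q4:.0f} -> {low:.0f}-{high:.0f}")

# TrendForce's laptop scenario: a 40 percent hike on a $900 model.
laptop_after_hike = 900 * 1.40
print(f"$900 laptop after a 40% hike: ${laptop_after_hike:.0f}")
```

Compounded, that would leave memory 110 to 125 percent above its price before Q4 2025.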

Amid all that, Apple added another point of pressure: the $600 MacBook Neo. During a recent investor call, Nick Wu, the chief financial officer of ASUS, described the Neo as "a shock to the entire market," adding "all PC vendors, including upstream vendors like Microsoft, Intel and AMD" are taking the cute device "very seriously." Wu warned ASUS would "need more time" before it could ready a response.         

For ASUS and other Windows manufacturers, any response realistically may take a year or more to formulate. That's because the Neo represents both a technical and logistical hurdle. 

To start, the Neo is a fundamentally different machine from the ones most Windows OEMs are making right now. It has the advantage of using "unified memory" instead of traditional RAM modules: the Neo's 8GB is shared between the A18 Pro's CPU and GPU, letting the system use what it has more efficiently. That's part of the reason the Neo doesn't feel like a Windows PC with 8GB of RAM. Apple didn't get to the A18 Pro and the MacBook Neo by accident; it has spent more than a decade designing its own chips.

Since 2024, Microsoft has mandated 16GB of RAM (and 256GB of solid-state storage) for PCs that are part of its Copilot+ AI program. That branding effort may not have amounted to much, with Copilot+ AI PCs accounting for just 1.9 percent of all computers sold in the first quarter of 2025, but it did push OEMs, including ASUS, Dell and others, to make more capable machines. It also saw Microsoft rework Windows to better support ARM-based processors from Qualcomm. Still, it's hard to see how Windows manufacturers can challenge Apple by going back to existing or older x86 chips with less RAM.

Qualcomm's Snapdragon X2 processors could offer a potential response, but there are question marks there too. At CES 2026, the company announced the Snapdragon X2 Plus, a pared-down version of its X2 Elite chipset with a six-core CPU. On paper, it should offer similar performance to the A18 Pro, but it doesn't seem Qualcomm has produced the chip at scale or that Windows OEMs have shown much interest in it. As of this writing, the company's website lists just four X2 Plus-equipped models. I was only able to find one of those in stock, the $1,050 HP Omnibook 5, which has an OLED screen and more RAM than the Neo. Could HP repurpose something like the Omnibook 5 to take on the Neo? Maybe, but I'm not sure there's any getting around the need for 16GB to run Windows 11 decently.

Even if the Snapdragon X2 Plus offers a stopgap, no company operates a supply chain quite like Apple. It has spent billions of dollars to make itself independent of companies like Qualcomm, designing its own Wi-Fi and Bluetooth chips, for example. It also doesn't need to pay Microsoft a licensing fee for a bloated Windows 11. Those are all factors that leave OEMs like ASUS and Lenovo operating on razor-thin margins.

Per Statista, Apple earned a nearly 36.8 percent gross profit margin on its products in 2025. That's almost exactly half as much as the gross margin it made on services, which grew to a record 75.4 percent last year. For comparison, ASUS has seen its profit margins erode to about 15.3 percent in recent quarters, or less than a third of Apple's 2025 average of 46.9 percent. For ASUS and other Windows OEMs, the short-term outlook isn’t good. HP recently told investors RAM now accounts for more than a third of the cost of its PCs. And if memory shortages continue, many of them will be forced to raise their prices to protect their margins. 
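To see why RAM at a third of build cost is so painful, here is a hedged sketch of the margin squeeze on a hypothetical laptop. The $1,000 price and $850 build cost are invented example numbers; only the one-third RAM share and the predicted 40 to 50 percent price increase come from the reporting above:

```python
# How a further RAM price jump squeezes a PC maker's gross margin.
price = 1000.0               # hypothetical selling price
cost = 850.0                 # hypothetical build cost
ram_share = 1 / 3            # RAM as a share of build cost (per HP)
ram_increase = 0.45          # midpoint of the predicted 40-50% jump

margin_before = (price - cost) / price
new_cost = cost * (1 - ram_share) + cost * ram_share * (1 + ram_increase)
margin_after = (price - new_cost) / price

print(f"margin before RAM hike: {margin_before:.1%}")
print(f"margin after RAM hike:  {margin_after:.1%}")
```

In this toy example, the gross margin collapses from 15 percent to around 2 percent unless the OEM raises its price, which is exactly the choice the article describes.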

Apple is in no such position. The iPhone recently had its best quarter ever, contributing $85.27 billion to the company's Q1 revenue. The fact that Mac revenue declined from $8.9 billion to $8.3 billion year-over-year didn't make a dent in Apple's bottom line. For the companies that must now compete against the Neo, it's not a fair playing field. For Lenovo, Dell, HP and ASUS, PC sales are nearly their entire business. For Apple, they're a side hustle.

As the company prepares to kick off its 51st year, it should consider that it may never be in a better position to claw ahead in the market where it all started. In both the PC and smartphone segments, Apple's market share has always been a distant second (and sometimes third or fourth) to Windows and Android, in part because commoditization has consistently worked against the company. But when a single part now accounts for a third of the cost of a new PC, the regular rules don't apply.

It's not just that the company is better insulated than nearly every other player against runaway RAM costs; it also has a technological edge and the profit margins to compete on price at the same time. In recent quarters, Apple's share of the PC market has hovered around 9 to 10 percent, making it consistently about the fourth-largest manufacturer.

For as long as the RAM shortage continues, Apple should seriously consider sacrificing some of its PC profits to become a bigger player. So far, the company has moved to protect the margins on its more expensive devices: it increased the price of the latest MacBook Air and MacBook Pro by $100, for example, while doubling the base storage to soften the hike.

Moving forward, it should do everything it can to maintain, and maybe even lower, the price of its computers to a point its competitors can't match. If the Lenovos and HPs of the world can't compete on either price or performance, consumers will move to Macs. As Apple looks to the next 50 years, it may not get another opportunity like the one it has right now.

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/the-ram-crisis-is-apples-best-chance-in-decades-to-capture-the-pc-market-130000672.html?src=rss