Update, 4:05PM ET: A few hours after this story was published, Google reached out to retract the news. The company provided Engadget with the following statement:
"Search Live has not rolled out globally to all users. It remains available in the US and India, with testing currently underway in additional markets. We apologize for the earlier miscommunication."
Given that the company says it is testing in more markets, it seems entirely possible that the global Search Live release will happen sooner rather than later. But, for now, it’s on hold.
The original, unedited article follows below:
After rolling out Search Live to all US Google app users last September, Google is now bringing the feature to every place where it offers its AI Mode chatbot. Search Live, if you need a reminder, allows you to point your phone's camera at an object or scene and ask questions about what you see in front of you. Google debuted the tool at I/O 2025 before it began rolling it out to users. With today's expansion, Search Live is available in more than 200 countries and territories.
What's more, Google has updated the feature to run on its Gemini 3.1 Flash model, an upgrade the company says should translate to more natural conversations, in addition to a faster and more reliable experience. The new model is also natively multilingual. You can access Search Live from the Google app on Android and iOS. Tap the "Live" button below the search bar to get started. You can also access Search Live through Google Lens. As in the Google app, look for the "Live" icon, located near the bottom of the screen, to start chatting.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-is-testing-search-live-in-more-markets-150000316.html?src=rss
When OpenAI released GPT-5.4 at the start of March, the company said the new model was designed primarily for professional work like programming and data analysis. Now OpenAI is launching GPT-5.4 mini and nano, and while it is once again highlighting the usefulness of these new systems for tasks like coding, one of the new models is available to Free and Go users. What's more, that model, GPT-5.4 mini, even offers performance that approaches GPT-5.4 in a handful of areas.
As a Free or Go user, you can access 5.4 mini by selecting "Thinking" from ChatGPT's plus menu. For paid users, the model is the new fallback for when you've hit your rate limit with 5.4 proper. OpenAI says 5.4 mini offers better performance than GPT-5.0 mini in a few different key areas, including reasoning, multimodal understanding and tool use. That means 5.4 mini is better at parsing non-text inputs such as images and audio, and has a more nuanced understanding of how to do things like search the web. It does all of this while running more than twice as fast as its predecessor.
As for GPT-5.4 nano, OpenAI says it's ideal for tasks such as data classification and extraction where speed and cost-efficiency are top of mind. If you're a ChatGPT user, you won't find the new model in the chatbot. Instead, OpenAI is making it available only through its API service. The company envisions developers using more advanced models to delegate tasks to AI agents running GPT-5.4 nano, and that's reflected in the cost of the new model, which OpenAI has priced starting at $0.20 per million input tokens.
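At that price, per-task costs are tiny. Here's a rough back-of-the-envelope sketch; the $0.20-per-million input price comes from OpenAI's announcement, but the document counts and token estimates below are made up for illustration, and output-token pricing isn't covered since it wasn't disclosed:

```python
# Back-of-the-envelope input-token cost for a model priced per million tokens.
def input_cost_usd(input_tokens: int, price_per_million: float = 0.20) -> float:
    """Estimate the input-token cost in USD at a per-million-token rate."""
    return input_tokens / 1_000_000 * price_per_million

# Hypothetical example: classifying 10,000 documents at ~500 input tokens each
print(f"${input_cost_usd(10_000 * 500):.2f}")  # prints "$1.00"
```

Five million input tokens for a dollar is the kind of math that makes nano-class models attractive for high-volume classification work.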
This article originally appeared on Engadget at https://www.engadget.com/ai/gpt-54-mini-brings-some-of-the-smarts-of-openais-latest-model-to-chatgpt-free-and-go-users-170000585.html?src=rss
At the start of the year, Google introduced Personal Intelligence, a Gemini feature that allows the chatbot to pull information from the user's other Google apps and services to generate personalized responses. After making the feature first available to Google AI Pro and Ultra subscribers, the company is expanding availability to more users in the US.
Google is kicking off the expansion with AI Mode. Starting today, anyone in the US can enable Personal Intelligence inside of the company's dedicated search chatbot. To enable the feature, tap on your profile, select Search personalization, followed by Connected Content Apps. From there, select Connect Workspace and Google Photos.
In the coming weeks, Google will start rolling out Personal Intelligence to free users of the Gemini app in the US, with international availability to follow thereafter. The company plans to do the same with Gemini in Chrome, where personalization will first roll out to users in the US before becoming available in other countries.
Google suggests a few different use cases for Gemini personalization inside of AI Mode, the Gemini app and Chrome. For instance, say you turn to AI Mode for help with planning an upcoming trip. Instead of generating a generic itinerary, the chatbot will pull information from your apps to suggest something more tailored to your interests. It can also help you with troubleshooting in cases where you can’t remember the exact make or model of a device you’re trying to fix, as long as there are some hints to its origin contained inside of your Gmail account.
In each case, Personal Intelligence is disabled by default. Gemini will not personalize its responses unless you enable the new feature. Additionally, personalization is available only for personal accounts, not for Workspace business, enterprise and education users.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-makes-gemini-personalization-available-to-free-users-160000581.html?src=rss
OpenAI's forthcoming "adult mode" will allow users to engage in lewd conversations with ChatGPT, but not use the chatbot to generate explicit images, audio or video. In response to reporting from The Wall Street Journal, an OpenAI spokesperson characterized the upcoming release as capable of producing smut rather than pornography.
OpenAI CEO Sam Altman first floated the idea of allowing people to use ChatGPT for erotica last October, saying the company wanted to "treat adult users like adults." OpenAI originally planned to release adult mode at the start of 2026. Since then, the company has pushed back the feature a handful of times, with the most recent delay coming at the start of March so that OpenAI could "focus on work that is a higher priority for more users."
Through The Journal's reporting, we're learning OpenAI forged ahead with work on adult mode despite reservations from its council on wellbeing and AI. The group of eight researchers and experts was reportedly unanimous in warning the company that AI-generated erotica could lead to people developing an unhealthy emotional dependence on ChatGPT, and that underage users would almost certainly find ways to access the feature. According to The Journal, one council member, citing cases where people have taken their own lives after becoming attached to ChatGPT, said the company was at risk of creating a "sexy suicide coach."
Those concerns appear to have been well-founded. At one point, the company's age verification technology was misidentifying underage users as adults about 12 percent of the time, according to The Journal. At OpenAI's scale, with around 100 million teens using ChatGPT every week, that error rate would have translated to millions of minors accessing erotic chats. OpenAI told The Journal its prediction algorithm performs to industry standards, adding that no such system will ever be completely foolproof.
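The arithmetic behind that "millions of minors" figure is straightforward, using the two numbers The Journal reported:

```python
# Rough scale of the misidentification problem described by The Journal.
weekly_teen_users = 100_000_000  # teens reportedly using ChatGPT each week
error_rate = 0.12                # share misidentified as adults at one point

misidentified = round(weekly_teen_users * error_rate)
print(f"{misidentified:,}")  # prints "12,000,000"
```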
This article originally appeared on Engadget at https://www.engadget.com/ai/openais-adult-mode-reportedly-wont-generate-pornographic-audio-images-or-video-150744035.html?src=rss
Inevitably, the more you use something — your Mac included — the dirtier and more cluttered it’s likely to become. At that point, you can buy a new machine, but the more economical move is to make what you have already work better. To help your computer feel new, or at least a little cleaner and less chaotic, we put together this guide with techniques and useful apps that have helped us maintain a more organized computer. I’ve been using these tips since before I first published this guide in 2021, and they’ve helped keep my 2018 MacBook Air looking and running (almost) like brand new.
How to clean your Mac’s screen and body
While plenty of manufacturers claim their cleaning products do the job best, my advice is to keep things simple. It’s also the approach Apple recommends. To start, you will need some water in a spray bottle and a clean microfiber cloth. You can use regular water from the tap but I've found distilled water works best; it’s far less likely to leave residue behind on your Mac, particularly on the display. You can buy distilled water at a grocery store or make it yourself with some simple cookware. Either way, it’s more affordable than dedicated cleaning solutions. If you don’t already own any microfiber towels, Amazon sells affordable 24-packs you can get for about $10.
One other product I would recommend is a Giottos Rocket Blower. I can’t say enough good things about this little tool. It will save you from buying expensive and wasteful cans of compressed air.
As for the actual process of cleaning your Mac, remember to start with a clean cloth (that’s part of the reason we recommend buying them in bulk). You’ll save yourself time and frustration this way. Begin by turning off your computer and unplugging it. If you bought a Rocket Blower, use it now to remove any dust. If not, take a dry microfiber cloth and go over your computer. Take special care around the keys, particularly if you own an older Mac with a butterfly keyboard.
Next, dampen one side of your cleaning cloth with water. Never spray any liquid directly on your computer. You’ll have more control this way and you’ll avoid getting any moisture into your Mac’s internals. I always clean the display first since the last thing I want to do is create more work for myself by transferring dirt from some other part of my computer to the screen.
The last step is to buff and polish your computer with the dry side of the cloth. Be gentle here as you don’t want to scratch the screen or any other part of your Mac. That’s it. Your Mac should be looking clean again.
How to organize your hard drive
One of the trickiest parts of cleaning your Mac’s hard drive is knowing where to start; most of us have apps on our computers we don’t even remember installing in the first place. Thankfully, macOS comes with a tool to help you with that exact issue.
Navigate to System Settings > General > Storage. Here you’ll find a tool that separates your storage into broad categories like "Applications," "Documents," "Music," "Photos" and so on. Either double-click on an item in the list or click the circled i icon to see the last time you used an app and how much space it’s taking up. You can delete the apps from the same window.
The applications section is particularly helpful since you can see the last time you used a program, as well as if it’s no longer supported by the operating system or if it’s outdated thanks to a more recent release.
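If you prefer a scriptable answer to the same "what's taking up space" question, a short standard-library sketch does the trick. The folder paths in the commented example are suggestions; point it anywhere:

```python
from pathlib import Path

def folder_size(path: Path) -> int:
    """Total size in bytes of all files under path."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

def largest_folders(root: Path, top: int = 10):
    """Return (size, name) pairs for the biggest subfolders of root."""
    sizes = [(folder_size(p), p.name) for p in root.iterdir() if p.is_dir()]
    return sorted(sizes, reverse=True)[:top]

# Example: see which folders in your home directory are the heaviest
# for size, name in largest_folders(Path.home()):
#     print(f"{size / 1e9:6.2f} GB  {name}")
```

It won't break usage down by category the way the Storage pane does, but it's handy when you want to drill into a specific folder the GUI tool glosses over.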
You don’t need me to tell you to uninstall programs you don’t use, but what you might not know is that there’s a better way to erase them than simply dragging them to the trash can. A free program called AppCleaner will help you track down any files and folders that would get left behind if you were just to delete an application.
After deleting any apps you don’t need, move to the Documents section. The name is somewhat misleading here since you’ll find more than just text files and Keynote presentations. "Documents" turns out to be the tool’s catch-all term for a variety of files, including ones that take up a large amount of space. You can also safely delete any DMGs (disc image files with the extension .dmg) for which you’ve installed the related app.
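Those leftover disk images are easy to round up programmatically, too. A small sketch; ~/Downloads is just a common place for installers to pile up, so swap in whatever folder you use:

```python
from pathlib import Path

def leftover_dmgs(folder: Path):
    """List disk images (.dmg) in a folder along with their sizes in bytes."""
    return [(p.name, p.stat().st_size) for p in sorted(folder.glob("*.dmg"))]

# Example: check your Downloads folder for old installers
# for name, size in leftover_dmgs(Path.home() / "Downloads"):
#     print(f"{size / 1e6:8.1f} MB  {name}")
```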
The other sections of the storage tool are self-explanatory. The only other thing I’ll mention is that if you’ve been using an iPhone for a while, there’s a good chance you’ll have old iOS backups stored on your computer. You can safely delete those, too.
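Those backups live in a fixed spot under your user Library (the standard MobileSync location Finder and, before it, iTunes use). A quick way to see how much room they're taking up:

```python
from pathlib import Path

# Standard location of local iPhone/iPad backups created by Finder (or iTunes)
BACKUP_DIR = Path.home() / "Library" / "Application Support" / "MobileSync" / "Backup"

def ios_backup_size_bytes() -> int:
    """Total bytes used by local iOS backups, or 0 if there are none."""
    if not BACKUP_DIR.is_dir():
        return 0
    return sum(f.stat().st_size for f in BACKUP_DIR.rglob("*") if f.is_file())

# print(f"iOS backups are using {ios_backup_size_bytes() / 1e9:.2f} GB")
```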
Tips and tricks for keeping a neat Desktop and Finder
Let’s start with the menu bar. It may not technically be part of the desktop, but a tidy one can go a long way toward making everything else look less cluttered. My recommendation here is to download an app called Bartender. At first glance, it’s a simple program allowing you to hide unwanted menu bar items behind a three-dots icon, but the strength of Bartender is that you get a lot of customization options. For example, you can set a trigger that will automatically move the battery status icon out from hiding when your computer isn’t connected to a power outlet.
While we’re on the subject of the menu bar, take a second to navigate to System Settings > General > Login Items & Extensions and look at all the apps that launch when you boot up your system. You can speed up your system by paring down this list to only the programs you use frequently.
When it comes to the desktop itself, less is more. Nothing will make your computer look like a cluttered mess more than a busy desktop. Folders and stacks can help, but for most people, I suspect part of the problem is they use their desktop as a way to quickly and easily find files that are important to them.
If you’ve ever struggled to find a specific file or folder on your computer, try using your Mac’s tagging capabilities instead. Start by opening the Finder Settings menu (Command + ,) and click the Tags tab. You can use the default ones provided by macOS or make your own. Drag the ones you think you’ll use most often to the favorites area at the bottom of the Settings window so that they’re easily accessible when you want to use them. To append a tag to a file or folder, click on it while holding the Control key and select the one you want from the dropdown menu. You can also tag a file while working on it within an app. Keep in mind you can apply multiple tags to a single file or folder, and you can even apply them to applications.
What makes tags so useful in macOS is that they can appear in the sidebar of the Finder window, and are easily searchable either directly with Finder or using Siri. As long as you have a system for organizing your files, even a simple one, you’ll find it easier to keep track of them. As one example, I like to apply an Engadget tag to any files related to my work. I’ll add an “Important” tag if it’s something that’s critical and I want to find quickly.
One tool that can help supercharge your Finder experience is Alfred. It’s effectively a more powerful version of Apple’s Spotlight feature. Among other things, you can use Alfred to find and launch apps quickly. There’s a bit of a learning curve, but once you get the hang of it, Alfred will change how you use your Mac for the better.
How to organize your windows and tabs
If you’ve used both macOS and Windows 10, you’ll know that Apple’s operating system doesn’t come with the best window management tools. You can click and hold on the green full-screen button to tile a window to either the left or right side of your screen, but that’s about it, and the feature has always felt less precise than its Windows counterpart.
My suggestion is to download an app that replicates Windows 10’s snapping feature. You have several competing options that more or less offer the same functionality. My go-to is a $5 program called Magnet. If you want a free alternative, check out Rectangle. Another option is BetterSnapTool, which offers more functionality than Magnet but doesn’t have as clean of an interface. All three apps give you far more ways to configure your windows than what you get through the built-in tool in macOS. They also come with shortcut support, which means you can quickly set up your windows and get to work.
This article originally appeared on Engadget at https://www.engadget.com/computing/how-to-clean-your-mac-macbook-cleaning-supplies-digital-organization-153007592.html?src=rss
With Claude enjoying a moment of newfound popularity among regular people, Anthropic is previewing an update designed to make its chatbot better at explaining some concepts. Starting today, Claude can generate charts and diagrams as part of its responses, either when asked directly or when it decides visuals might be helpful to the user.
For example, try asking Claude for the best way to fold a paper plane. Where previously it was limited to text, it can now show you step by step how to fold a Nakamura lock plane. Anthropic is quick to point out that what it's introducing today isn't image generation. When producing visual aids, Claude writes HTML and SVG (an XML-based vector graphics format). Anthropic likens it to giving Claude access to its own whiteboard.
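To give a sense of what "drawing with code" means in practice, here's a purely illustrative sketch of how a simple step-by-step diagram can be assembled as an SVG string. This is not Anthropic's actual output format, just the general technique the whiteboard analogy implies:

```python
# Illustrative only: building a simple flow diagram as SVG markup,
# the way a chatbot with a code-based "whiteboard" might.
def flow_svg(steps):
    """Render a list of step labels as outlined boxes in a minimal SVG."""
    parts = []
    for i, label in enumerate(steps):
        x = 20 + i * 150
        parts.append(f'<rect x="{x}" y="20" width="130" height="40" '
                     'fill="none" stroke="black"/>')
        parts.append(f'<text x="{x + 10}" y="45">{label}</text>')
    width = 20 + len(steps) * 150
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="80">{"".join(parts)}</svg>')

print(flow_svg(["Fold in half", "Crease corners", "Form the lock"]))
```

Because the output is markup rather than pixels, it stays crisp at any size and the model can revise individual elements, which is part of the appeal of the approach over raster image generation.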
The new feature is available to all Claude users, regardless of whether you pay for one of Anthropic's subscriptions. However, the company does warn it's releasing beta software, so expect some quirks along the way. The feature also isn’t available on mobile just yet. This release comes just days after OpenAI made ChatGPT capable of generating interactive visuals when explaining science and math concepts.
This article originally appeared on Engadget at https://www.engadget.com/ai/claude-can-now-generate-charts-and-diagrams-160000369.html?src=rss
In recent weeks, Google has been busy adding AI features to all of its most popular apps. Following Gmail and Chrome, Maps is now the latest service to get a Gemini makeover, with a redesign of the driving experience headlining the update.
Google is billing the new "Immersive Navigation" mode as the most significant update to driving directions in Maps in about a decade. Now instead of displaying a 2D map of the area around your car, Maps will render the surroundings in 3D. Google believes this transformation will make it easier for drivers to orient themselves, with the new view giving greater depth to nearby landmarks like buildings and overpasses.
Behind the scenes, the company's Gemini models power the experience, deciding how to render elements to remove distractions. Google says its models, which pull information from its Street View database and aerial photos, are also smart enough to know when to highlight road elements like crosswalks, traffic lights and stop signs so you don't miss an off-ramp or important turn. At the same time, Google has made the voice guidance in Maps sound more natural. For instance, when you're driving along the highway, looking for where you need to get off, the voice assistant will say something along the lines of "go past this exit and take the next one." I imagine this will be especially helpful when driving in a foreign country with unfamiliar road names.
The new intelligence Google has built into the redesigned navigation experience extends to alternative routes. Now, when the app suggests taking a different way of getting somewhere, it will detail the tradeoffs associated with that route. For example, it might tell you a route will take longer to travel but that you'll encounter less traffic along the way. Before you start your journey, Maps will now also provide a Street View preview of your destination and recommend where to park.
This being a new release in Google's self-proclaimed Gemini era, the company has naturally found a way to add its chatbot to Maps. Inside the app, you'll find a new icon labelled Ask Maps. Tap the icon, write a natural language prompt and Gemini will use all the information contained within Maps to craft a response.
Google is pitching the feature as a way to get information no traditional map can provide. For example, you could ask Gemini to find you a place where you can charge your phone and grab a cup of coffee, all without having to wait a long time in line. Google suggests finding the answer to a specific question like that would have previously required sifting through countless reviews. Not so anymore. The results Gemini produces through Ask Maps will be personalized based on places you've searched for and saved in the past. You can also act on any recommendations Gemini surfaces, making it easy to book restaurants, save locations and more.
Google is starting to roll out the new immersive driving experience today in the US, with availability to expand over the coming months to Android and iOS devices, as well as CarPlay, Android Auto and cars with Google built-in. Ask Maps, meanwhile, is rolling out to Android and iOS devices in the US and India, with desktop support coming soon.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-maps-brings-a-3d-map-to-your-driving-directions-123000843.html?src=rss
Microsoft plans to begin shipping early units of its next generation console, codenamed Project Helix, to game studios starting sometime next year. “We're sending alpha versions of Project Helix to developers starting in 2027,” said Jason Ronald, vice president of next generation for Xbox, according to IGN, which was present at the company’s GDC 2026 presentation where it shared early details about the new device. Ronald did not clarify what he meant by “alpha version,” but given the keynote’s developer focus, presumably he meant devkits, which studios could use to start creating games for the new console.
Additionally, Ronald reiterated that the new system would be capable of playing both Xbox console games and PC games, and said it would incorporate a custom AMD-made system-on-a-chip capable of rendering graphics with path tracing. Judging from a slide the company shared, Microsoft and AMD are working on many of the same technologies and capabilities AMD is co-designing with Sony for the next PlayStation console. For instance, Ronald said Helix would be capable of ray regeneration, a technique designed to produce better-looking ray-traced effects. The new console will also offer multi-frame generation and machine learning-based upscaling.
“It delivers an order of magnitude leap in ray tracing performance and capability, integrates intelligence directly into the graphics and compute pipeline, and drives meaningful gains in efficiency, scale, and visual ambition. The result is more realistic, immersive, and dynamic worlds for players,” Ronald wrote in a blog post published after his presentation.
Ronald didn’t speak to any specific compute numbers, likely because Microsoft has yet to finalize the Helix hardware. We should learn more of those details as we get closer to 2027.
This article originally appeared on Engadget at https://www.engadget.com/gaming/xbox/microsoft-will-start-providing-game-studios-with-project-helix-consoles-in-2027-180352458.html?src=rss
When you think of an AI-forward PC, you might think of something like NVIDIA's $3,999 DGX Spark — a computer with enough computing power to run complex large language models locally. That's not what Rabbit is trying to build with Project Cyberdeck. Instead, the company's goal is to produce a device tailored for vibe coding, and Engadget was given an exclusive first look at the upcoming PC.
Rabbit began working on Project Cyberdeck after the company's CEO, Jesse Lyu, saw how much his software engineers were using Claude Code. Lyu thought a small form factor PC, like the netbooks that were popular in the late aughts, with a command line interface would be ideal for on-the-go vibe coding, but when he went online to look for something that fit the bill, he was disappointed.
"They all come with shitty rubber dome keyboards," Lyu says of low-cost PCs like the latest Chromebooks, which use flexible silicone sheets under their keys to save on space and cost. "They're not something you would enjoy typing on for an extended period of time." So Rabbit decided to build its own device. For inspiration, Lyu and company looked to an unlikely source: the Sony Vaio P.
Sony's netbook was only briefly available from the start of 2009 to about the end of 2010. At the time, the 8-inch Vaio P was the world's lightest netbook, weighing just 1.4 pounds, but it had a host of issues. It was also expensive, costing considerably more than other Intel Atom notebooks of the time. In 2009, the most affordable Vaio P would set you back $900 (about $1,365 adjusted for inflation). With Project Cyberdeck, Rabbit is aiming for a device that costs about $500, and hopefully avoids a similar fate.
I saw a few early renders of Project Cyberdeck, which Rabbit isn't ready to share publicly yet. Imagine a cross between the Rabbit R1, Vaio P and the original Nintendo DS. It looks cute. All the renders had four USB-C ports to allow users to connect the device to external monitors and peripherals, though the actual IO specs are as-yet undecided.
The company is in the process of sourcing components and working towards a final design, so details can — and will — change. I saw some of the parts Lyu has been testing in his office, but no final prototype as such.
For one, Rabbit still needs to decide on a chipset. The company is aiming for performance comparable to the Raspberry Pi 5, which has a Broadcom BCM2712 quad-core Arm Cortex-A76 processor clocked at 2.4GHz. With 16GB of RAM, the Raspberry Pi 5 can run two external monitors, a capability Rabbit hopes to match with the Cyberdeck. The idea here is to make a device that's powerful enough that it won't feel slow when it's communicating with Anthropic and OpenAI's servers, but affordable enough to make it a no-brainer purchase for developers.
The company confirmed Project Cyberdeck will run Linux. Rabbit will allow users to modify the operating system and install any third-party tools they want. Additionally, all the software features the company has developed for RabbitOS will be available through command-line prompts.
Two parts of the device that Lyu hopes will set it apart are the keyboard and the screen. Lyu appears set on shipping a computer with a 40 percent keyboard that has low-profile mechanical switches and a fully hot-swappable PCB, so users can tweak the typing feel to their liking. He also had a sample 7-inch OLED screen on his desk when I spoke to him. That specific panel offers touch input, a 165Hz refresh rate and 815 nits of brightness. While it might not be the one Rabbit settles on, OLED is the goal because of what it would mean for battery life.
For the uninitiated, OLED panels produce black values by turning off individual diodes, and since each diode is self-emitting, there's no need for a power-hungry backlight. Like every smartphone manufacturer, Rabbit is taking advantage of this by planning to offer a dark mode interface from day one.
One aspect of the Cyberdeck's design Lyu can't definitively speak to is how much RAM it will feature. The entire industry is dealing with datacenter demand for high-bandwidth memory that has sent the price of computers, smartphones and other electronics soaring. Lyu believes Rabbit won't be forced to delay the Cyberdeck out of 2026, but he didn't rule out the possibility either. If things change for the better, he's confident Rabbit would be able to take advantage, since it took the company about 93 days to ship the first R1 device after it began working on the design.
Separately, I wonder whether people will want to carry around a second device solely for their coding needs. You don't need a dedicated machine to access Claude Code or Cursor. Even companies like Apple have begun integrating vibe coding services into their development environments. Rabbit could be on track for a repeat of the R1, but with so many details of the Cyberdeck left undecided, for now, it's too early to know for sure. The company will get to make its case when it shares more details in the coming weeks and months.
This article originally appeared on Engadget at https://www.engadget.com/ai/rabbits-cyberdeck-is-a-modern-take-on-a-netbook-151907273.html?src=rss
At the start of the year, Google brought a host of new Gemini-powered features, including built-in Nano Banana image generation, to Chrome. After debuting in the United States, those features are now making their way to Chrome users in Canada, India and New Zealand, with support for 50 additional languages in tow. Among the new languages Gemini in Chrome can now converse in are French, Gujarati, Hindi and Spanish.
To try out Gemini in Chrome, tap the sparkle icon at the top right of the interface. This will open the sidebar interface Google introduced in January. From there, you can chat with the company's Gemini chatbot without the need to switch tabs. From the sidebar, you can also access Google's in-house image generator. Additionally, Gemini in Chrome offers integrations with Gmail, Maps, Calendar, YouTube and other Google apps. If you live outside Canada, India or New Zealand, Google says it will make Gemini in Chrome available in more countries and languages throughout the rest of 2026. Oh, and if you don’t want to use Gemini in Chrome, you can right-click on the sparkle icon and select unpin to never see it again.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-starts-rolling-out-gemini-in-chrome-to-users-in-canada-india-and-new-zealand-023000528.html?src=rss