OpenAI brings ChatGPT’s Voice mode to CarPlay

In a surprise release, OpenAI has made ChatGPT's Voice mode available through Apple CarPlay. If you're running the latest version of both iOS and the ChatGPT app, and own a CarPlay-compatible vehicle, you can check out the experience. To get started, download all the necessary software, connect your iPhone to CarPlay and select "New voice chat" from ChatGPT. When the in-app text indicates ChatGPT is "listening," you can start a conversation.         

There are some notable limitations to using ChatGPT Voice with CarPlay. For one, OpenAI's chatbot can't control car functions. If you want to adjust the cabin temperature or skip tracks, you'll still need Siri for those tasks. Due to Apple's restrictions, you also can't invoke ChatGPT with a wake word like you can with Siri. Instead, to resume a previous conversation, you need to open the ChatGPT app from CarPlay and tap a recent or pinned chat.

With those limitations in mind, OpenAI suggests you can use Voice mode to get how-to advice, brainstorm ideas and practice languages. Personally, I like to listen to podcasts and music when I'm driving, but if talking with ChatGPT is your thing, you do you.    

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-brings-chatgpts-voice-mode-to-carplay-191422297.html?src=rss

Google releases Gemma 4, a family of open models built off of Gemini 3

When Google released Gemini 3 Pro at the end of last year, it was a significant step forward for the company's proprietary large language models. Now, the company is bringing some of the same technology and research that made those models possible to the open source community with the release of its new family of Gemma 4 open-weight models.

Google is offering four different versions of Gemma 4, differentiated by the number of parameters on offer. For edge devices, including smartphones, the company has the 2-billion and 4-billion "Effective" models. For more powerful machines, there's the 26-billion "Mixture of Experts" and 31-billion "Dense" systems. For the unfamiliar, parameters are the settings a large language model can tweak to generate an output. Typically, models with more parameters will deliver better answers than ones with fewer, but running them also requires more powerful hardware.
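For a rough sense of why bigger models demand beefier hardware, you can estimate how much memory the weights alone require. This is back-of-envelope math only; it ignores activations, context caching and runtime overhead, and the bytes-per-parameter figures are generic assumptions, not anything Google has published about Gemma 4:

```python
# Rough memory needed just to hold a model's weights in RAM.
def weights_gib(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GiB for a given parameter count."""
    return params_billions * 1e9 * bytes_per_param / 2**30

for params in (2, 4, 26, 31):
    # 2 bytes/param = 16-bit weights; 0.5 = aggressive 4-bit quantization
    fp16 = weights_gib(params, 2)
    q4 = weights_gib(params, 0.5)
    print(f"{params}B params: ~{fp16:.1f} GiB at 16-bit, ~{q4:.1f} GiB at 4-bit")
```

Even heavily quantized, the 26-billion and 31-billion models want several gigabytes of memory before they generate a single token, which is why the 2-billion and 4-billion variants are the ones pitched at phones.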

With Gemma 4, Google claims it's managed to engineer systems with "an unprecedented level of intelligence-per-parameter." To back up this claim, the company points to the performance of Gemma 4's 31-billion and 26-billion variants, which claimed the third and sixth spots respectively on Arena AI's text leaderboard, beating out models 20 times their size.     

All of the models can process video and images, making them ideal for tasks like optical character recognition. The two smaller models are also capable of processing audio inputs and understanding speech. Separately, Google says the Gemma 4 family can generate code offline, meaning you could use the models for vibe coding without an internet connection. Google has also trained the models in more than 140 languages.

Google is releasing the Gemma 4 family under an Apache 2.0 license, whereas previous Gemma models were available under the company's own Gemma license. The move gives people considerably more freedom to adapt the new systems to their needs.

"This open-source license provides a foundation for complete developer flexibility and digital sovereignty; granting you complete control over your data, infrastructure and models," Google said. "It allows you to build freely and deploy securely across any environment, whether on-premises or in the cloud."

If you want to give one of the systems a try for yourself, the model weights are available through Hugging Face, Kaggle and Ollama. 

This article originally appeared on Engadget at https://www.engadget.com/ai/google-releases-gemma-4-a-family-of-open-models-built-off-of-gemini-3-160000332.html?src=rss

Claude Code leak suggests Anthropic is working on a ‘Proactive’ mode for its coding tool

What should have been a routine release has revealed some of the features Anthropic has been working on for Claude Code. As reported by Ars Technica, The Verge and others, after the company released Claude Code's 2.1.88 update on Tuesday, users found it contained a file that exposed the app's source code. Before Anthropic took action to plug the leak, the codebase was uploaded to a public GitHub repository, where it was subsequently copied more than 50,000 times. All told, the entire internet (and Anthropic's competitors) got a chance to examine more than 512,000 lines of code and 2,000 TypeScript files. 

In the aftermath, some people claim to have found evidence of upcoming features Anthropic is working to develop. Over on X, Alex Finn, the founder of AI startup Creator Buddy, says he found a flag for a feature called Proactive mode that will see Claude Code work even when the user hasn't prompted it to do something. Finn claims he also found evidence of a crypto-based payment system that could potentially allow AI agents to make autonomous payments. In a Reddit post spotted by The Verge, another person found evidence that Anthropic might have been working on a Tamagotchi-like virtual companion that "reacts to your coding" as a kind of April Fools' joke.

"A Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson told BleepingComputer. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

As with any other leak, it's worth remembering plans can and often do change. Just because a company has written the code to support a feature doesn't mean it will eventually ship said feature. 

This article originally appeared on Engadget at https://www.engadget.com/ai/claude-code-leak-suggests-anthropic-is-working-on-a-proactive-mode-for-its-coding-tool-150107049.html?src=rss

The RAM crisis is Apple’s best chance in decades to capture the PC market

In the current RAM crisis, no company is better positioned than Apple to not only weather the storm but turn it to its advantage. It proved that when it released the MacBook Neo in early March. Despite including just 8GB of RAM, the Neo doesn't feel compromised, a testament to the company's silicon and software engineering. For Apple, it may be tempting to treat its latest MacBook as a one-off. That would be a mistake, because at this moment, the business decisions that made the Neo possible represent a once-in-a-generation opportunity to become a bigger player in the PC market.

If you read Engadget, there's a good chance you know the contours of the global memory shortage, but it's worth repeating just how bad things have become in recent months. Just three companies — SK Hynix, Samsung and Micron — produce more than 90 percent of the world's memory chips. At the end of last year, Micron announced it would end its consumer-facing business to focus on providing RAM and other components to AI customers.  

Citing data from TrendForce, The Wall Street Journal reported in January that data centers would consume 70 percent of the high-end memory produced in 2026. As the Big Three shift more of their production to meet enterprise demand, they're allocating fewer wafers for consumer products, leading to dramatic price increases in that market segment. According to data from Counterpoint Research, the price of memory — including consumer RAM kits and SSDs, as well as LPDDR5X memory for smartphones — increased by 50 percent during the final quarter of 2025. Before the end of the current quarter, the firm predicts prices will increase by another 40 to 50 percent, and the CEO of SK Hynix recently warned shortages could last until 2030.

Since nearly all consumer electronics need some amount of RAM and storage, the trickle-down effects have come fast and hard. In December, before the situation got as bad as it is now, TrendForce warned that most of the major PC manufacturers were either considering, if not already planning, price hikes. This month, the firm warned laptop prices could increase by as much as 40 percent if manufacturers and retailers moved to protect their margins. Such a scenario would send the cost of a $900 model to about $1,260.
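The arithmetic behind that worst case is simple, and worth spelling out because a 40 percent hike compounds on top of any earlier increases a retailer has already passed along:

```python
# TrendForce's worst-case scenario: a 40 percent laptop price hike.
base_price = 900   # dollars
hike_pct = 40      # percent

# Integer math keeps the result exact: $900 becomes $1,260.
new_price = base_price * (100 + hike_pct) // 100
print(f"A ${base_price} laptop becomes ${new_price}")
```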

Amid all that, Apple added another point of pressure: the $600 MacBook Neo. During a recent investor call, Nick Wu, the chief financial officer of ASUS, described the Neo as "a shock to the entire market," adding "all PC vendors, including upstream vendors like Microsoft, Intel and AMD" are taking the cute device "very seriously." Wu warned ASUS would "need more time" before it could ready a response.         

For ASUS and other Windows manufacturers, any response realistically may take a year or more to formulate. That's because the Neo represents both a technical and logistical hurdle. 

To start, it's a fundamentally different machine from the one most Windows OEMs are making right now. It uses "unified memory" instead of a set of traditional RAM modules: the Neo's 8GB is shared between the A18 Pro's CPU and GPU, allowing the system to use what memory it has more efficiently. That's part of the reason the Neo doesn't feel like a Windows PC with 8GB of RAM. And Apple didn't get to the A18 Pro and the MacBook Neo by accident; it has spent more than a decade designing its own chips.

Since 2024, Microsoft has mandated 16GB of RAM — and 256GB of solid-state storage — for PCs that are part of its Copilot+ AI program. That branding effort may not have amounted to much, with Copilot+ AI PCs accounting for just 1.9 percent of all computers sold in the first quarter of 2025, but it did push OEMs, including ASUS, Dell and others, to make more capable machines. It also saw Microsoft rework Windows to better support ARM-based processors from Qualcomm. Still, it's hard to see how Windows manufacturers can challenge Apple by going back to existing or older x86 chips with less RAM.

Qualcomm's Snapdragon X2 processors could offer a potential response, but there are question marks there too. At CES 2026, the company announced the Snapdragon X2 Plus, a pared-down version of its X2 Elite chipset with a six-core CPU. On paper, it should offer similar performance to the A18 Pro, but it doesn't seem Qualcomm has produced the chip at scale or that Windows OEMs have shown much interest in it. As of the writing of this story, the company's website lists just four X2 Plus-equipped models. I was only able to find one of those in stock, the $1,050 HP Omnibook 5. It has an OLED screen and more RAM than the Neo. Could HP repurpose something like the Omnibook 5 to take on the Neo? Maybe, but I'm not sure there's any getting around the need for 16GB to get Windows 11 running decently.

Even if the Snapdragon X2 Plus offers a stopgap, no company operates a supply chain quite like Apple. It has spent billions of dollars to make itself independent of companies like Qualcomm, designing its own Wi-Fi and Bluetooth chips, for example. Nor does it pay Microsoft a licensing fee for a bloated Windows 11. Those are exactly the kinds of costs that leave OEMs like ASUS and Lenovo operating on razor-thin margins.

Per Statista, Apple earned a nearly 36.8 percent gross profit margin on its products in 2025, almost exactly half the gross margin it made on services, which grew to a record 75.4 percent last year. For comparison, ASUS has seen its profit margins erode to about 15.3 percent in recent quarters, less than a third of Apple's company-wide 2025 average of 46.9 percent. For ASUS and other Windows OEMs, the short-term outlook isn't good. HP recently told investors RAM now accounts for more than a third of the cost of its PCs. And if memory shortages continue, many of them will be forced to raise their prices to protect their margins.

Apple is in no such position. The iPhone recently had its best quarter ever, contributing $85.27 billion to the company's Q1 revenue. The fact that Mac revenue declined from $8.9 billion to $8.3 billion year-over-year didn't make a dent in Apple's bottom line. For the companies that must now compete against the Neo, it's not a level playing field. For Lenovo, Dell, HP and ASUS, PC sales are nearly their entire business. For Apple, they're a side hustle.

As the company prepares to kick off its 51st year, it should consider it may never be in a better position to claw ahead in the market where it all started for the company. In both the PC and smartphone segments, Apple's market share has always been a distant second (and sometimes third and fourth) to Windows and Android, in part because commoditization has consistently worked against the company. But when a single part now accounts for a third of the cost of a new PC, the regular rules don't apply.

It's not just that the company is better insulated than nearly every other player against runaway RAM costs, it's that it also has a technological edge and the profit margins to compete on price at the same time. In recent quarters, the company's share of the PC market has hovered around the 9 to 10 percent mark, meaning it's consistently been about the fourth largest manufacturer. 

For as long as the RAM shortage continues, Apple should seriously consider sacrificing some of its PC profits to become a bigger player. So far, the company has moved to protect the margins on its more expensive devices. For example, it increased the price of the latest MacBook Air and MacBook Pro by $100. The company doubled the amount of base storage to make up for the hike. 

Moving forward, it should do everything it can to maintain, and maybe even lower the price of its computers to a point where its competitors can't meet it. If the Lenovos and HPs of the world can't compete on either price or performance, consumers will move to Mac computers. As Apple looks to the next 50 years, it may not get another opportunity like the one it has right now. 

This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/the-ram-crisis-is-apples-best-chance-in-decades-to-capture-the-pc-market-130000672.html?src=rss

This Frankenstein PlayStation PCB reads games from microSD and outputs video over HDMI

We're living in the golden age of retro console modding. If you have an old Game Boy Advance lying around, it's possible to give it a new lease on life with aftermarket parts like an IPS display and USB-C charging. But as amazing as those mods are, most still require an original GBA motherboard with a working processor and RAM. That's what makes the PlayStation Hybrid from YouTuber Secret Hobbyist so cool. Over the past couple of months, they've been working to design, prototype and build the ultimate PlayStation PCB, one that incorporates the best parts of different model revisions while adding a couple of modern conveniences. 

The specific motherboards Secret Hobbyist's PCB pulls parts from are the PM-41 v2 and the PU18, with the former being a PSOne board while the latter was sourced from a "phat" model. The decision to incorporate parts from different PlayStation variants makes a lot of sense if you know something about the history of the console. Between the release of the PlayStation in 1994 and the smaller PSOne in 2000, Sony made multiple revisions to the original design to address hardware issues and eke out cost savings. 

One component that you can find on older models, but not the PSOne, is an Asahi Kasei-made audio digital-to-analog converter (DAC). Over the years, this DAC has gained something of a cult following among audiophiles, with some of the earliest models like the SCPH-1000 and SCPH-3000 being particularly sought after as CD players because they also came with RCA outputs, a feature Sony later cut from subsequent revisions. As for the PU18, it has a part that makes it compatible with the X Station, a disc-drive replacement that allows a modded PlayStation to read games from a microSD card.

From the PSOne, Secret Hobbyist sourced the console's GPU and CPU, which are more power efficient than the ones found on its older siblings. Lastly, they incorporated an FPGA chip from a Hispeedido mod kit to make their hybrid PlayStation capable of outputting video over HDMI.

The final result is a custom PCB that is even smaller than the PSOne's PM-41 v2, draws less than two watts of power and works with modern displays. That power draw means the PlayStation Hybrid could be engineered to be a handheld. Secret Hobbyist has yet to design an enclosure for their new Frankenstein console, but judging from the comments on their video, people are excited to see the final result. In the meantime, be sure to watch the full video to learn more about the project and see some incredible soldering work.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/this-frankenstein-playstation-pcb-reads-games-from-microsd-and-outputs-video-over-hdmi-211002114.html?src=rss

Google begins rolling out Search Live globally

Following a false start last week, Google has begun rolling out Search Live globally. The tool allows you to point your phone's camera at an object or scene and ask questions about what you see in front of you. With today's expansion, Google is making Search Live available in every location and language where it offers its AI Mode chatbot. With that, people in more than 200 countries and territories can use Search Live to get answers to their questions. 

Behind the expansion is Google's Gemini 3.1 Flash Live model. According to the company, the new AI system was designed to be natively multilingual and capable of more natural conversations. It should also be faster and more reliable.

Separately from Search Live, Google is bringing Live Translate to iOS. Live Translate, if you need a reminder, allows you to put on a pair of headphones and get a real-time translation of what another person is saying. With today's announcement, Google is also bringing the feature to more countries, including Germany, Italy, Spain, Japan and the UK, across both Android and iOS. All told, Live Translate can now understand more than 70 languages and work with any set of headphones. Neat.


This article originally appeared on Engadget at https://www.engadget.com/ai/google-begins-rolling-out-search-live-globally-180938407.html?src=rss

How to use Apple’s Playlist Playground to make AI-generated mixes

With the release of iOS 26.4, Apple Music's Playlist Playground can now generate playlists with the help of AI. Best of all, you don't need an Apple Intelligence-capable iPhone to take advantage of the new feature. As long as you're a US Apple Music subscriber with your language set to English, you can start using Playlist Playground right now. Here's how to get started. 

A pair of screenshots showing off Apple Music's new Playlist Playground feature.
Igor Bonifacic for Engadget

For the time being, there are two ways to access Playlist Playground. Apple is highlighting the feature within the "Top Picks for You" section of Apple Music's Home tab. If you don't see a shortcut there, the feature is also integrated into the app's existing playlist creation tool; just tap the new icon found in the Library tab. If you're new to Apple Music, the flow looks like this:

  1. Open Apple Music. 

  2. Navigate to the "Library" tab.

  3. Tap the playlist creation button.

  4. Write a prompt describing the mood or style of music you want to hear. 

To help people get started, Apple provides a selection of sample prompts. One pro tip: it's possible to use metadata in conjunction with Playlist Playground. For example, after Apple Music generates a playlist, you can tell Apple's model to edit it by removing any songs released before 2016. Of course, you're also free to add and remove songs manually as you please. 
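Conceptually, that kind of metadata edit is just a filter applied over each track's release year. This toy sketch is purely illustrative; the track data is made up and Apple hasn't described how its model performs the edit internally:

```python
# A playlist as a list of tracks with metadata (hypothetical data).
playlist = [
    {"title": "Song A", "year": 2014},
    {"title": "Song B", "year": 2019},
    {"title": "Song C", "year": 2022},
]

# "Remove any songs released before 2016" amounts to this filter.
filtered = [track for track in playlist if track["year"] >= 2016]
print([track["title"] for track in filtered])  # ['Song B', 'Song C']
```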

Once you're happy with your new playlist, Apple Music treats Playlist Playground mixes like any other playlist: you can save them to your Library, download them for offline playback, play them from your Apple Watch, and share them with friends and invite them to add songs.

As of the writing of this article, Playlist Playground is a beta release available only to Apple Music subscribers in the US with their preferred language set to English. An iPhone or iPad running iOS 26.4, or an Apple Vision Pro headset running visionOS 26.4, is also required.

As Apple releases the feature in more countries and languages, we'll update this article. 

And if you use Apple Music on Android, Playlist Playground is available there too.

When generating mixes, Playlist Playground pulls from both trending data and your personal listening history. Along with other AI-powered Apple Music features like AutoMix and Lyrics Translation, Playlist Playground runs as part of the Apple Music service. That’s one of the reasons Apple can offer it outside of Apple Intelligence-capable devices. 


This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/how-to-use-apples-playlist-playground-to-make-ai-generated-mixes-134500610.html?src=rss

Anthropic releases safer Claude Code ‘auto mode’ to avoid mass file deletions and other AI snafus

Anthropic has begun previewing "auto mode" inside Claude Code. The company describes the new feature as a middle path between the app's default behavior, which sees Claude request approval for every file write and bash command, and the "--dangerously-skip-permissions" flag some coders use to make the chatbot function more autonomously.

With auto mode enabled, a classifier system guides Claude, giving it permission to carry out actions it deems safe, while redirecting the chatbot to take a different approach when it determines Claude might do something risky. In designing the system, Anthropic's goal was to reduce the likelihood of Claude carrying out mass file deletions, extracting sensitive data or executing malicious code. 

Of course, no system is perfect, and Anthropic warns as such. "The classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk," the company writes. 
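Anthropic hasn't published how its classifier works, and it is almost certainly a trained model rather than a rule list. Purely to illustrate the shape of an allow-or-redirect gate like the one described above, here's a toy sketch; every pattern and name in it is an assumption for illustration:

```python
import re

# Toy patterns for the risk categories mentioned above (illustrative only).
RISKY_PATTERNS = [
    r"\brm\s+-rf\b",          # mass file deletion
    r"\bcurl\b.*\|\s*sh\b",   # piping remote code into a shell
    r"\.env\b",               # reading files that often hold secrets
]

def classify(command: str) -> str:
    """Return 'allow' for actions deemed safe, 'redirect' otherwise.

    'redirect' mirrors the behavior described above: rather than simply
    blocking, the agent is steered toward a different approach.
    """
    for pattern in RISKY_PATTERNS:
        if re.search(pattern, command):
            return "redirect"
    return "allow"

print(classify("ls -la src/"))    # allow
print(classify("rm -rf build/"))  # redirect
print(classify("cat .env"))       # redirect
```

A regex list like this also shows why Anthropic's caveat matters: a gate can only flag what it recognizes, so ambiguous or novel commands slip through to the "allow" path.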

Anthropic doesn't mention a specific incident as inspiration for auto mode, but the recent 13-hour AWS outage Amazon suffered after one of the company's AI tools reportedly deleted a hosting environment was probably front of mind. Amazon blamed that incident on human error, saying the staffer involved had "broader permissions than expected."

Team plan users can preview auto mode starting today, with the feature set to roll out to Enterprise and API users in the coming days.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-releases-safer-claude-code-auto-mode-to-avoid-mass-file-deletions-and-other-ai-snafus-142500615.html?src=rss

OpenAI is shutting down its Sora video generation app

OpenAI is shutting down its Sora video generation app. "We're saying goodbye to Sora," the company wrote in an X post published Tuesday afternoon. OpenAI has yet to say when the app and its related API service will become unavailable, promising instead to share those details at a later date.

"We've decided to discontinue Sora in the consumer app and API. As we focus and compute demand grows, the Sora research team continues to focus on world simulation research to advance robotics that will help people solve real-world, physical tasks," an OpenAI spokesperson told Engadget.  

While today's news might come as a surprise to some, there have been warning signs Sora was heading in this direction since the start of the year. Though Sora hit the top of the US App Store charts shortly after its debut, interest in the platform appears to have fizzled out quickly thereafter. At the start of 2026, data from analytics firm Appfigures suggested the app was seeing successive month-over-month declines in both new installs and user spending. In December alone, a time of year when most apps typically flourish, Sora reportedly saw a 32 percent decline in new downloads from November.

The shutdown also aligns with OpenAI's recent shift in strategy. Since the release of GPT-5.2, the company's "code red" response to Google's Gemini 3 Pro model, OpenAI has tried to court professionals like coders and data analysts with systems that excel in those domains, seeing enterprise customers as a route toward profitability. However, today’s shutdown does appear to come with an additional cost for OpenAI. According to The Hollywood Reporter, Disney is exiting the deal it signed with the AI lab at the end of last year, and won’t, as a result, invest $1 billion into it.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-is-shutting-down-its-sora-video-generation-app-211023358.html?src=rss

Microsoft will yank Copilot from some Windows apps and let you move the taskbar again

After one too many of you threatened to switch to Linux, Microsoft has published a long list of changes it plans to make to Windows 11. In a lengthy blog post titled "Our commitment to Windows quality," Pavan Davuluri, the executive vice president of Windows and Devices, said the company has spent a "great deal" of time in recent months reading feedback from users. "What came through was the voice of people who care deeply about Windows and want it to be better," he said. To that end, Windows Insiders can expect to see some of the changes Microsoft plans in response to all that criticism begin rolling out this month.

Most notably, Microsoft will ease up on the AI pedal. "You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well-crafted," writes Davuluri. As a first step, Microsoft says it will remove "unnecessary Copilot entry points," starting with apps like the Snipping Tool, Photos, Widgets and Notepad.

Elsewhere, users can look forward to additional taskbar customization, allowing them to position the interface element at the top or sides of the screen; less disruptive updates, with the option to shut down or restart your device without being forced to install a new patch; and a faster, less janky File Explorer. "Our first round of improvements will focus on a quicker launch experience, reduced flicker, smoother navigation and more reliable performance for everyday file tasks," said Davuluri.  

Looking beyond the next two months, Microsoft notes it will work to improve performance across Windows, with “lowering the baseline memory footprint” of the operating system a key area of focus. Presumably, this plan of action is as much a response to the global memory shortage as it is user feedback. PC manufacturers are struggling right now, with a recent estimate warning the market could shrink as much as 8.9 percent year-over-year in 2026 due to the high cost of RAM and SSDs. On the subject of reliability, the company says reducing OS-level crashes and releasing higher quality drivers is a priority, as is making Bluetooth and USB connections less prone to errors and disconnects.

Microsoft's promise to fix Windows 11 is long overdue. In January, the company released a couple of emergency updates after what should have been a routine security patch caused bugs that left some PCs unable to shut down and broke Outlook. The general state of the operating system has led many to explore Linux alternatives like Bazzite. With Apple also recently releasing the $600 MacBook Neo, a laptop that few Windows manufacturers can match right now, Microsoft’s dominance in the PC market is looking vulnerable for the first time in more than a decade.

This article originally appeared on Engadget at https://www.engadget.com/computing/microsoft-will-yank-copilot-from-some-windows-apps-and-let-you-move-the-taskbar-again-202857203.html?src=rss