Earlier this year, Google announced it would shut down its standalone podcast app in 2024. Since then, the company has started moving podcasts into YouTube and its companion app YouTube Music. As a way to ease the transition, Google will be rolling out a migration tool for its current podcast app users. With the tool, users in the US will be able to move their favorite pod subscriptions from Google Podcasts to YouTube Music, or export them for use in other podcast apps.
In the coming weeks, the migration tool will be available through a banner in Google Podcasts. There are step-by-step instructions on how to use the migration tool in Google's Help Center. The entire process is just four steps and you’ll need to have both Google Podcasts and YouTube Music installed on your device to complete the transfer. After the transfer, Google notes it may take a few minutes for everything to show up in your YouTube Music library.
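If you go the export route, the file you download is typically OPML, the standard format podcast apps use to trade subscription lists. As a rough illustration (assuming a standard OPML export where each feed is an outline element with an xmlUrl attribute, and with a placeholder filename), here's a short Python sketch that lists the feeds inside such a file:

```python
# List podcast feeds in an exported OPML subscription file.
# Assumes the common OPML layout: <outline> elements carrying a show's
# title in "text" and its RSS feed URL in "xmlUrl".
import xml.etree.ElementTree as ET

def list_feeds(opml_path):
    tree = ET.parse(opml_path)
    feeds = []
    # Feeds can be nested inside folders, so scan every <outline>.
    for outline in tree.getroot().iter("outline"):
        url = outline.get("xmlUrl")
        if url:  # folder nodes have no xmlUrl; skip them
            feeds.append((outline.get("text", ""), url))
    return feeds

if __name__ == "__main__":
    # "subscriptions.opml" is a placeholder name for the exported file.
    for title, url in list_feeds("subscriptions.opml"):
        print(f"{title}: {url}")
```

Any app that imports OPML should accept the exported file as-is; the sketch above is just a way to peek at what you're moving.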
Google's move to ditch its standalone podcast app doesn't come as a total surprise. Google Podcasts has been around since 2018, but it never quite took off the way similar apps like Overcast and Spotify did. And YouTube is already a popular destination for podcast fans, with a recent study claiming over 23 percent of podcast listeners use YouTube as their primary player. Many of today's trending podcasts are already available on YouTube. For podcasts that are not available on the platform, users can add shows directly to their YouTube Music library via RSS feed. This isn't Google's first rodeo. Back in 2020, the company nixed its standalone music app, Google Play Music, in favor of YouTube Music, and it also offered a comprehensive tool to transfer libraries to the new app.
Google Podcasts will remain live for listening through March 2024, after which users will be able to migrate or export their subscriptions through July 2024.
This article originally appeared on Engadget at https://www.engadget.com/heres-how-to-move-your-subscriptions-off-google-podcasts-before-it-shuts-down-194039938.html
Undoubtedly, 2023 has been the year of generative AI, and Google is marking its end with even more AI developments. The company has announced its most powerful TPU (Tensor Processing Unit) yet, the Cloud TPU v5p, along with an AI Hypercomputer from Google Cloud. "The growth in [generative] AI models — with a tenfold increase in parameters annually over the past five years — brings heightened requirements for training, tuning, and inference," Amin Vahdat, Google's Engineering Fellow and Vice President for the Machine Learning, Systems, and Cloud AI team, said in a release.
The Cloud TPU v5p is an AI accelerator for training and serving models. Google designed Cloud TPUs for models that are large, train for long periods, consist mostly of matrix computations and have no custom operations inside the main training loop, built with frameworks such as TensorFlow or JAX. Each TPU v5p pod brings together 8,960 chips over Google's highest-bandwidth inter-chip interconnect.
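To give a sense of what "mostly matrix computations with no custom operations in the training loop" looks like in practice, here's a minimal JAX sketch (illustrative code under our own assumptions, not anything from Google) of the kind of step that XLA can compile cleanly for an accelerator like a TPU:

```python
# A toy, matrix-only training step of the sort TPUs are built for:
# everything inside the jit-compiled function is dense linear algebra,
# with no custom Python ops in the loop. Shapes are arbitrary.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # A linear model with mean-squared error: pure matrix math.
    return jnp.mean((x @ w - y) ** 2)

@jax.jit  # XLA compiles the whole step for the available accelerator
def train_step(w, x, y, lr=1e-2):
    grads = jax.grad(loss)(w, x, y)
    return w - lr * grads

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (128, 512))  # a batch of inputs
w = jnp.zeros((512, 8))                 # model weights
y = jnp.zeros((128, 8))                 # targets
w = train_step(w, x, y)
```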
The Cloud TPU v5p follows previous iterations like the v5e and v4. According to Google, the TPU v5p delivers twice the FLOPS of the TPU v4 and is four times more scalable in terms of FLOPS per pod. It can also train large language models 2.8 times faster, and embeddings-dense models 1.9 times faster, than the TPU v4.
Then there's the new AI Hypercomputer, an integrated system that brings together open software, performance-optimized hardware, machine learning frameworks and flexible consumption models. The idea is that this amalgamation will improve productivity and efficiency compared to treating each piece separately. The AI Hypercomputer's performance-optimized hardware utilizes Google's Jupiter data center network technology.
In a change of pace, Google is providing developers with open software and "extensive support" for machine learning frameworks such as JAX, PyTorch and TensorFlow. The announcement comes on the heels of Meta and IBM's launch of the AI Alliance, which prioritizes open source development (and which Google is notably not involved in). The AI Hypercomputer also introduces two consumption models, Flex Start Mode and Calendar Mode.
Google shared the news alongside the introduction of Gemini, a new AI model that the company calls its "largest and most capable," and its rollout to Bard and the Pixel 8 Pro. It will come in three sizes: Gemini Pro, Gemini Ultra and Gemini Nano.
This article originally appeared on Engadget at https://www.engadget.com/google-announces-new-ai-processing-chips-and-a-cloud-hypercomputer-150031454.html
Google is bringing Gemini, the new large language model it just introduced, to Android, beginning with the Pixel 8 Pro. The company’s flagship smartphone will run Gemini Nano, a version of the model built specifically to run locally on smaller devices, Google announced in a blog post. The Pixel 8 Pro is powered by the Google Tensor G3 chip designed to speed up AI performance.
This lets the Pixel 8 Pro add smarts to several existing features. The phone’s Recorder app, for instance, has a Summarize feature that currently needs a network connection to give you a summary of recorded conversations, interviews and presentations. But thanks to Gemini Nano, the phone will now be able to provide a summary without needing a connection at all.
Gemini smarts will also power Gboard’s Smart Reply feature. Gboard will suggest high-quality responses to messages and be aware of context in conversations. The feature is currently available as a developer preview and needs to be enabled in settings. However, it only works with WhatsApp currently and will come to more apps next year.
“Gemini Nano running on Pixel 8 Pro offers several advantages by design, helping prevent sensitive data from leaving the phone, as well as offering the ability to use features without a network connection,” wrote Brian Rakowski, Google Pixel’s vice president of product management.
As part of today’s AI push, Google is upgrading Bard, the company’s ChatGPT rival, with Gemini as well, so you should see significant improvements when using the Pixel’s Assistant with Bard experience. Google is also rolling out a handful of AI-powered productivity and customization updates on other Pixel devices, including the Pixel Tablet and the Pixel Watch, although it isn’t immediately clear what they are.
Gemini Nano is the smallest version of Google's large language model, while Gemini Pro is a larger model that will power not just Bard but other Google services like Search, Ads and Chrome, among others. Gemini Ultra, Google's beefiest model, will arrive in 2024 and will be used to further AI development.
Although today’s updates are focused on the Pixel 8 Pro, Google also spoke about AI Core, an Android 14 service that allows developers to access AI features like Nano. Google says AI Core is designed to run on “new ML hardware like the latest Google Tensor TPU and NPUs in flagship Qualcomm Technologies, Samsung S.LSI and MediaTek silicon.” The company adds that “additional devices and silicon partners will be announced in the coming months.”
This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-ai-is-coming-to-android-150025984.html
The universal chat app Beeper just got a lot more, well, universal. The company just unveiled Beeper Mini, an app that makes the bold claim of bringing true iMessage support to Android devices. Even bolder? It seems to actually work, according to users who have tried it. And it isn’t done in a strange, hacky way that could compromise privacy and security, like Nothing’s beleaguered attempt to play nice with iOS devices.
Beeper co-founder Eric Migicovsky, formerly of Pebble fame, told Engadget that his latest project is about scaling up his service. You see, the original Beeper app relied on a farm of Mac mini servers to act as relays, which left a lot of potential users on a waitlist. Then comes Beeper Mini, which taps straight into the official iMessage protocol thanks to some cunning reverse engineering. The texts are even sent to Apple’s servers before moving on to their final destination, just like a real iMessage created by an iPhone. Even weirder? All of this high-tech wizardry was created by a 16-year-old high school student.
Once you open the app, it goes through all of your text message conversations and flags the ones from iMessage users. The system then switches them over to blue bubble conversations via Apple’s official platform. From then on, every time you talk to that person, the bubbles will be bluer than a clear spring day — no more social stigma linked to green bubbles. You also don’t need an Apple ID to log in, alleviating many of the security concerns that plagued rival offerings.
It’s worth reiterating: This platform isn’t hacking the iMessage experience so it works on Android. It is the iMessage experience working on Android, as it's sending actual iMessages. The tech was created by jailbreaking iPhones to get a good look at how the operating system handles iMessages, before recreating the software. As a bonus, Beeper actually encrypts messages end-to-end between iMessage and Android users, supposedly making the communication even safer.
Beeper is being really transparent here, and the company knows it's potentially skating on thin ice with regard to how Apple will respond. Apple has never been especially friendly to those it deems to be infringing on company secrets, but it did just announce forthcoming support for the RCS messaging standard. This will allow for greater interoperability between Android and iOS devices, so maybe it’ll let Beeper Mini slide for now. Should Apple want to put the kibosh on Beeper Mini, it would likely take a lot of work to completely revamp the iMessage protocol, Migicovsky explained to Engadget.
Beeper’s iMessage code will be open source to ensure there will be no security or privacy lapses. As for potential legal hurdles, the co-founder says his company is on the right side of the law, noting there’s no actual Apple code in Beeper Mini, just custom-made recreated code. He also cites legal precedent in copyright law that has sided with those who reverse engineer code. In any event, Beeper Mini is available, for now, as a $1.99-per-month subscription with a one-month free trial.
This article originally appeared on Engadget at https://www.engadget.com/beeper-says-it-reverse-engineered-imessage-into-an-android-app-172250419.html
Apple is reportedly lobbying India to delay the implementation of a rule that requires all smartphones sold in the country to have a USB-C charging port. While Apple has already started shifting away from the Lightning port in the iPhone 15 lineup (and other products), the regulation differs from a similar one enacted in the European Union in that India may press Apple to switch to a USB-C port on older iPhones.
Other manufacturers, including Samsung, have agreed to India's plan to have a universal USB-C charging port on their smartphones by June 2025, which is six months after the EU's deadline (such OEMs have long been using USB-C charging ports anyway). Apple, however, is said to have pressed India to delay the implementation of the rule, or at least to exempt older iPhones from the requirement.
According to Reuters, Apple executives told Indian officials late last month that were the rule to be applied to older iPhones, the company would not be able to meet production targets set out by the country's production-linked incentive (PLI) program. Under this scheme, India grants electronics manufacturers financial incentives to make new investments and generate incremental phone sales each year.
Apple suppliers such as Foxconn are said to have taken advantage of the program to boost iPhone production in India. Estimates suggest that between 12 and 14 percent of iPhones made this year will be manufactured in India. That proportion could rise to as much as 25 percent next year, according to analyst Ming-Chi Kuo.
Apple is said to have told officials that it can't change the design of earlier iPhones to include a USB-C port. The company reportedly argued that, unless it gains an exemption for pre-iPhone 15 models, it will need 18 months beyond the end of next year (i.e. until mid-2026) to comply with the regulation. That's presumably to give Apple enough time to phase out Lightning ports on older iPhones, which Indian consumers tend to prefer since they fall in price when the company releases new models.
This article originally appeared on Engadget at https://www.engadget.com/apple-reportedly-wants-india-to-exempt-older-iphones-from-usb-c-charging-rules-151558675.html
A woman was photographed standing in front of two mirrors with an iPhone camera, but the resulting photo shows three completely different arm positions. The arms are in different locations in mirror number one, mirror number two and in real life. Is it Photoshop? Is it a glitch in the Matrix? Did the woman take a 25-year trip inside of Twin Peaks’ Black Lodge? No, it’s just a computational photography error, but it still makes for one heck of an image.
It all comes down to how modern smartphone cameras process photos. When you tap that camera button, billions of computational operations occur in an instant, resulting in a photo you can post online in hopes of getting a few thumbs up. In this case, Apple’s software didn’t realize there was a mirror in the shot, so it treated the three versions of the subject as three different people. She was moving at the instant the photo was taken, so the algorithm stitched the photo together from multiple frames. The end result? Well, you can see it above.
Smartphone camera software routinely pulls from many images at once, combining them at will and adjusting for contrast, saturation, detail and blur. In the vast majority of cases, this doesn’t present an issue. Once in a while, however, the software gets a tad confused. Had there actually been three different people, instead of one person and a mirror, each subject would have been represented properly.
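To make that failure mode concrete, here's a toy Python sketch of tile-by-tile frame selection. It's a deliberately simplified stand-in for how burst pipelines work in general, not Apple's actual algorithm, and every name in it is illustrative:

```python
# Toy multi-frame merge: split each frame into tiles and, for every
# tile, keep the version with the highest sharpness score. Real
# pipelines are far more sophisticated, but the core hazard is the
# same: one output photo can mix several moments in time, so a moving
# subject seen directly and in two mirrors can get three poses.
import numpy as np

def sharpness(tile):
    # Crude sharpness proxy: mean squared gradient magnitude.
    gy, gx = np.gradient(tile.astype(float))
    return (gx ** 2 + gy ** 2).mean()

def merge_frames(frames, tile=64):
    out = frames[0].copy()
    h, w = out.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            region = (slice(y, y + tile), slice(x, x + tile))
            best = max(frames, key=lambda f: sharpness(f[region]))
            out[region] = best[region]  # tiles may come from different frames
    return out

# Three grayscale "burst" frames captured milliseconds apart:
burst = [np.random.randint(0, 256, (256, 256), np.uint8) for _ in range(3)]
photo = merge_frames(burst)
```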
This is something that can actually be recreated by just about anyone with an iPhone and some mirrors. As a matter of fact, there’s a TikTok trend in which folks do just that, making all kinds of silly photos and videos by leveraging the algorithm's difficulties when separating mirror images from actual people.
This article originally appeared on Engadget at https://www.engadget.com/what-did-an-iphone-camera-do-to-this-poor-womans-arms-201507227.html
Apple pushed updates to iOS, iPadOS and macOS software today to patch two zero-day security vulnerabilities, which the company suggested had been actively exploited in the wild. “Apple is aware of a report that this issue may have been exploited against versions of iOS before iOS 16.7.1,” the company wrote about both flaws in its security reports. Software updates plugging the holes are now available for the iPhone, iPad and Mac.
Researcher Clément Lecigne of Google’s Threat Analysis Group (TAG) is credited with discovering and reporting both exploits. As Bleeping Computer notes, the team at Google TAG often finds and exposes zero-day bugs used against high-risk individuals, like politicians, journalists and dissidents. Apple didn’t reveal specifics about the nature of any attacks using the flaws.
The two security flaws affected WebKit, Apple’s open-source browser framework powering Safari. In Apple’s description of the first bug, it said, “Processing web content may disclose sensitive information.” In the second, it wrote, “Processing web content may lead to arbitrary code execution.”
The security patches cover the “iPhone XS and later, iPad Pro 12.9-inch 2nd generation and later, iPad Pro 10.5-inch, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 6th generation and later, and iPad mini 5th generation and later.”
The odds your devices were affected by either of these are extremely minimal, so there’s no need to panic — but, to be safe, it would be wise to update your Apple gear now. You can update your iPhone or iPad immediately by heading to Settings > General > Software Update and tapping the prompt to initiate it. On Mac, go to System Settings > General > Software Update and do the same. Apple’s fixes arrived today in iOS 17.1.2, iPadOS 17.1.2 and macOS Sonoma 14.1.2.
This article originally appeared on Engadget at https://www.engadget.com/apple-patches-two-security-vulnerabilities-on-iphone-ipad-and-mac-215854473.html
One of the major concessions Microsoft made to regulators to get its blockbuster acquisition of Activision Blizzard over the line was agreeing to let users of third-party cloud services stream Xbox-owned games. Starting today, you can play three Call of Duty games via NVIDIA GeForce Now: Modern Warfare 3, Modern Warfare 2 and Warzone.
They're the first Activision games to land on GeForce Now since Microsoft closed the $68.7 billion Activision deal in October. Activision Blizzard games were previously available on GeForce Now but only briefly, as the publisher pulled them days after the streaming service went live for all users in early 2020.
Microsoft began making its first-party games available on GeForce Now this year, starting with Gears 5 in May. More recently, Microsoft started allowing GeForce Now users to stream PC Game Pass titles and Microsoft Store purchases.
Call of Duty titles are major additions, though, especially since that means Warzone fans can play the battle royale on their phone or tablet wherever they are without having to pay anything extra (free GeForce Now users are limited to one hour of gameplay per session). If you've bought MW2 or MW3 on Steam, you can play those through GeForce Now as well. NVIDIA notes that older CoD titles will be available through GeForce Now later.
Another key concession Microsoft made to appease UK regulators was to sell the cloud gaming rights for Activision Blizzard titles to Ubisoft. However, as evidenced here, Microsoft will still honor the agreements it made directly with various cloud gaming services.
This article originally appeared on Engadget at https://www.engadget.com/call-of-duty-games-start-landing-on-nvidia-geforce-now-195040692.html
When I first got to see the Expressive E Osmose way back in 2019, I knew it was special. In my 15-plus years covering technology, it was one of the only devices I’ve experienced that actually had the potential to be truly “game changing.” And I’m not being hyperbolic.
But, that was four years ago, almost to the day. A lot has changed in that time. MPE (MIDI Polyphonic Expression) has gone from futuristic curiosity to being embraced by big names like Ableton and Arturia. New players have entered and exited the scene. More importantly, the Osmose is no longer a promising prototype, but an actual commercial product. The questions, then, are obvious: Does the Osmose live up to its potential? And, does it seem as revolutionary today as it did all those years ago? The answers, however, are less clear.
What sets the Osmose ($1,799) apart from every other MIDI controller and synthesizer (MPE or otherwise) is its keybed. At first glance, it looks like almost any other keyboard, albeit a really nice one. The body is mostly plastic, but it feels solid and the top plate is made of metal. (Shoutout to Expressive E, by the way, for building the Osmose out of 66 percent recycled materials and for making the whole thing user repairable — no glue or specialty screws to be found.)
The keys themselves have this lovely, almost matte finish and a healthy amount of heft. It’s a nice change of pace from the shiny, springy keys on even some higher-end MIDI controllers. But the moment you press down on a key you’ll see what sets it apart — the keys move side to side. And this is not because it’s cheaply assembled and there’s a ton of wiggle. This is a purposeful design. You can bend notes (or control other parameters) by actually bending the keys, much like you would on a stringed instrument.
This is huge for someone like me who is primarily a guitar player. Bending strings and wiggling my fingers back and forth to add vibrato comes naturally. And, as I mentioned in my review of Roli’s Seaboard Rise 2, I find myself doing this even on keyboards where I know it will have no effect. It’s a reflex.
It’s a very simple thing to explain, but its effect on your playing is difficult to encapsulate. It’s all of the same things that make playing the Seaboard special: the slight pitch instability from the unintentional micro movements of your fingers, the ability to bend individual notes for shifting harmonies and the polyphonic aftertouch that allows you to alter things like filter cutoff on a per-note basis.
These tiny changes in tuning and expression add an almost ineffable fluidity to your playing. In particular, for sounds based on acoustic instruments like flutes and strings, it adds an organic element missing from almost every other synthesizer. There is a bit of a learning curve, but I got the hang of it after just a few days.
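For the technically curious: MPE pulls this off by giving each held note its own MIDI channel, so pitch bend and pressure messages apply to one note instead of the whole keyboard. Here's a minimal sketch using Python's mido library; the port name is a placeholder and the values are arbitrary:

```python
# MPE in miniature: each note gets its own channel, so a pitchwheel
# message bends only that note -- the trick behind per-key bends and
# per-note aftertouch on boards like the Osmose.
import mido

out = mido.open_output("Your MPE Synth")  # substitute a real port name

# Two notes on two member channels. (In an MPE lower zone, channel 1
# is the master and channels 2-16 carry notes; mido numbers channels
# from 0, so those member channels are 1-15 here.)
out.send(mido.Message("note_on", channel=1, note=60, velocity=100))
out.send(mido.Message("note_on", channel=2, note=64, velocity=100))

# Bend only the first note upward; the second note is untouched.
out.send(mido.Message("pitchwheel", channel=1, pitch=2048))

# Press harder into the second note: channel pressure on its channel
# acts as per-note aftertouch.
out.send(mido.Message("aftertouch", channel=2, value=90))

out.send(mido.Message("note_off", channel=1, note=60))
out.send(mido.Message("note_off", channel=2, note=64))
```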
What separates it from the Roli, though, is its form factor. While the Seaboard is keyboard-esque, it’s still a giant squishy slab of silicone. It might not appeal to someone who grew up taking piano lessons every week. The Osmose, on the other hand, is a traditional keyboard, with full-sized keys and a very satisfying action. It’s probably the most familiar and approachable implementation of MPE out there.
If you are a pianist, or an accomplished keyboard player, this is probably the MPE controller you’ve been waiting for. And it’s hands-down one of the best on the market.
Where things get a little dicier is when looking at the Osmose as a standalone synthesizer. But let’s start where it goes right: the interface. The screen to the left of the keyboard is decently sized (around 4 inches) and easy to read at any angle. There are even some cute graphics for parameters such as timbre (a log), release (a yo-yo) and drive (a steering wheel).
There aren’t a ton of hands-on controls, but menu diving is kept to a minimum with some smart organization. The four buttons across the top of the screen take you to different sections for presets, synth (parameters and macros), sensitivity (MPE and aftertouch controls) and playing (mostly just for the arpeggiator at the moment). Then to the left of the screen there are two encoders for navigating the submenus, and the four knobs below control whatever option is listed above them on the screen. So, no, you’re not going to be doing a lot of live tweaking, but you also won’t spend 30 minutes trying to dial in a patch.
Part of the reason you won’t spend 30 minutes dialing in a patch is because there really isn’t much to dial in. The engine driving the Osmose is Haken Audio’s EaganMatrix, and Expressive E keeps most of it hidden behind six macro controls. In fact, you can’t really design a patch from scratch — at least not on the synth directly. You need to download the Haken Editor, which requires Max (not the streaming service), to do serious sound design. Then you need to upload your new patch to the Osmose over USB. Other than that, you’re stuck tweaking presets.
This isn’t necessarily a bad thing because, frankly, EaganMatrix feels less like a musical instrument and more like a PhD thesis. It is undeniably powerful, but it’s also confusing as hell. Expressive E even describes it as “a laboratory of synthesis,” and that seems about right; patching in the EaganMatrix is like doing science. Except it’s not the fun science you see on TV, with fancy machines and test tubes. Instead, it’s more like the daily grind of real-life science, where you stare at a nearly inscrutable series of numbers, letters, mathematical constants and formulas.
I couldn’t get the Osmose and Haken Editor to talk to each other on my studio laptop (a five-year-old Dell XPS), though I did manage to get them working on my work-issued MacBook. That being said, it was mostly a pointless endeavor. I simply can’t wrap my head around the EaganMatrix. I was able to build a very basic patch with the help of a tutorial, but I couldn’t actually make anything usable.
There are some presets available on Patchstorage, but the community is nowhere near as robust as what you’d find for the Organelle or ZOIA. And it’s not obvious how to actually get that handful of presets onto the Osmose. You can drag and drop the .mid files you download onto the empty slots across the top of the Haken Editor, which adds them to the Osmose's user presets. But you won’t actually see that reflected on the Osmose itself until you turn it off and back on again.
Honestly, many of the presets available on Patchstorage cover the same ground as the 500 or so factory ones that ship with the Osmose. And it’s while browsing those hundreds of presets that both the power and the limitations of the EaganMatrix become obvious. It’s capable of covering everything from virtual analog to FM to physical modeling, and even some pseudo-granular effects. Its modular, matrix-based patching system is so robust that it would almost certainly be impossible to recreate physically (at least without spending thousands of dollars).
Now, this is largely a matter of taste, but I often find the sounds that come out of this obviously overpowered synth underwhelming. They’re definitely unique and, in some cases, probably only possible with the EaganMatrix. But the virtual analog patches aren’t very “analog,” the FM ones lack the character of a DX7 or the modern sheen of a Digitone, and the bass patches could use some extra oomph. Sometimes patches on the Osmose feel like tech demos rather than something you’d actually use musically.
That’s not to say there are no good presets. There are some solid analog-ish sounds and there are a few decent FM pads. But it’s the physical modeling patches where EaganMatrix is at its best. They definitely land in a kind of uncanny valley, though — not convincing enough to be mistaken for the real thing, but close enough that it doesn’t seem quite right coming out of a synthesizer.
Still, the way tuned drums and plucked or bowed strings are handled by the Osmose is impressive. Quickly tapping a key can get you a ringing resonant sound, while holding it down mutes it. Aftertouch can be used to trigger repeated plucks that increase in intensity as you press harder. And bowed patches can be smart enough to play notes within a certain range of each other as legato, while still allowing you to play more spaced-out chords with your other hand. (This latter feature is called Pressure Glide and can be fine-tuned to suit your needs.)
The level of precision with which you can gently coax sound out of some presets with the lightest touch is unmatched by any synth or MIDI controller I’ve ever tested. And that becomes all the more shocking when you realize that very same patch can also be a percussive blast if you strike the keys hard.
But, at the end of the day, I rarely find myself reaching for the Osmose — at least not as a synthesizer. I’ve been testing one for a few months now, and while I have used it quite extensively in my studio, it’s been mostly as a controller for MPE-enabled soft synths like Arturia’s Pigments and Ableton’s Drift. It’s undeniably one of the most powerful MIDI controllers on the market. My one major complaint on that front is that its incredible arpeggiator isn’t available in controller mode.
The Osmose is a gorgeous instrument that, in the right hands, is capable of delivering nuanced performances unlike anything else. Even if, at times, the borrowed sound engine doesn’t live up to the keyboard’s lofty potential.
This article originally appeared on Engadget at https://www.engadget.com/expressive-e-osmose-review-a-game-changing-mpe-keyboard-but-a-frustrating-synthesizer-170001300.html