Shure’s MV7+ USB/XLR mic has a customizable LED panel and built-in audio tools

Shure's MV7 microphone has been a solid option for podcasters and streamers since its introduction in 2020, when it became the first mic to offer both USB and XLR connectivity. That hybrid setup lets it connect easily to a computer or to more robust recording rigs as needed. It's also $150 cheaper than Shure's workhorse SM7B, the mic you've likely seen in professional podcast videos. Now the company is back with a new version, dubbed the MV7+, sporting a "sleeker design" and a host of software features aimed at improving audio before you fire up any editing workflows. 

The most noticeable change is the new multi-color LED touch panel. Shure says it's fully customizable, with over 16.8 million colors to serve as a visual indicator of your sound levels, and you can also opt for "an ambient pulse effect." What's more, a tap on the LED panel mutes the MV7+ when you need to cough, sneeze or clear your throat. 

In what Shure calls a "Real-time Denoiser," the MV7+ employs digital signal processing (DSP) to eliminate background distractions. The company says this works alongside the mic's voice isolation to produce excellent sound in noisy scenarios. The MV7+ also features a Digital Popper Stopper to combat the dreaded plosives, taming them digitally so that no unsightly filter makes an appearance on your livestream. 
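
Shure hasn't said how the Real-time Denoiser works under the hood, but most DSP noise suppressors follow the same broad recipe: estimate a noise floor for each frequency band, then attenuate any band that doesn't rise above it. As a purely hypothetical sketch of that general idea (not Shure's implementation), here's a minimal spectral noise gate in Python:

```python
# Hypothetical spectral noise gate -- a generic illustration of DSP-based
# noise suppression, not Shure's Real-time Denoiser.
import numpy as np

def stft(x, n_fft=1024, hop=256):
    """Windowed short-time Fourier transform, one row per frame."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def spectral_gate(audio, noise_clip, n_fft=1024, hop=256, threshold_db=10.0):
    """Zero out time-frequency bins that don't clear the noise floor."""
    # Per-frequency noise floor, estimated from a background-only clip.
    noise_floor = np.abs(stft(noise_clip, n_fft, hop)).mean(axis=0)
    gate = noise_floor * 10 ** (threshold_db / 20)

    spec = stft(audio, n_fft, hop)
    spec[np.abs(spec) < gate] = 0.0  # suppress bins below the threshold

    # Overlap-add resynthesis back to a waveform.
    win = np.hanning(n_fft)
    out = np.zeros(len(audio))
    for k, frame in enumerate(np.fft.irfft(spec, n=n_fft, axis=1)):
        out[k * hop:k * hop + n_fft] += frame * win
    return out
```

A production denoiser would smooth that binary mask over time to avoid "musical noise" artifacts and process one hop-sized block at a time to stay real-time; the sketch only shows the core gating step.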

Shure says it improved the Auto Level Mode on the MV7+, a feature that makes gain adjustments based on distance, volume and room characteristics to automatically balance the sound profile. There's also onboard reverb, offering three settings (Plate, Hall and Studio) before you start tweaking things in your go-to DAW. And just like the MV7, the MV7+ still has hybrid XLR and USB outputs to connect to mobile devices and laptops as well as more capable audio mixers. Where the previous model had a micro-USB port that worked with both USB-A and USB-C cables, the new model is all USB-C. 
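
Shure hasn't detailed how the improved Auto Level Mode weighs distance, volume and room characteristics, but automatic gain control in general boils down to a feedback loop: measure the incoming level, compare it to a target and nudge the gain toward the difference. Here's a simplified, hypothetical sketch of that loop; the target level and attack/release rates are made-up values, not Shure's:

```python
# Hypothetical automatic gain control (AGC) loop -- a generic illustration,
# not Shure's Auto Level Mode.
import numpy as np

TARGET_DBFS = -18.0  # desired average loudness of the output
ATTACK = 0.30        # fraction of the error corrected per block when too loud
RELEASE = 0.05       # slower correction when the signal is too quiet

def auto_level(blocks):
    """Yield gain-corrected audio blocks, steering RMS level toward the target."""
    gain_db = 0.0
    for block in blocks:
        rms = np.sqrt(np.mean(block ** 2)) + 1e-12   # avoid log(0) on silence
        out_level_db = 20 * np.log10(rms) + gain_db  # level after current gain
        error_db = TARGET_DBFS - out_level_db
        rate = ATTACK if error_db < 0 else RELEASE   # duck fast, recover slowly
        gain_db += rate * error_db
        yield block * 10 ** (gain_db / 20)
```

The asymmetric attack and release rates mirror how real AGCs behave: they pull the gain down quickly on loud bursts but raise it gently, so quiet pauses aren't pumped up abruptly.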

With the MV7+, Shure is also announcing the MOTIV Mix app. In addition to tweaking the colors of that LED panel, the software provides a five-track mixer alongside the ability to adjust settings like sound signature, gain and more. There's also a Soundcheck tool to help dial in the optimal gain setting, and a monitor mix slider offers individual adjustments for mic output and system audio playback. The company says the new Mix app will eventually support older mics like the MV7 and MVX2U, but for now it's only available in beta for use with the MV7+. 

The MV7+ is available now in black and there's a white version on the way "in the upcoming weeks." Both are $279, $30 more than the MV7 was at launch. Shure is also selling a "podcast kit" that bundles the MV7+ with a basic Gator desktop mic stand for $299. If you'd prefer the more versatile boom stand, that package is $339. A three-meter USB-C to USB-C cable is included in the box whether you purchase the standalone microphone or either of the kits. 

This article originally appeared on Engadget at https://www.engadget.com/shures-mv7-usbxlr-mic-has-a-customizable-led-panel-and-built-in-audio-tools-142940237.html?src=rss

Apple’s M2 MacBook Air drops to $849 at Amazon

If you're anything like me, you put off buying a new MacBook until it's absolutely necessary or a big sale comes along to soften the blow. Right now, the latter is making a purchase more tempting: Apple's 2022 MacBook Air with the M2 chip is on sale for $849, down from $999. The 15 percent discount brings the 13.6-inch 256GB model down to a record-low price. 

We gave the 2022 MacBook Air a 96 in our review when Apple first released it, dubbing the device "near-perfect." The MacBook is thinner than its predecessor, but the screen is a noticeable one-third of an inch larger, with a Liquid Retina display and up to 500 nits of brightness. Other perks include a 1080p FaceTime camera and a 60Hz refresh rate. We were also big fans of the 2022 MacBook Air's high-quality quad-speaker array and solid three-mic system. It also has a 3.5mm headphone jack, two USB-C Thunderbolt ports and a MagSafe connector. 

Our pick for 2024's best budget MacBook is currently $200 cheaper than the latest M3-powered MacBook Air but offers similar specs. Both laptops have an 8-core CPU, up to a 10-core GPU and up to 18 hours of battery life. Plus, both support up to 24GB of unified memory and up to 2TB of storage. 

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/apples-m2-macbook-air-drops-to-849-at-amazon-134240095.html?src=rss

Google announces Axion, its first Arm-based CPU for data centers

Google Cloud Next 2024 has begun, and the company is kicking off the event with some big announcements, including its new Axion processor. It's Google's first Arm-based CPU created specifically for data centers, and it was designed using Arm's Neoverse V2 CPU cores.

According to Google, Axion performs 30 percent better than the fastest general-purpose Arm-based tools available in the cloud and 50 percent better than the most recent, comparable x86-based VMs. The company also claims it's 60 percent more energy efficient than those same x86-based VMs. Google is already using Axion in services like Bigtable and Google Earth Engine, and it plans to expand to more in the future.

The release of Axion could bring Google into competition with Amazon, which has led the field of Arm-based CPUs for data centers. The company's cloud business, Amazon Web Services (AWS), released the Graviton processor back in 2018 and followed with second and third iterations over the following two years. Fellow chip developer NVIDIA released its first Arm-based CPU for data centers, named Grace, in 2021, and companies like Ampere have also been making gains in the area.

Google has been developing its own processors for several years now, but they've primarily been focused on consumer products. The original Arm-based Tensor chip first shipped in the Pixel 6 and 6 Pro smartphones, which were released in late 2021, and subsequent Pixel phones have all been powered by updated versions of the Tensor. Prior to that, Google developed the "Tensor Processing Unit" (TPU) for its data centers. The company started using TPUs internally in 2015, announced them publicly in 2016 and made them available to third parties in 2018. 

Arm-based processors are often a lower-cost and more energy-efficient option. Google's announcement came right after Arm CEO Rene Haas issued a warning about the energy usage of AI models, according to the Wall Street Journal. He called models such as ChatGPT "insatiable" in their need for electricity. "The more information they gather, the smarter they are, but the more information they gather to get smarter, the more power it takes," Haas said. "By the end of the decade, AI data centers could consume as much as 20 percent to 25 percent of US power requirements. Today that's probably four percent or less. That's hardly very sustainable, to be honest with you." He stressed the need for greater efficiency to maintain the pace of breakthroughs.

This article originally appeared on Engadget at https://www.engadget.com/google-announces-its-first-arm-based-cpu-for-data-centers-120508058.html?src=rss

Google Gemini chatbots are coming to a customer service interaction near you

More and more companies are choosing to deploy AI-powered chatbots to deal with basic customer service inquiries. At the ongoing Google Cloud Next conference in Las Vegas, the company has revealed the Gemini-powered chatbots its partners are working on, some of which you could end up interacting with. Best Buy, for instance, is using Google's technology to build virtual assistants that can help you troubleshoot product issues and reschedule order deliveries. IHG Hotels & Resorts is working on another that can help you plan a vacation in its mobile app, while Mercedes-Benz is using Gemini to improve its own smart sales assistant. 

Security company ADT is also building an agent that can help you set up your home security system. And if you happen to be a radiologist, you may end up interacting with Bayer's Gemini-powered apps for diagnosis assistance. Meanwhile, other partners are using Gemini to create experiences that aren't quite customer-facing: Cintas, Discover and Verizon are using generative AI capabilities in different ways to help their customer service personnel find information more quickly and easily. 

Google has launched the Vertex AI Agent Builder, as well, which it says will help developers "easily build and deploy enterprise-ready gen AI experiences" along the lines of OpenAI's GPTs and Microsoft's Copilot Studio. The Builder provides developers with a set of tools for their projects, including a no-code console that understands natural language and can build Gemini-based AI agents in minutes. Vertex AI has more advanced tools for more complex projects, of course, but the common goal is to simplify the creation and maintenance of personalized AI chatbots and experiences. 
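
Google's announcement doesn't include code, but for a rough sense of what a Gemini-backed agent looks like at the API level, here's a minimal chat sketch using the Vertex AI Python SDK. The project ID, region, model name and prompt are placeholder assumptions, and a real Agent Builder deployment would layer grounding, tools and company data on top:

```python
# Minimal Gemini chat via the Vertex AI Python SDK
# (pip install google-cloud-aiplatform). The project ID, region, model
# name and prompt below are placeholders, not values from Google's post.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.0-pro")
chat = model.start_chat()  # keeps multi-turn history for the session

# A toy customer-service turn; production agents would add grounding in
# company data, tool calls and guardrails on top of the base model.
reply = chat.send_message(
    "You are a retail support agent. A customer says their delivery "
    "never arrived. Walk them through their options."
)
print(reply.text)
```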

At the same event, Google also announced its new AI-powered video generator for Workspace, as well as its first Arm-based CPU specifically made for data centers. By launching the latter, it's taking on Amazon, which has been using its Graviton processor to power its cloud network over the past few years. 

This article originally appeared on Engadget at https://www.engadget.com/google-gemini-chatbots-are-coming-to-a-customer-service-interaction-near-you-120035393.html?src=rss

Google’s new AI video generator is more HR than Hollywood

For most of us, creating documents, spreadsheets and slide decks is an inescapable part of work life in 2024. Creating videos is not, and that's something Google would like to change. On Tuesday, the company announced Google Vids, a video creation app for work that it says can make everyone a "great storyteller" using the power of AI.

Vids uses Gemini, Google’s latest AI model, to quickly create videos for the workplace. Type in a prompt, feed in some documents, pictures, and videos, and sit back and relax as Vids generates an entire storyboard, script, music and voiceover. "As a storytelling medium, video has become ubiquitous for its immediacy and ability to ‘cut through the noise,’ but it can be daunting to know where to start," said Aparna Pappu, a Google vice president, in a blog post announcing the app. "Vids is your video, writing, production and editing assistant, all in one."

In a promotional video, Google uses Vids to create a video recapping moments from its Cloud Next conference in Las Vegas, the annual event during which it showed off the app. Given a simple prompt asking for a recap video and a document full of information about the event, Vids generates a narrative outline that can be edited. It then lets the user pick a template for the video (research proposal, new employee intro, team milestone, quarterly business update and many more) and crunches for a few moments before spitting out a first draft, complete with a storyboard, stock media, music, transitions and animation. It even generates a script and a voiceover, although you can also record your own. And you can manually choose photos from Google Drive or Google Photos to drop them seamlessly into the video.

It all looks pretty slick, but it's important to remember what Vids is not: a replacement for AI-powered video generation tools like OpenAI's upcoming Sora or Runway's Gen-2, which create videos from scratch from text prompts. Instead, Google Vids uses AI to understand your prompt, generate a script and a voiceover, and stitch together stock images, videos, music, transitions and animations to create what is, effectively, a souped-up slide deck. And because Vids is part of Google Workspace, you can collaborate in real time just as you would in Google Docs, Sheets and Slides.

Who asked for this? My guess is HR departments and chiefs of staff, who frequently need to create onboarding videos for new employees, announce company milestones or put together training materials for teams. But if and when Google makes Vids available beyond Workspace, which is typically used by businesses, I can also see people using it outside of work, easily assembling a video for a birthday party or a vacation from their own photos and clips.

Vids will be available in June and is first coming to Workspace Labs, which means you’ll need to opt in to test it. It’s not clear yet when it will be available more broadly.

This article originally appeared on Engadget at https://www.engadget.com/googles-new-ai-video-generator-is-more-hr-than-hollywood-120034992.html?src=rss