The headphone industry isn’t known for its rapid evolution. There are developments like spatial sound and steady advances in Bluetooth audio fidelity, but for the most part, the industry counts advances in decades rather than years. That makes the arrival of the Aurvana Ace headphones — the first wireless buds with MEMS drivers — quite the rare event. I recently wrote about what exactly MEMS technology is and why it matters, but Creative is the first consumer brand to sell a product that uses it.
Creative unveiled two models in tandem: the Aurvana Ace ($130) and the Aurvana Ace 2 ($150). Both feature MEMS drivers; the main difference is that the Ace supports high-resolution aptX Adaptive while the Ace 2 has top-of-the-line aptX Lossless (sometimes marketed as “CD quality”). The Ace 2 is the model we’ll be referring to from here on.
In fairness to Creative, the inclusion of MEMS drivers alone would be a unique selling point, but the aforementioned aptX support adds another layer of hi-fi credentials to the mix. Then there’s adaptive ANC and other details like wireless charging that give the Ace 2 a strong spec sheet for the price. The obvious omissions are small quality-of-life features: pausing playback when you remove a bud, and audio personalization. Those would have been two easy wins that could make both models fairly hard to beat for the price, in terms of features if nothing else.
Photo by James Trew / Engadget
When I tested the first-ever xMEMS-powered in-ear monitors, the Singularity Oni, the extra detail in the high end was instantly obvious, especially in genres like metal and drum & bass. The lower frequencies were more of a challenge; xMEMS, the company behind the drivers in both the Oni and the Aurvana, conceded that a hybrid setup with a conventional bass driver might be the preferred option until its own speakers can handle more bass. That’s exactly what we have here in the Aurvana Ace 2.
The key difference between the Aurvana Ace 2 and the Oni, though, is more important than a good low-end thump (if that’s even possible). MEMS-based headphones need a small amount of “bias” power to work. That doesn’t impact battery life, but Singularity relied on a dedicated DAC with a specific xMEMS “mode,” whereas Creative uses a dedicated amp “chip,” demonstrating for the first time consumer MEMS headphones in a wireless configuration. The popularity of true wireless (TWS) headphones these days means that if MEMS is to catch on, it has to work in this form factor.
The good news is that even without the expensive iFi DAC that the Singularity Oni IEMs required, the Aurvana Ace 2 earbuds bring more clarity in the higher frequencies than rival products at this price. Which is to say, even with improved bass, the MEMS drivers clearly favor the mid to high frequencies. The result is a sound that strikes a good balance between detail and body.
Listening to “Master of Puppets,” the iconic chords had better presence and “crunch” than on a $250 pair of on-ear headphones I tried. Likewise, the aggressive snares in System of a Down’s “Chop Suey!” pop right through just as you’d hope. When I listened to the same song on the $200 Grell Audio TWS/1 with personalized audio activated, the sound was actually comparable; the difference is that Creative’s sounded like that out of the box, though the Grell buds have slightly better dynamic range overall and more emphasis on the vocals.
For more electronic genres, the Aurvana Ace’s hybrid setup really comes into play. Listening to Dead Prez’s “Hip-Hop” really shows off the bass capabilities, with more oomph here than both the Grell and a pair of $160 House of Marley Redemption 2 ANC buds, but it never felt overdone, fuzzy or loose.
Photo by James Trew / Engadget
Despite the Ace 2 besting other headphones in specific like-for-like comparisons, the nuances and differences between these headphones as a whole are harder to quantify. The only set I tested that sounded consistently better, to me, was the Denon PerL Pro (formerly known as the NuraTrue Pro), but at $349 those are also the most expensive.
It would be remiss of me not to point out that there were also many songs and tests where differences between the various sets of earbuds were much harder to discern. With two iPhones, one Spotify account and a lot of swapping between headphones during the same song, it’s possible to tease out small preferences between different sets, but the form factor, consumer preference and price point dictate that, to some extent, they all broadly overlap sonically.
The promise of MEMS drivers isn’t just about fidelity, though. The claim is that the lack of moving parts and the semiconductor-like fabrication process ensure a higher level of consistency, with less need for calibration and tuning. The end result is a more reliable production process, which should mean lower costs. In turn, that could translate into better value for money, or at least a potentially more durable product, if companies choose to pass the saving on, of course.
For now, we’ll have to wait and see whether other companies explore MEMS drivers in their own products or whether the technology remains a specialist option for enthusiasts, alongside the likes of planar magnetic drivers and electrostatic headphones. One thing’s for sure: Creative’s Aurvana Ace series offers a great audio experience, alongside premium features like wireless charging and aptX Lossless, for a reasonable price. What’s not to like about that?
This article originally appeared on Engadget at https://www.engadget.com/the-first-affordable-headphones-with-mems-drivers-review-161536317.html?src=rss
OpenAI's spot atop the generative AI heap may be coming to an end as Google officially introduced its most capable large language model to date on Wednesday, dubbed Gemini 1.0. It's the first of “a new generation of AI models, inspired by the way people understand and interact with the world,” CEO Sundar Pichai wrote in a Google blog post.
“Ever since programming AI for computer games as a teenager, and throughout my years as a neuroscience researcher trying to understand the workings of the brain, I’ve always believed that if we could build smarter machines, we could harness them to benefit humanity in incredible ways,” Google DeepMind CEO Demis Hassabis wrote in the same post.
The result of extensive collaboration between Google’s DeepMind and Research divisions, Gemini has all the bells and whistles cutting-edge genAIs have to offer. "Its capabilities are state-of-the-art in nearly every domain," Pichai declared.
The system has been developed from the ground up as an integrated multimodal AI. Many foundational models can essentially be thought of as groups of smaller models stacked in a trench coat, with each individual model trained to perform its specific function as part of the larger whole. That’s all well and good for shallow functions like describing images, but not so much for complex reasoning tasks.
Google, conversely, pre-trained and fine-tuned Gemini, “from the start on different modalities” allowing it to “seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models,” Pichai said. Being able to take in all these forms of data at once should help Gemini provide better responses on more challenging subjects, like physics.
Gemini can code as well. It’s reportedly proficient in popular programming languages including Python, Java, C++ and Go. Google has even leveraged a specialized version of Gemini to create AlphaCode 2, a successor to last year's competition-winning generative AI. According to the company, AlphaCode 2 solved twice as many challenge questions as its predecessor did, which would put its performance above an estimated 85 percent of the previous competition’s participants.
While Google did not immediately share the number of parameters that Gemini can utilize, the company did tout the model’s operational flexibility and ability to work in form factors from large data centers to local mobile devices. To accomplish this transformational feat, Gemini is being made available in three sizes: Nano, Pro and Ultra.
Nano, unsurprisingly, is the smallest of the trio and designed primarily for on-device tasks. Pro is the next step up, a more versatile offering than Nano, and will soon be getting integrated into many of Google’s existing products, including Bard.
Starting Wednesday, Bard will begin using a specially tuned version of Pro that Google promises will offer “more advanced reasoning, planning, understanding and more.” The improved Bard chatbot will be available in the same 170 countries and territories that regular Bard currently is, and the company reportedly plans to expand the new version's availability as we move through 2024. Next year, with the arrival of Gemini Ultra, Google will also introduce Bard Advanced, an even beefier AI with added features.
Pro’s capabilities will also be accessible via API calls through Google AI Studio or Google Cloud Vertex AI. Search (specifically SGE), Ads, Chrome and Duet AI will also see Gemini functionality integrated into their features in the coming months.
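For a rough idea of what that developer access looks like, here’s a minimal sketch of prompting Gemini Pro through the Google AI Studio API. It assumes the google-generativeai Python package and a placeholder API key; the exact SDK surface may differ from what Google ultimately ships.

```python
# Minimal sketch: prompting Gemini Pro via the Google AI Studio API.
# Assumes the google-generativeai package (pip install google-generativeai)
# and an API key generated in Google AI Studio; the key below is a placeholder.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "Explain, in two sentences, why multimodal training helps with physics questions."
)
print(response.text)
```

Vertex AI offers the same model behind Google Cloud's authentication and tooling; the AI Studio route shown above is simply the lower-friction way to experiment.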
Gemini Ultra won’t be available until at least 2024, as it reportedly requires additional red-team testing before being cleared for release to “select customers, developers, partners and safety and responsibility experts” for testing and feedback. But when it does arrive, Ultra promises to be an incredibly powerful tool for further AI development.
This article originally appeared on Engadget at https://www.engadget.com/googles-answer-to-gpt-4-is-gemini-the-most-capable-model-weve-ever-built-150039571.html?src=rss
Meta will soon remove a feature that lets you chat with Facebook friends on Instagram. Starting mid-December, the company will disconnect the cross-platform integration, which it added in 2020. It didn’t provide a reason for doing so, but, as 9to5Google speculates, avoiding regulatory consequences in the EU sounds like a logical motive.
Announced in 2019, the optional cross-platform integration went live a year later, blurring the lines between two of the company’s most popular services. “Just like today you could talk to a Gmail account if you have a Yahoo account, these accounts will be able to talk to each other through the shared protocol that is Messenger,” Messenger VP Loredana Crisan said at the time.
Meta says once “mid-December 2023” rolls around, you’ll no longer be able to start new chats or calls with Facebook friends from Instagram. If you have any existing conversations with Facebook accounts on Instagram, they’ll become read-only. In addition, Facebook accounts will no longer be able to see your activity status or view read receipts. Finally, any existing chats with Facebook accounts won’t move to your inbox on either platform.
The EU designed its landmark Digital Markets Act, passed in 2022, to deter platform holders from gaining monopoly power (or something close to it). If a company passes a revenue threshold and the European Commission deems the platform overly dominant, the Commission can dole out a maximum penalty of 10 percent of the company's total global turnover from the previous year. Given the enforcement “stick” this provides the governing body, perhaps Meta saw the writing on the wall and deemed the Instagram / Facebook cross-messaging feature not worth the risk.
This article originally appeared on Engadget at https://www.engadget.com/meta-is-disconnecting-messenger-and-instagram-chat-later-this-month-205956880.html?src=rss
We've seen Huawei's surprising strides with its recent smartphones, especially the in-house 7nm 5G processor within, but apparently the company has been working on something far more significant to bypass the US import ban. According to a new Bloomberg investigation, a Shenzhen city government investment fund created in 2019 has been helping Huawei build "a self-sufficient chip network."
Such a network would give the tech giant access to enterprises — most notably, the three subsidiaries under a firm called SiCarrier — that are key to developing lithography machines. Lithography equipment, especially the high-end extreme ultraviolet flavor, would usually have to be imported into China, but it's currently restricted by sanctions from the US, the Netherlands and Japan. Huawei apparently went as far as transferring "about a dozen patents to SiCarrier," as well as letting SiCarrier's elite engineers work directly on its sites, which suggests the two firms have a close symbiotic relationship.
Bloomberg's source claims that Huawei has hired several former employees of Dutch lithography specialist, ASML, to work on this breakthrough. The result so far is allegedly the 7nm HiSilicon Kirin 9000S processor fabricated locally by SMIC (Semiconductor Manufacturing International Corporation), which is said to be about five years behind the leading competition (say, Apple Silicon's 3nm process) — as opposed to an eight-year gap intended by the Biden administration's export ban.
Huawei's Mate 60, Mate 60 Pro, Mate 60 Pro+ and Mate X5 foldable all feature this HiSilicon chip, as well as other Chinese components like display panels (BOE), camera modules (OFILM) and batteries (Sunwoda). Huawei having its own network of local enterprises would eventually allow it to rely less on imported components, and potentially even become the halo of the Chinese chip industry — especially in the age of electric vehicles and AI, where more chips are needed than ever (as much as NVIDIA would like to deal with China). That said, Huawei apparently denied that it had been receiving government help to achieve this goal.
Given Huawei's seeming progress, and the fact that China has been pumping billions into its chip industry, the US government will just have to try harder.
This article originally appeared on Engadget at https://www.engadget.com/huawei-is-allegedly-building-a-self-sufficient-chip-network-using-state-investment-fund-051823202.html?src=rss
Google is rolling out a string of updates for the Messages app, including the ability to customize the colors of the text bubbles and backgrounds. So, if you really want to, you can have blue bubbles in your Android messaging app. You can have a different color for each chat, which could help prevent you from accidentally leaking a secret to family or friends.
With the help of on-device Google AI (meaning you'll likely need a recent Pixel device to use this feature), you can transform photos into reactions with Photomoji. All you need to do is pick a photo, decide which object (or person or animal) you'd like to turn into a Photomoji and hit the send button. These reactions will be saved for later use, and friends in the chat can use any Photomoji you send them as well.
The new Voice Moods feature allows you to apply one of nine different vibes to a voice message, by showing visual effects such as heart-eye emoji, fireballs (for when you're furious) and a party popper. Google says it has also upgraded the quality of voice messages by bumping up the bitrate and sampling rate.
In addition, there are more than 15 Screen Effects you can trigger by typing things like "It's snowing" or "I love you." These will make "your screen erupt in a symphony of colors and motion," Google says. Elsewhere, Messages will display animated effects when certain reactions and emoji are used.
Google
On top of all of that, users will now be able to set up a profile that appends their name and photo to their phone number, giving them more control over how they appear across Google services. The company says this feature could help when you receive a message from a phone number that isn't saved in your contacts, and it could help you know the identity of everyone in a group chat too.
Some of these features will be available in beta starting today in the latest version of Google Messages. Google notes that some feature availability will depend on market and device.
Google is rolling out these updates alongside the news that more than a billion people now use Google Messages with RCS enabled every month. RCS (Rich Communication Services) is a more feature-filled and secure format of messaging than SMS and MMS. It supports features such as read receipts, typing indicators, group chats and high-res media. Google also offers end-to-end encryption for one-on-one and group conversations via RCS.
For years, Google had been trying to get Apple to adopt RCS for improved interoperability between Android and iOS. Apple refused, perhaps because iMessage (and its blue bubbles) have long been a status symbol for its users. However, likely to ensure Apple falls in line with European Union regulations, Apple has relented. The company recently said it would start supporting RCS in 2024.
This article originally appeared on Engadget at https://www.engadget.com/google-messages-now-lets-you-choose-your-own-chat-bubble-colors-170042264.html?src=rss
Over the course of two months from its debut in November 2022, ChatGPT exploded in popularity, from niche online curio to 100 million monthly active users — the fastest user base growth in the history of the Internet. In less than a year, it has earned the backing of Silicon Valley’s biggest firms, and been shoehorned into myriad applications from academia and the arts to marketing, medicine, gaming and government.
In short, ChatGPT is just about everywhere. Few industries have remained untouched by the viral adoption of generative AI tools. On the first anniversary of its release, let’s take a look back on the year of ChatGPT that brought us here.
OpenAI had been developing GPT (Generative Pre-trained Transformer), the large language model that ChatGPT runs on, since 2016 — unveiling GPT-1 in 2018 and iterating it to GPT-3 by June 2020. With the November 30, 2022 release of GPT-3.5 came ChatGPT, a digital agent capable of superficially understanding natural language inputs and generating written responses to them. Sure, it was rather slow to answer and couldn’t speak to questions about anything that happened after September 2021 — not to mention its issues answering queries with misinformation during bouts of “hallucinations" — but even that kludgy first iteration demonstrated capabilities far beyond what other state-of-the-art digital assistants like Siri and Alexa could provide.
ChatGPT’s release timing couldn’t have been better. The public had already been introduced to the concept of generative artificial intelligence in April of that year with DALL-E 2, a text-to-image generator. DALL-E 2, along with Stable Diffusion, Midjourney and similar programs, offered an ideal low-barrier entry point for the general public to try out this revolutionary new technology. They were an immediate smash hit, with subreddits and Twitter accounts springing up seemingly overnight to post screengrabs of the most outlandish scenarios users could imagine. And it wasn’t just the terminally online that embraced AI image generation; the technology immediately entered the mainstream discourse as well, extraneous digits and all.
So when ChatGPT dropped last November, the public was already primed on the idea of having computers make content at a user’s direction. The logical leap from having it make pictures to having it make words wasn’t a large one; heck, people had already been using similar, inferior versions for years in their phones' digital assistants.
Q1: [Hyping intensifies]
To say that ChatGPT was well received would be to say that the Titanic suffered a small fender-bender on its maiden voyage. It was a polestar, drawing hype magnitudes bigger than anything surrounding DALL-E and other image generators, and people flat-out lost their minds over the new AI and OpenAI CEO Sam Altman. Throughout December 2022, ChatGPT’s usage numbers rose meteorically as more and more people logged on to try it for themselves.
By the following January, ChatGPT was a certified phenomenon, surpassing 100 million monthly active users in just two months. That was faster than both TikTok and Instagram, and remains the fastest user adoption to 100 million in the history of the internet.
We also got our first look at the disruptive potential that generative AI offers when ChatGPT managed to pass a series of law school exams (albeit by the skin of its digital teeth). Around the same time that January, Microsoft extended its existing R&D partnership with OpenAI to the tune of $10 billion. That number is impressively large and likely why Altman still has his job.
As February rolled around, ChatGPT’s user numbers continued to soar, surpassing one billion users total with an average of more than 35 million people using the program per day. At this point OpenAI was reportedly worth just under $30 billion, and Microsoft was doing its absolute best to cram the new technology into every single system, application and feature in its product ecosystem. ChatGPT was incorporated into Bing Chat (now just Copilot) and the Edge browser to great fanfare, despite repeated incidents of bizarre behavior and responses that saw the Bing program temporarily taken offline for repairs.
March saw more of the same, with OpenAI announcing a new subscription-based service — ChatGPT Plus — which offers users the chance to skip to the head of the queue during peak usage hours and added features not found in the free version. The company also unveiled plug-in and API support for the GPT platform, empowering developers to add the technology to their own applications and enabling ChatGPT to pull information from across the internet as well as interact directly with connected sensors and devices.
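To illustrate what that developer access amounts to in practice, here’s a minimal sketch of a chat completion request using the official openai Python package. The model name and prompt are placeholders, and the plug-in machinery that sits on top of calls like this isn’t shown.

```python
# Minimal sketch: a basic chat completion request against the GPT API,
# the kind of call third-party apps build on. Assumes the official openai
# package (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; use whichever model tier you have access to
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what ChatGPT plug-ins let developers do."},
    ],
)
print(response.choices[0].message.content)
```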
ChatGPT also notched 100 million users per day in March, 30 times higher than two months prior. Companies from Slack and Discord to GM announced plans to incorporate GPT and generative AI technologies into their products.
Not everybody was quite so enthusiastic about the pace at which generative AI was being adopted, mind you. In March, OpenAI co-founder Elon Musk, along with Steve Wozniak and a slew of AI researchers, signed an open letter demanding a six-month moratorium on AI development.
ChatGPT’s tendency to hallucinate facts and figures was exposed again that spring, when a lawyer in New York was caught using the generative AI to do “legal research.” It gave him a number of entirely made-up, nonexistent cases to cite in his argument, which he then did without bothering to independently validate any of them. The judge was not amused.
Select news outlets, on the other hand, proved far more amenable. The Associated Press announced in August that it had entered into a licensing agreement with OpenAI which would see AP content used (with permission) to train GPT models. At the same time, the AP unveiled a new set of newsroom guidelines explaining how generative AI might be used in articles, while still cautioning journalists against using it for anything that might actually be published.
ChatGPT itself didn’t seem too inclined to follow the rules. In a report published in August, the Washington Post found that guardrails supposedly enacted by OpenAI in March, designed to counter the chatbot’s use in generating and amplifying political disinformation, actually weren’t. The company told Semafor in April that it was "developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying." Per the Post, those rules simply were not enforced, with the system eagerly returning responses for prompts like “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.”
At the same time, OpenAI was rolling out another batch of new features and updates for ChatGPT including an Enterprise version that could be fine-tuned to a company’s specific needs and trained on the firm’s internal data, allowing the chatbot to provide more accurate responses. Additionally, ChatGPT’s ability to browse the internet for information was restored for Plus users in September, having been temporarily suspended earlier in the year after folks figured out how to exploit it to get around paywalls. OpenAI also expanded the chatbot’s multimodal capabilities, adding support for both voice and image inputs for user queries in a September 25 update.
Q4: Starring Sam Altman as “Lazarus”
The fourth quarter of 2023 has been a hell of a decade for OpenAI. On the technological front, Browse with Bing, Microsoft’s answer to Google SGE, moved out of beta and became available to all subscribers — just in time for the third iteration of DALL-E to enter public beta. Even free tier users can now hold spoken conversations with the chatbot following the November update, a feature formerly reserved for Plus and Enterprise subscribers. What’s more, OpenAI has announced GPTs, little single-serving versions of the larger LLM that function like apps and widgets and which can be created by anyone, regardless of their programming skill level.
The company has also suggested that it might enter the AI chip market at some point in the future, in an effort to shore up the speed and performance of its API services. OpenAI CEO Sam Altman had previously pointed to industry-wide GPU shortages to explain the service’s spotty performance. Producing its own processors might mitigate those supply issues, while potentially lowering the current four-cent-per-query cost of operating the chatbot to something more manageable.
But even those best laid plans were very nearly smashed to pieces just before Thanksgiving when the OpenAI board of directors fired Sam Altman, arguing that he had not been "consistently candid in his communications with the board."
That firing didn't take. Instead, it set off 72 hours of chaos within the company and the larger industry, with waves of recriminations and accusations, threats of resignation from the lion’s share of the staff and actual resignations by senior leadership happening by the hour. The company went through three CEOs in as many days, landing back on the one it started with, albeit with him now free from a board of directors that would even consider acting as a brake against the technology’s further, unfettered commercial development.
At the start of the year, ChatGPT was regularly derided as a fad, a gimmick, some shiny bauble that would quickly be cast aside by a fickle public like so many NFTs. Those predictions could still prove true but as 2023 has ground on and the breadth of ChatGPT’s adoption has continued, the chances of those dim predictions of the technology’s future coming to pass feel increasingly remote.
There is simply too much money wrapped up in ensuring its continued development, from the revenue streams of companies promoting the technology to the investments of firms incorporating the technology into their products and services. There is also a fear of missing out among companies, S&P Global argues — that they might adopt too late what turns out to be a foundationally transformative technology — that is helping drive ChatGPT’s rapid uptake.
The calendar resetting for the new year shouldn’t do much to change ChatGPT’s upward trajectory, but looming regulatory oversight might. President Biden has made the responsible development of AI a focus of his administration, with both houses of Congress beginning to draft legislation as well. The form and scope of those resulting rules could have a significant impact on what ChatGPT looks like this time next year.
This article originally appeared on Engadget at https://www.engadget.com/how-openais-chatgpt-has-changed-the-world-in-just-a-year-140050053.html?src=rss
Broadcom's mega $61 billion VMware acquisition has closed following considerable scrutiny by regulators, the company announced in a press release. With China recently granting approval for the acquisition with added restrictions, the network chip manufacturer had secured all the required approvals.
"Broadcom has received legal merger clearance in Australia, Brazil, Canada, China, the European Union, Israel, Japan, South Africa, South Korea, Taiwan, the United Kingdom, and foreign investment control clearance in all necessary jurisdictions," the company said. "We are excited to welcome VMware to Broadcom and bring together our engineering-first, innovation-centric teams."
The Broadcom/VMware deal lacked the glamour of tech's other mega acquisition involving Microsoft and Activision. However, San Jose-based Broadcom's products form the structure of much of the internet, as they're widely used for data centers, cloud providers and network infrastructure. VMware, meanwhile, makes virtualization and cloud computing software that allows corporations to safely link local networks with public cloud access.
That made VMware a logical target for Broadcom, but it also placed the acquisition in the crosshairs of regulators in multiple regions. The European Commission, for one, was concerned that Broadcom could harm competition by limiting interoperability between rival hardware and VMware's server virtualization software. It also worried the company could either prevent or degrade access to VMware's software, or bundle VMware with its own hardware products.
Broadcom gained EU approval for the deal in the summer though, mainly by providing IP access and source code for key network fiber optic components to its main rival, Marvell. The EU also concluded that fears of VMware bundling were unfounded and that Broadcom would still face competition in the storage adapter and NIC markets.
There were also concerns that tensions between China and the US could scuttle the deal, after the Biden administration announced new rules in October making it harder to export high-end chips to China. However, approval in that market was announced yesterday, with conditions imposed by China on how Broadcom sells products locally. Namely, it had to ensure that VMware's server software was interoperable with rival hardware, China's regulator said in a statement.
This article originally appeared on Engadget at https://www.engadget.com/broadcom-closes-its-61-billion-megadeal-with-vmware-083915996.html?src=rss
OpenAI introduced voice chats with ChatGPT on Android and iOS back in September, giving users the option to have actual back-and-forth conversations with the chatbot if they want to. The company only made the feature available to Plus and Enterprise subscribers at the time, though, with the promise that it would eventually release it to other groups of users. Now, OpenAI co-founder Greg Brockman has announced on X that voice conversations on ChatGPT have started rolling out to all free users on mobile.
ChatGPT Voice rolled out for all free users. Give it a try — totally changes the ChatGPT experience: https://t.co/DgzqLlDNYF
When the company first introduced voice chats, it admitted that the capability to create "realistic synthetic voices from just a few seconds of real speech" presents new risks. It could, for instance, allow bad actors to impersonate public figures or anybody they want. As a result, it decided that ChatGPT's voice feature would focus on conversations. It's powered by a text-to-speech model that can generate "human-like audio from just text and a few seconds of sample speech." OpenAI worked with voice actors to create the capability and offers five different voices to choose from.
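The in-app feature isn't something you call directly, but OpenAI does expose a text-to-speech endpoint in its developer API that works along similar lines. Here's a minimal sketch, assuming the official openai Python package and an API key; the voice and output filename are placeholders and may not match the voices offered inside the ChatGPT app.

```python
# Minimal sketch: generating spoken audio from text via OpenAI's TTS endpoint.
# Assumes the official openai package and an OPENAI_API_KEY in the environment.
# The "alloy" voice and the output filename are placeholders.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Voice conversations are now rolling out to free ChatGPT users.",
)
speech.stream_to_file("voice_sample.mp3")  # writes the generated MP3 to disk
```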
We checked our ChatGPT app on Android and have yet to gain access to voice conversations, which indicates that the feature could take some time to reach everybody's accounts. It's not quite clear if users have to opt in to be able to access it, but paid subscribers had to enable it by going to Settings and then to New Features when voice chats first rolled out.
Brockman announced the capability's wide release after he had already left his seat as President of OpenAI. He quit of his own accord after the company's board fired Sam Altman as CEO, causing mayhem as senior staff members resigned in protest and the rest of the employees threatened to quit unless Altman was reinstated. Shortly after Brockman made the announcement, OpenAI said that Altman and Brockman had been reinstated and would be returning to their posts.
This article originally appeared on Engadget at https://www.engadget.com/chatgpts-voice-chat-feature-is-rolling-out-to-free-users-085549323.html?src=rss
Good news for hardcore Neon Genesis Evangelion fans who spent $700 (or more) on ASUS' special edition motherboard! The PC maker announced that it will be offering a free fix for the embarrassing typo — "EVANGENLION" instead of "EVANGELION" — on the ROG Maximus Z790 Hero EVA-02 Edition. This will come in the form of a replacement part printed with the correct spelling, so users can directly swap out the original decorative piece. To show that the company understands "the significance of this matter," it's also extending the warranty by one year, even though "the misprint is purely aesthetic and does not affect any functionality or performance."
Meanwhile, the offending typo has already disappeared from ASUS' website, but you can still spot the extra "n" in the original product shots on Amazon and Micro Center.
This article originally appeared on Engadget at https://www.engadget.com/asus-offers-free-fix-for-evangelion-typo-on-motherboard-020129844.html?src=rss