XLR microphone owners, rejoice: Shure's successor to the MVX2U Digital Audio Interface (DAI) brings the adapter into the mobile era. The compact MVX2U Gen 2 adds more advanced onboard audio processing. But its most notable upgrade is mobile device compatibility, so you're no longer limited to desktop use.
The single-channel MVX2U Gen 2 provides up to +60dB of gain and 48V phantom power. On the processing front, the adapter includes an auto-level mode. There’s also a real-time denoiser to minimize background noise and a pop filter ("Popper Stopper" in Shure marketing-speak) to soften those harsh plosive sounds. Shure says the new model improves the noise floor, headphone amp and DAC.
The adapter stores your last-used audio processing settings. That way, you don't have to worry about your settings getting wiped out when switching between devices or software.
The Shure MVX2U Gen 2 includes a 1m (3.3 ft) USB-C to USB-C cable, so you can get it up and running quickly on your phone, tablet or PC. The adapter is now available for $139 from the company website.
This article originally appeared on Engadget at https://www.engadget.com/audio/shures-next-gen-dai-adds-mobile-support-140000205.html?src=rss
Lenovo is updating its business-focused laptop lineup at MWC 2026. The best-selling ThinkPad T-series is getting a full refresh, and there's an updated ThinkBook 2-in-1 and an all-new Android tablet.
The ThinkPad T-Series, the backbone of Lenovo's business PC lineup, now (optionally) ships with a 5MP camera that supports computer vision and vHDR. The 2026 versions of the laptops have larger speakers and a new color (“cosmic blue”) on some models.
The ThinkPad T14 Gen 7 and ThinkPad T16 Gen 5 (each starting at $1,799) are the all-around workhorses of the ThinkPad lineup. Lenovo touts the 2026 models' 10/10 iFixit repairability score. They ship with either an Intel Core Ultra Series 3 (with Intel vPro) or an AMD Ryzen AI Pro 400 Series processor.
ThinkPad T14s Gen 7
Lenovo
Meanwhile, the T14s Gen 7 ($1,899+) is an even lighter version of Lenovo's slim ThinkPad variant. The 2026 model weighs 2.45 lbs (1.1kg), making it the lightest T-series model to date. The T14s Gen 7 is powered by Intel Core Ultra Series 3 (with Intel vPro) or AMD Ryzen AI Pro 400 Series processors.
Rounding out the T-Series is the second generation of the 360-degree-folding ThinkPad T14s 2-in-1. The 2026 model ($1,849+) is ever-so-slightly lighter than its predecessor, now weighing in at 3.06 lbs (1.39 kg). The new version includes a garaged pen, with its storage slot living above the screen.
ThinkPad X13 Detachable
Lenovo
The ThinkPad X13 Detachable is the lineup's take on the Surface Pro. The tablet has Intel Core Ultra Series 3 processors and up to 64GB of RAM. Its 13-inch display supports up to 500 nits of brightness. It has a pair of Thunderbolt 4 ports, and its keyboard has full-sized keys with 1.5mm of travel. It ships with a "full-size ergonomic pen" that you can stash (and charge!) in a dedicated slot on the keyboard. The X13 Detachable starts at $1,999.
The $499 ThinkTab X11 is a rugged Android tablet for industrial environments. Powered by the Snapdragon 7s Gen 3 Mobile Platform, it has a 10.95-inch display with 2,560 x 1,600 resolution and 600 nits of brightness. It's MIL-STD-810H certified, meaning it passes stringent military testing for durability.
ThinkTab X11
Lenovo
Finally, there's the ThinkBook 14 2-in-1 Gen 6 ($1,754+). This Yoga-like folding device has a 14-inch WUXGA touch display. It runs on an Intel Core Ultra 7 (Series 3) processor and supports up to 32GB of RAM.
Most of the devices start shipping in Q2 2026. (That includes the ThinkPad T14, T16, T14s, T14s 2-in-1, ThinkTab X11 and ThinkBook 14 2-in-1.) The lone exception is the ThinkPad X13 Detachable, which is slated for Q3 2026. You can learn more about the new business-focused devices on Lenovo's website.
This article originally appeared on Engadget at https://www.engadget.com/computing/laptops/lenovos-thinkpads-get-a-spec-bump-at-mwc-2026-230100419.html?src=rss
Apple's mobile devices are secure enough for NATO. Following extensive testing by the German government, the iPhone and iPad are now approved for use at the NATO Restricted classification level.
Germany's Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, or BSI) tested the devices. BSI first approved the iPhone and iPad for governmental use by German authorities in 2022. To take the additional step of NATO approval, Apple says BSI conducted exhaustive technical assessments, comprehensive testing and deep security analysis.
Unless you work for NATO, this won't mean a thing to you. But at least it appears to bolster some of Apple's marketing claims about security. (As for its privacy claims, well, that depends on which kind you mean.) Apple's press release emphasized that these are the first consumer devices to receive the certification, and they did so without any special software or settings. It applies to iPhones and iPads running iOS 26 and iPadOS 26, respectively.
"Secure digital transformation is only successful if information security is considered from the beginning in the development of mobile products," BSI president Claudia Plattner is quoted as saying in Apple's press release. "Expanding on BSI's rigorous audit of iOS and iPadOS platform and device security for use in classified German information environments, we are pleased to confirm the compliance under NATO nations' assurance requirements."
This article originally appeared on Engadget at https://www.engadget.com/mobile/nato-approves-the-iphone-and-ipad-for-classified-use-200857276.html?src=rss
Burger King, the chain that leans into creepy when others don't dare, is at it again. The Verge reported on Thursday that the company is rolling out a new voice-controlled AI chatbot for its workers. That may sound like business as usual in 2026, but this assistant doesn't just help with meal prep and monitor inventory. It also has an unsettling habit of surveilling employees' voices for "friendliness."
The voice-controlled chatbot will live inside employees' headsets. The company said the AI is trained to recognize when its low-paid workers utter phrases like "welcome to Burger King," "please" and "thank you." Managers can then keep tabs on their location's "friendliness" performance.
"This is meant to be a coaching tool," Thibault Roux, Burger King's chief digital officer, told The Verge. However, he added that the company is also "iterating" the system to detect tone in conversations. Is there a chatbot that can warn Burger King executives about off-putting ideas?
Burger King retired its Creepy King mascot in 2025.
Burger King / YouTube (Commercial Ads)
The OpenAI-powered assistant's other duties sound potentially useful (and decidedly less creepy). It can answer workers' meal prep questions, like how many strips of bacon to put on burgers or instructions for cleaning the shake machine. It's also integrated into the chain's point-of-sale system, so it can tell managers when items are out of stock or machines are down.
The "Patty" chatbot is part of a broader BK Assistant platform the company is launching, which will roll out to all US locations by the end of 2026. Meanwhile, the "restaurant maintenance with a side of mass surveillance" chatbot itself is currently being piloted in 500 restaurants.
This article originally appeared on Engadget at https://www.engadget.com/ai/burger-king-will-use-ai-to-monitor-employee-friendliness-173349148.html?src=rss
Ubisoft's shakeups continue unabated. The creative director of the next Assassin's Creed game, codenamed Hexe, has left the company. The departure of Clint Hocking, a 20-year veteran of the company over two stints, was reportedly announced in a staff meeting this week.
Hocking's resume at Ubisoft included serving as creative director on Splinter Cell: Chaos Theory, Far Cry 2 and Watch Dogs: Legion. The details of why he's leaving the company haven't been reported.
Ubisoft told VGC, which first reported on Hocking's exit, that development on Hexe will continue. Jean Guesdon, one of three new leaders of the Assassin's Creed franchise, will take over as the upcoming title's new creative director. Guesdon had the same role for Assassin's Creed Origins and Black Flag, two of the franchise's most well-received entries.
To say sailing hasn't been smooth of late at Ubisoft would be an understatement. Last year, the company reorganized its corporate structure under a system of "creative houses." The first, Vantage Studios, is partly owned by Tencent and now oversees Assassin's Creed. Then in October, franchise head Marc-Alexis Côté left the company. He later claimed he was "asked to step aside" and is suing his former employer.
But have no fear; some aspects of the company are doing quite well. Take, for example, nepotism. The future is looking bright indeed for a rising company star who is now co-CEO of Vantage Studios. That title belongs to Charlie Guillemot, the son of Ubisoft CEO Yves Guillemot.
This article originally appeared on Engadget at https://www.engadget.com/gaming/the-next-assassins-creed-game-loses-its-creative-director-210119005.html?src=rss
Two stories about Claude maker Anthropic broke on Tuesday that, taken together, arguably paint a chilling picture. First, US Defense Secretary Pete Hegseth is reportedly pressuring Anthropic to drop its AI safeguards and give the military unrestrained access to its Claude AI chatbot. Anthropic then chose the same day the Hegseth news broke to walk back its centerpiece safety pledge.
On Tuesday, Anthropic said it was modifying its Responsible Scaling Policy (RSP) to lower safety guardrails. Up until now, the company's core pledge has been to stop training new AI models unless specific safety guidelines can be guaranteed in advance. This policy, which set hard tripwires to halt development, was a big part of Anthropic's pitch to businesses and consumers.
“Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not,” Anthropic wrote. Now, its updated policy treats safety as a relative judgment rather than a set of strict red lines.
Anthropic's quotes in an interview with Time sound reasonable enough in a vacuum. "We felt that it wouldn't actually help anyone for us to stop training AI models," Jared Kaplan, Anthropic's chief science officer, told Time. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead."
Anthropic CEO Dario Amodei (Photo by David Dee Delgado/Getty Images for The New York Times)
David Dee Delgado via Getty Images
But you could also read those quotes as the latest example of a hot startup’s ethics becoming grayer as its valuation rises. (Remember Google’s old “Don’t be evil” mantra that it later removed from its code of conduct?) The latest versions of Claude have drawn widespread praise, especially in coding. In February, Anthropic raised $30 billion in new investments. It now has a valuation of $380 billion. (Speaking of the competition Kaplan referred to, rival OpenAI is currently valued at over $850 billion.)
In place of its previous tripwires, Anthropic will implement new "Risk Reports" and "Frontier Safety Roadmaps." These disclosure mechanisms are designed to give the public transparency where the hard lines in the sand used to be.
Anthropic says the change was motivated by a "collective action problem" stemming from the competitive AI landscape and the US's anti-regulatory approach. "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe," the new RSP reads. "The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit."
Defense Secretary Pete Hegseth (Photo by AAron Ontiveroz/The Denver Post)
AAron Ontiveroz via Getty Images
Neither Anthropic's announcement nor the Time exclusive mentions the elephant in the room: the Pentagon's pressure campaign. On Tuesday, Axios reported that Hegseth told Anthropic CEO Dario Amodei that the company has until Friday to give the military unfettered access to its AI model or face penalties. The company has reportedly offered to adapt its usage policies for the Pentagon. However, it wouldn't allow its model to be used for the mass surveillance of Americans or for weapons that fire without human involvement.
If Anthropic doesn't relent, experts say its best bet would be legal action. But will the Pentagon's proposed penalties be enough to scare a profit-driven startup into compliance? Hegseth's threats reportedly include invoking the Defense Production Act, which gives the president authority to direct private companies to prioritize certain contracts in the name of national defense. The military could also sever its contract with Anthropic and designate it as a supply chain risk. That would force other companies working with the Pentagon to certify that Claude isn't included in their workflows.
Claude is the only AI model currently used for the military's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now,” a defense official told Axios. “The problem for these guys is they are that good." Claude was reportedly used in the Maduro raid in Venezuela, a topic Amodei is said to have raised with Anthropic's partner Palantir.
Time's story about the new RSP included reactions from a nonprofit director focused on AI risks. Chris Painter, director of METR, described the changes as both understandable and perhaps an ill omen. "I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps," he said. However, he also raised concerns that the more flexible RSP could lead to a "frog-boiling" effect. In other words, when safety becomes a gray area, a seemingly never-ending series of rationalizations could take the company down the very dark path it once condemned.
Painter said the new RSP shows that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities. This is more evidence that society is not prepared for the potential catastrophic risks posed by AI."
This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-weakens-its-safety-pledge-in-the-wake-of-the-pentagons-pressure-campaign-183436413.html?src=rss
Uber is one step closer to going airborne. On Wednesday, the company previewed its air taxi booking service ahead of an expected launch in Dubai later this year. The inaugural Uber Air program will let travelers book Joby Aviation's electric air taxis through a familiar process in the Uber app.
The experience of booking an air taxi will be much like reserving a four-wheeled Uber. In the app, after entering your destination, Uber Air will appear as an option for eligible routes. The Uber app will book a flight and an Uber Black to pick you up and drop you off at a Joby "vertiport."
The process of booking a flying taxi will be instantly familiar.
Uber
Joby's air taxis, built exclusively for city travel, can accommodate up to four passengers and luggage. (Uber says size and weight guidelines will be announced closer to launch.) The interior is about the size of an SUV and has "comfortable seating" with panoramic windows. They can travel up to 200 mph and have a range of up to 100 miles. Four battery packs and a triple-redundant flight computer are onboard for safety purposes.
The air taxis aren't (yet) autonomous and will each have a human pilot onboard. That would at least suggest high prices. After all, pilots aren't nearly as cheap as Uber's legion of independent-contractor drivers. But the company insists its air taxi rides will somehow cost about the same as an Uber Black trip.
Joby's air taxis have "panoramic" windows with a view of the city below.
Joby
Dubai is only the beginning of the companies’ plans. The US-based Joby says it's in the final stage of FAA type certification and hopes to launch service in New York and Los Angeles. Globally, it's targeting the UK and Japan as well.
As for how realistic a US launch is anytime soon, well, that's up for debate. On one hand, President Trump signed executive orders last year that would create a pilot program to test such aircraft. But safety and cost considerations may require a grounding of expectations.
The aircraft requires a human pilot, at least in these early stages.
Joby
In November, Robert Ditchey, a Los Angeles-based aviation expert and test pilot, told NBC News that he didn't think air taxi service "was ever going to happen" in American cities. "They're dangerous," he warned. "We have had helicopters fail and crash on top of buildings in Los Angeles. We've had helicopters fail at takeoff and landing in airports. They're dangerous not from a fire point of view but in terms of landing on top of people and buildings." In addition, he warned that air taxis can't be developed in sufficient numbers to make them economically viable "unless they are subsidized by a government."
Uber and Joby have partnered since 2019. In 2021, Joby bought the Uber Elevate ride-hailing division, which essentially integrated the companies’ services. Last year, Joby acquired Blade Air Mobility's passenger business, which could open the door to eventually electrifying Blade's routes.
The video below shows one of Joby’s air taxis taking a test flight in Dubai.
This article originally appeared on Engadget at https://www.engadget.com/transportation/uber-previews-its-dubai-air-taxi-service-130000603.html?src=rss
A common theme in online age verification laws is the tension between user privacy and preventing children from accessing harmful or inappropriate content. Now the UK is sending a not-so-subtle message to Reddit on the subject, to the tune of £14.5m ($19.6 million). The nation's Information Commissioner's Office (ICO) accused the company of misusing children's data and potentially exposing them to inappropriate content.
“Children under 13 had their personal information collected and used in ways they could not understand, consent to or control,” UK Information Commissioner John Edwards wrote in a statement. “That left them potentially exposed to content they should not have seen. This is unacceptable and has resulted in today’s fine.”
In July 2025, Reddit began requiring age verification to access adult content in the UK, in compliance with the Online Safety Act. However, that's only used to block under-18 users from sexually explicit, violent or other mature posts. The platform also prohibits users under 13 from accessing it altogether — and enforcement of that policy is lax. It merely requires users to declare, when signing up, that they're over 13. The ICO (accurately) described the method as "easy to bypass."
In its defense, Reddit told the BBC that it "didn't require users to share information about their identities, regardless of age, because we are deeply committed to their privacy and safety." The company said it would appeal the decision. "The ICO's insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users' online privacy and safety," the spokesperson added.
"It's concerning that a company the size of Reddit failed in its legal duty to protect the personal information of UK children," Edwards said. "Companies operating online services likely to be accessed by children have a responsibility to protect those children by ensuring they’re not exposed to risks through the way their data is used. To do this, they need to be confident they know the age of their users and have appropriate, effective age assurance measures in place.”
“Reddit failed to meet these expectations,” he added. “They must do better, and we are continuing to consider the age assurance controls now implemented by the platform.” The ICO also accused Reddit of failing to conduct a data protection impact assessment by January 2025.
The Guardian notes that the £14.5m fine is the third-largest handed down by the ICO. It trails only a £20m fine for British Airways involving a data breach disclosure and an £18.4m penalty for Marriott Hotels for exposing over 300 million customer records in a hack.
This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-fined-196-million-over-age-verification-checks-in-the-uk-173705048.html?src=rss
YouTube's "Ask" button is making its way to the living room. The Gemini-powered feature is now rolling out as an experiment on smart TVs, gaming consoles and streaming devices. 9to5Google first spotted a Google support page announcing the change.
Like on mobile devices and desktop, the feature is essentially a Gemini chatbot trained on each video's content. Selecting that "Ask" button will bring up a series of canned prompts related to the content. Alternatively, you can use your microphone to ask questions about it in your own words.
The "Ask about this video" feature on desktop
YouTube
Google says your TV remote's microphone button (if it has one) will also activate the “Ask” feature. The company listed sample questions in its announcement, such as "what ingredients are they using for this recipe?" and "what's the story behind this song's lyrics?"
The conversational AI tool is only launching for "a small group of users" at first. Google promises that it will "keep everyone up to speed on any future expansions."
This article originally appeared on Engadget at https://www.engadget.com/ai/youtube-is-bringing-the-gemini-powered-ask-button-to-tvs-173900295.html?src=rss
Nevada is taking action against the rapidly growing Wild West of prediction markets. The state's gambling regulators and attorney general sued Kalshi on Tuesday. They accuse the company of bypassing Nevada law by operating a sports gambling market without proper licenses. In addition, they say Kalshi provides services to individuals under 21, which violates state law.
The lawsuit follows a federal appeals court’s rejection of Kalshi's request to prevent the state from pursuing legal action. And it comes a day after the Trump administration claimed that only the federal government has the right to regulate the industry.
Prediction markets, which allow users to bet on events such as sports, political outcomes and wars, have exploded in popularity. Business Insider reports that Kalshi did 27 times as much business during this year's Super Bowl as last year's. Some of that growth has been at the expense of regulated gambling; Nevada's gambling operations did less business during this year's game.
"Kalshi has continued to dramatically expand its business, rather than attempting to maintain any kind of status quo," Nevada regulators wrote in a letter this month.
Kalshi and rival Polymarket insist that their businesses are "event contracts" and should be regulated as financial investments rather than gambling. The Trump administration, rife with conflicts of interest in this area, agrees. The Chair of the Commodity Futures Trading Commission (CFTC) filed an amicus brief on Tuesday, claiming that the agency alone has the authority to regulate prediction markets.
"The CFTC will no longer sit idly by while overzealous state governments undermine the agency's exclusive jurisdiction over these markets by seeking to establish statewide prohibitions on these exciting products," CFTC Chair Michael Selig wrote in a Wall Street Journal op-ed.
Donald Trump Jr. (Photo by Olivier Touron / AFP via Getty Images)
OLIVIER TOURON via Getty Images
Not coincidentally, prediction markets are a growing part of the Trump family business. Donald Trump Jr. is a paid adviser to Kalshi. He's also an investor in and unpaid adviser to Polymarket. In January, his family's social media business said it would launch its own prediction market platform.
Prediction markets have the potential to be a hotbed of insider trading. According to blockchain analyst DeFi Oasis, fewer than 0.04 percent of Polymarket accounts have captured over 70 percent of the platform's total profits, totaling over $3.7 billion.
Last month, The Guardian highlighted the case of a Polymarket user who bet tens of thousands of dollars on "yes" to the question, "Israel's military action against Iran by Friday?" Within 24 hours, Israel bombed Iran, leaving hundreds dead. The user made $128,000 on that bet. The Guardian traced the blockchain data to a wallet associated with an X account. Its location on the social platform was set to Beit Ha'shita, a northern Israeli settlement. The user later transferred their bets to two other accounts, apparently to avoid detection. In January, the accounts held 10 live bets on Israeli military strategy.
Another anonymous user made over $400,000 by betting that Nicolás Maduro would be ousted by the end of January. The bets were placed in the hours and days leading up to the US strikes on Venezuela. In another case, eight jointly owned accounts collectively generated over $161,000 by betting on the country's María Corina Machado Parisca winning the Nobel Peace Prize. The accounts' handles used names such as "fmaduro," "madurowilllose," "striketheboats" and "trumpdeservesit".
This article originally appeared on Engadget at https://www.engadget.com/big-tech/nevada-sues-kalshi-for-operating-a-sports-gambling-market-without-a-license-175721982.html?src=rss