TikTok just announced a couple of updates that should make the app a bit more social. There's something called Shared Feed, which is exactly what it sounds like. It's a feed that friends and family can watch together, albeit at different times.
This feed is shared via direct messaging and pulls up content relevant to everyone involved in the chat. TikTok says this is a "new way to discover content together." It consists of a daily curated selection of 15 videos generated based on the participants' TikTok activity.
These feeds are shared via invitation and the participants can leave the chat at any time. There's also a new dashboard that lets viewers check out their Shared Like history and other metrics. The Shared Feed tool rolls out globally in the coming months. It sounds similar to something Instagram began offering earlier this year. Instagram is typically the one copying TikTok, so this is a nice change of pace.
TikTok
TikTok has also announced something called Shared Collections. This is like the aforementioned Shared Feed, but for saved content. The tool lets users collect, organize and share groups of videos, with TikTok noting that people could use it to share reading lists, local restaurants to try and, of course, products to buy.
All you have to do is save a video, create a Shared Collection and send that list to someone else via direct message. Users must follow one another to access one of these lists. The tool is available globally right now to folks over the age of 16.
Finally, TikTok is rolling out themed holiday cards that can be sent in direct messages. They will be available globally later this month.
Rivian is about to give the public and its investors another taste of its future with an event focused on autonomy and AI on December 11. The company's Autonomy and AI day starts at 12PM ET. You can watch the event via the Rivian website. We'll be liveblogging the Autonomy and AI day right here on Engadget, so we'll be recapping the major news as it happens and sharing our reactions.
As for what to expect, the name of the event clearly indicates that Rivian will be talking about autonomous operation of its vehicles. RivianTrackr speculates, quite reasonably, that the company will share more about what CEO RJ Scaringe has referred to as a Universal Hands Free feature. Scaringe recently said he'd spent two hours traveling around Palo Alto, California, in a second-gen Rivian R1 with the vehicle taking care of everything by itself. It stands to reason that Rivian will at least offer up a demo of Universal Hands Free ahead of the company’s more affordable R2 model making its debut in 2026.
Earlier this year, Rivian said that, for 2026, "a hands-off/eyes-off feature is planned for controlled conditions with our current Gen 2 vehicles." So, this Autonomy and AI day seems as good an opportunity as any for the company to share more details about that. When Rivian unveiled the first-generation R1T and R1S in 2018, it said those would support Level 3 autonomy, allowing the driver to take their hands off the wheel and eyes off the road for short spells on the freeway.
RivianTrackr also suggests that we may hear more about Rivian's sensor strategy as well as its AI and fleet-learning initiatives. The company may offer up a more detailed autonomy roadmap as well. However, the publication suggests Rivian isn't quite ready to announce rollout details or firm pricing for full hands-off driving features.
Meta will soon allow Facebook and Instagram users in the European Union to choose to share less data and see less personalized advertising on its platforms, the European Commission announced. The change will begin to roll out in January, according to the regulator.
"This is the first time that such a choice is offered on Meta's social networks," the commission said in a statement. "Meta will give users the effective choice between: consenting to share all their data and seeing fully personalised advertising, and opting to share less personal data for an experience with more limited personalised advertising."
The move from Meta comes after the European Commission fined the company €200 million over its ad-free subscription plans in the EU, which the regulator deemed "consent or pay." Meta began offering ad-free subscriptions to EU users in 2023, and later lowered the price of the plans in response to criticism from the commission. Those plans haven't been very popular, however, with one Meta executive admitting earlier this year that there's been "very little interest" from users.
In a statement, a Meta spokesperson said that "we acknowledge" the European Commission's statement. "Personalized ads are vital for Europe’s economy — last year, Meta’s ads were linked to €213 billion in economic activity and supported 1.44 million jobs across the EU."
The Dyson x Porter OnTrac Limited Edition collaboration arrives as a pointed departure from typical brand partnerships. Rather than applying co-branded graphics to existing products, this project positions two objects as components of a single system built around commuter behavior. The headphones and bag share materials, color logic, and ergonomic intent. They function as a kit, not a bundle. The production run is limited to 380 individually numbered sets distributed through select retail locations in Japan and China, plus official online channels.
Designer: Dyson x Porter
Porter, the accessories division of Yoshida & Co., approaches its 90th anniversary with a history rooted in textile construction and hardware refinement. Dyson enters audio as an engineering house known for motors, airflow systems, and computational design. The collaboration required both parties to subordinate individual brand language to a shared design constraint. The scarcity is intentional. This is not a mass market recommendation. It is a design artifact that demonstrates what becomes possible when two craft traditions converge on a single behavioral problem.
Collaboration Context
Porter operates under Yoshida & Co., a Japanese company founded in 1935. The brand built its reputation on hand construction, obsessive material selection, and a visual language drawn from military surplus, particularly the MA-1 flight jacket. Porter bags are assembled by hand in Japan, often incorporating dozens of discrete components into a single product. The 90th anniversary celebration, designated Project 006, called for a collaboration that would extend Porter’s construction philosophy into new territory.
Dyson’s audio division emerged more recently with the Zone headphones in 2022, combining noise cancellation with air purification in an ambitious but polarizing form factor. OnTrac followed as a more focused over-ear design, retaining Dyson’s emphasis on driver quality, noise isolation, and extended battery performance. Jake Dyson, chief engineer and son of founder James Dyson, supervised the Porter collaboration.
Both companies ceded ground to produce objects that read as parts of a single system rather than co-branded accessories. Porter’s expertise in understanding how objects move with the body informed Dyson’s thinking about where headphones rest when not in use.
Headphones as Object One
The OnTrac headphones in this collaboration begin with Dyson’s existing flagship architecture. The cups use angled geometry that exposes machined aluminum surfaces and microfiber cushions. What distinguishes this edition is the outer cap treatment. Custom panels carry the Porter logo, and the color blocking shifts to navy, green, and orange, tones drawn directly from the MA-1 flight jacket vocabulary that has defined Porter’s aesthetic for decades. The palette establishes visual continuity with the bag.
The driver assembly uses 40 millimeter neodymium transducers with 16 ohm impedance, spanning a frequency response from 6 Hz to 21 kHz. Eight microphones power the active noise cancellation system, capable of reducing ambient sound by up to 40 dB. Battery life extends to 55 hours with ANC engaged. USB-C fast charging restores usable runtime quickly. Bluetooth 5.0 handles connectivity, and the MyDyson app provides listening mode control and voice assistant integration. These specifications remain unchanged from the standard OnTrac.
The weight sits at approximately 0.45 kg, a figure that exceeds many competitors as a consequence of Dyson’s aluminum construction and driver housing decisions. The cushion geometry distributes pressure across a wider contact area, and the microfiber surface reduces heat buildup during extended sessions. The comfort profile favors long commutes over lightweight portability. The headphones are designed to be worn for hours, not minutes.
The industrial aesthetic leans toward precision equipment rather than consumer electronics. Exposed metal, visible fasteners, and functional geometry communicate that these headphones prioritize engineering integrity over lifestyle signaling. The joystick controls on the right cup allow volume adjustment, track navigation, and mode switching without reaching for a phone.
Bag as Object Two
Porter’s contribution is a shoulder bag engineered specifically around headphone storage and deployment. The design is not a general purpose satchel with a headphone pocket added as an afterthought. The entire geometry responds to a single question: how does a commuter remove, wear, and store over-ear audio equipment with minimal friction? The construction involves 77 discrete components, each cut and stitched by hand in Japan.
The outer shell uses water-repellent nylon with abrasion-resistant weave, a material choice that protects against rain, scuffs, and the wear patterns of daily transit. Interior compartments accommodate the standard commuter loadout: phone, wallet, tablet, small camera, cables. Pockets are sized and positioned to prevent shifting during movement. The signature detail is the dedicated headphone loop integrated into the shoulder strap. When the headphones are not in use, they hang from this loop in a stable, accessible position at chest height. The strap itself employs Porter’s Carrying Equipment Strap mechanism, allowing one-handed length adjustment through a quick-pull system. This ergonomic decision accommodates different body types and carry positions without requiring two-handed manipulation.
The color story extends throughout the bag. The body is navy. The zipper tape is bright orange. Interior lining and webbing introduce green and khaki accents.
Every material surface echoes the headphone palette, creating a unified visual identity even when the two objects are separated. The bag was designed with the headphones’ 0.45 kg mass already calculated into its geometry, ensuring weight distribution remains balanced during movement.
System Integration
The value of this collaboration lies in the integrated ritual it enables. A commuter leaves home with headphones docked on the shoulder strap loop. The loop holds them securely against the bag body, eliminating swing and bounce during movement. On the platform, a single motion lifts the headphones from the loop to the ears and activates ANC. At the destination, the headphones return to the loop without opening the bag or searching for a case.
The strap adjustment system allows the bag to shift position for crowded trains or escalator navigation. The Porter logo on the headphone caps and the Dyson branding on the bag interior reinforce system identity through consistent placement and scale.
Design System Comparison
Design Element | OnTrac Headphones | Porter Shoulder Bag
Primary function | High-fidelity audio with active noise cancellation optimized for commuting | Compact daily carry satchel engineered around headphone storage and quick access
Color palette | Navy headband and shells, green cushions, orange accent stitching | Navy exterior, orange zipper tape, green webbing accents, khaki interior
Heritage reference | MA-1 flight jacket palette adapted to audio hardware | MA-1 flight jacket palette extended to bag construction
Signature feature | Porter-branded outer caps with co-branded engraving | Integrated headphone loop on shoulder strap, one-pull length adjustment
System role | Audio delivery and noise isolation during transit | Storage, transport, and quick-access docking for headphones and daily essentials
Limited Edition Context
Production caps at 380 individually numbered sets. Each unit ships with a tech slice: a resin block containing frozen development components suspended like specimens. A steel aircraft-wire loop attaches this artifact to the bag. The tech slice serves no functional purpose. Its presence signals that this collaboration values process documentation as much as finished product. Pricing varies by region, with Japanese retail at ¥118,690, UK pricing at £649.99, and North American pricing in the $700 to $1,000 range depending on import and distribution variables.
This represents a significant premium over the standard OnTrac, which retails around $500. The delta purchases the Porter bag, the limited numbering, the tech slice, and the scarcity itself. Distribution is restricted to select Dyson and Porter retail locations in Japan and China, plus official online stores. The 380-unit cap ensures that most interested buyers will not acquire a set.
The collaboration positions itself as a design artifact rather than a mass-market commuter recommendation. This distinction matters. The limited production run is not a marketing tactic to generate urgency. It reflects the reality that hand-built Porter bags cannot scale beyond a certain output without compromising construction quality. The collaboration accepts that constraint rather than working around it.
The numbered tag and tech slice transform the set into a collector’s object, extending both companies’ internal prototype cultures outward to buyers.
Design Value and Trade-Offs
The integrated carry solves a genuine friction point in commuter life. Over-ear headphones are awkward to store and deploy in transit. The strap loop addresses this problem directly. Material quality on both objects meets expectations for premium products. The Porter bag’s hand construction and weather resistance exceed typical EDC pricing tiers. The 55-hour battery life and 40 dB ANC represent genuine engineering performance.
The trade-offs are equally visible. The headphones are heavy at 0.45 kg, heavier than many competing over-ears. This is a consequence of Dyson’s aluminum construction decisions. The premium pricing places this set beyond casual consideration. The 380-unit production run means that for most readers, this is an object to understand rather than acquire. Within the broader context of tech and fashion collaborations, this project signals a shift in approach. Most brand partnerships treat collaboration as a reskinning exercise: new colors, co-branded packaging, a press cycle. The Dyson and Porter set attempts something more structural. The bag exists because of the headphones. The strap loop exists because of the bag. The color palette exists because both objects needed to read as one. This is system design applied to the commute, not merchandise.
Closing Insight
Carrying sound functions as a design position in this collaboration, not as marketing language. Porter and Dyson asked a specific question: what would it mean to design a bag around the act of listening rather than the act of storing? The answer required rethinking strap ergonomics, loop placement, and access geometry. It required unifying two production cultures under a shared color language. It required limiting production to maintain the artifact status that justifies the premium.
Most products designed for commuting solve individual problems: block noise, carry belongings, protect against weather. This collaboration solves them together, as a system, with a coherence that most tech and fashion partnerships never attempt.
The project suggests a future where commuter accessories behave as a cohesive ecosystem, designed from the outset to interact seamlessly rather than coexist by accident. For the 380 people who acquire a set, the daily commute operates through a unified design language. For everyone else, the project demonstrates what becomes possible when two craft-driven houses apply system-level rigor to carrying sound.
Analogue just announced new colorways for its recently launched Analogue 3D console. The appropriately named Funtastic limited-edition consoles are heavily inspired by Nintendo's translucent N64 models from the late 1990s. Analogue even borrowed the Funtastic branding.
In other words, these are going for the nostalgic jugular for gamers of a certain age. There's even a see-through green colorway that calls to mind the Nintendo 64 variant that shipped as a bundle with Donkey Kong 64. Just imagine booting up that bad boy as you roam around the house spouting the lyrics of the DK Rap.
There are eight translucent colors to choose from, with accompanying 8BitDo controllers available as a separate purchase. The consoles cost $300 and the controllers are priced at $45.
The Analogue 3D Funtastic consoles go on sale on December 10 at 11AM ET, with the company promising they'll ship within 48 hours to ensure delivery by Christmas. The company is also restocking the traditional colors, which will be available for purchase at the same time but won't ship until January.
Tim Stevens for Engadget
We praised the Analogue 3D in our official review. It's a fantastic way to play N64 cartridges, even if the original games don't always hold up. The 4K CRT emulation is top-notch and the overall hardware design is solid.
Remember when catching fireflies in a jar was peak childhood entertainment? Yeah, me neither, because apparently we’re all too busy doom-scrolling. But here’s the thing: a group of designers just created something that might actually get today’s kids to put down their tablets and start chasing butterflies instead. And honestly? It’s kind of brilliant.
Meet Rebug, an urban insect adventure brand that’s basically the lovechild of Pokemon Go and a nature documentary. Created by designers Jihyun Back, Yewon Lee, Wonjae Kim, and Seoyeon Hur, this isn’t your grandmother’s butterfly net situation. It’s a whole ecosystem of beautifully designed products that make bug hunting feel less like a science project and more like the coolest treasure hunt ever.
Designers: Jihyun Back, Yewon Lee, Wonjae Kim, Seoyeon Hur
The backstory here is actually pretty important. We’re living through what experts are calling “nature-deficit disorder,” which sounds made up but is very real. Studies show that kids who spend time outside are happier, more focused, and way less anxious than their indoor counterparts. But between screens and city living, most children today are more likely to recognize a YouTube logo than a dragonfly. The research is genuinely alarming: kids in urban areas with frequent smartphone use are significantly less likely to do things like bird watching or insect catching. Which, you know, makes sense when you think about it. Why chase bugs when you can watch someone else do it on TikTok?
But Rebug flips the script. Instead of fighting against technology or pretending cities don’t exist, it works with both. The product line is this gorgeous collection of bug-catching tools in these dreamy pastels and neon brights that look more like designer home accessories than kids’ toys. There’s a translucent pink funnel catcher, a sky-blue observation dome that works like a tiny insect hotel, and my personal favorite: the Ripple Sparkle.
This thing is genuinely clever. It’s a device that attracts dragonflies by mimicking water ripples with a rotating metal plate. Dragonflies are naturally drawn to polarized light on water, so this gadget basically speaks their language. No chemicals, no tricks, just pure science-based attraction. The insects come to investigate, kids get to observe them up close, and then everyone goes their separate ways unharmed. It’s like speed dating for nature education.
What really gets me about Rebug is how it bridges the digital and physical worlds without being preachy about it. The brand includes this whole archiving system with colorful record cards and an app interface where kids can document their finds. Instead of just telling children to “go outside and play,” it gives them a mission. How many insects did you meet today? Where did you find that beetle? The app turns each discovery into a collectible moment, which, let’s be real, is exactly how kids’ brains work these days.
The visual design is also doing the most in the best way. The branding uses this electric yellow, hot pink, and bright blue color palette that feels more streetwear than science kit. The graphics pull from three sources: actual insect shapes, children’s scribbles, and digital glitch effects. That last one is particularly smart because it literally visualizes the brand’s whole mission of shifting kids from digital errors to natural wonders. It’s the kind of layered design thinking that makes you go “oh, they really thought about this.”
And here’s what makes this feel so timely: Rebug proves that urban spaces aren’t nature deserts. You don’t need to drive to a national park to find wildlife. There are ecosystems thriving on your sidewalk, in your local playground, in that patch of grass between buildings. Research shows that urban families often don’t realize these opportunities exist or don’t see meaningful ways to interact with city nature. Rebug hands them the tools, literally and figuratively, to start looking differently at their environment.
Could a beautifully designed bug kit actually combat screen addiction and nature disconnect? Probably not single-handedly. But it’s a start, and more importantly, it’s a conversation starter about what childhood exploration can look like in 2025. Plus, those product photos are absolutely gorgeous, which never hurts when you’re trying to convince people to try something new. Sometimes the best design solutions don’t reinvent the wheel. They just make you excited to get off the couch.
Every December, the Engadget staff compiles a list of the year’s biggest winners. We scour articles from the previous 12 months to determine the people, companies, products and trends that made the most impact over the course of the year. Not all of that influence is positive, however, and some selections may also appear on our list of biggest losers. Still, sit back and enjoy our picks for the biggest winners of 2025.
Nintendo Switch 2
Playing Mario Kart World on the Switch 2 in handheld mode.
Sam Rutherford for Engadget
Aside from the big bump in battery life that many were hoping for (which didn't materialize), Nintendo took just about everything that made its last console such a phenomenon and upgraded it on the Switch 2. A sleeker design with magnetic Joy-Cons that are less likely to break, a larger (albeit LCD) 1080p display with HDR, much stronger performance, mouse controls and a boost to the base storage were all very welcome.
Of course, the vast majority of Switch games run on the Switch 2 (often with visual improvements or other upgrades), so the new console had a vast library right from the jump. Nintendo is building out its slate of first-party games with treats like Donkey Kong Bananza and Metroid Prime 4, and the third-party support is seriously impressive too. Cyberpunk 2077, Street Fighter 6 and Hitman: World of Assassination are already available, and the likes of Final Fantasy VII Remake Intergrade and FromSoftware's Switch 2 exclusive The Duskbloods are on the way.
The Switch 2 is an iteration, not a revolution, but Nintendo didn't need to reinvent the wheel to make another great system. It's little surprise, then, that we gave the Switch 2 a score of 93 in our review. The console is surpassing Nintendo's sales expectations as well. The company said in November that it believes it will sell 19 million units (up from 15 million) by the time its current fiscal year ends in March. — Kris Holt, Contributing reporter
NVIDIA
NVIDIA GeForce RTX 5070 Ti
Devindra Hardawar for Engadget
Could things be any rosier for NVIDIA? Once just a video card company for gamers, NVIDIA now finds its GPU hardware directly tied to the rise of the AI industry. Its stock has jumped a whopping 1,235 percent over the past five years, going from $13.56 per share in 2020 to a peak of $202.49 this past October. NVIDIA's server-grade cards are being used en masse to train AI models, as well as to power AI inferencing. At home, its GeForce GPUs are enabling local AI development and they're still the gaming cards to beat, despite AMD's steadily improving competition.
Clearly, the company's bet on parallel processing has paid off enormously. Its GPUs can handle tons of computations simultaneously, making them ideally suited for the demands of the AI industry. They're not exactly efficient — that's why neural processing units, or NPUs, have sprung up to power consumer AI features — but it's hard to deny NVIDIA's raw computational power.
NVIDIA's AI success may not last forever, though. Companies like Google and Microsoft are already working on their own AI chips, and it's still unclear if consumers actually want widespread AI features as much as tech companies think. If the AI industry crashes, NVIDIA will be one of the first victims. — Devindra Hardawar, Senior reporter
Tech billionaires
US President Donald Trump speaks during a news conference with Elon Musk (L) in the Oval Office of the White House in Washington, DC, on May 30, 2025.
AI slop
A silhouetted individual is seen holding a mobile phone with the logo of OpenAI's Sora displayed in the background
SOPA Images via Getty Images
AI slop didn't start in 2025, but it reached new heights thanks to updates from Meta, Google, OpenAI and others that made it easier than ever to create real-ish (emphasis on the ish) looking clips from nothing but your most unhinged mad libs. Now, AI-generated videos are just about impossible to avoid. Some platforms, like Pinterest and TikTok, have even begun offering people the ability to ask their algorithms to show less AI content in their feeds. Unfortunately, there's no way to stuff Shrimp Jesus back into the bottle.
AI video is everywhere and it's here to stay. It hasn't just overtaken Facebook and Instagram's recommendations; Meta created an entirely separate feed just for users' AI-generated fever dreams. OpenAI's Sora, which lets you make AI videos of real people, was downloaded a million times in just a few days. Google's Veo, which generated more than 40 million videos in a matter of weeks, is now built into YouTube Shorts.
It's now trivially easy for creators to churn out fake movie trailers, cute animal videos that never happened or viral clips of made up ICE raids. Hell, the president of the United States regularly shares bizarre, sometimes poop-themed, AI videos on his official social media channels. During the government shutdown, the official X account for Senate Republicans shared a deepfake of Senate minority leader Chuck Schumer.
AI video is winning not just because it's everywhere, but because so many are unable, or unwilling, to understand what's real and what isn't. More than half of Americans say they are not confident in their ability to distinguish between human and AI-generated content, according to Pew Research. Similar numbers of people report being "more concerned than excited about the increased use of AI in daily life." But those concerns have done little to stop AI slop from dominating all of our feeds, and there's no sign it will ever slow down. — Karissa Bell, Senior reporter
Galaxy Z Fold 7
Samsung Galaxy Z Fold 7
Sam Rutherford for Engadget
After seven generations, Samsung reached an important milestone this year with its Galaxy Z Fold line: It made a foldable phone that’s the same size as a regular handset. In fact, weighing 7.58 ounces and measuring 72.8mm wide, the Galaxy Z Fold 7 is actually lighter and narrower than an S25 Ultra, while being practically just as thin at 8.9mm (folded). It’s a real marvel of engineering, especially when you consider the phone also features a 200MP main camera, an IPX8 rating for water resistance and a 5,000 mAh battery with 45-watt wired charging. And of course, there's that huge 8-inch main screen hiding inside, which makes the Z Fold 7 both a phone and a tablet in one device. The only thing it's really missing is the improved dust resistance Google gave to the Pixel 10 Pro Fold.
But perhaps more importantly, the Z Fold 7's reduced size and weight have created a device with wider appeal. This has propelled sales of Samsung's latest flagship foldable up 50 percent compared to the previous generation while pushing shipments of foldables as a whole to record highs. Who knew that when Samsung focuses on creating world-class hardware instead of overindexing on AI, good things happen? Okay, maybe that’s a bit harsh. Regardless, for a phone category that has struggled with excess weight and bulk since its inception, the Z Fold 7 feels like a revelation and the beginning of a new era for handsets with flexible displays. Now, can we just bring their prices down, please? — Sam Rutherford, Senior reporter
Smart glasses
Senior reporter Karissa Bell wearing a pair of Ray-Ban Display glasses.
Karissa Bell for Engadget
Like it or not, smart glasses are having a moment. Propelled by new devices like the Meta Ray-Ban Display and upcoming models like Xreal’s Project Aura, the idea of wearing specs with built-in screens suddenly became an attractive proposition. And that means a lot for a category of gadgets that’s often best remembered for the fashion tragedy that was Google Glass in 2013.
However, this development isn’t purely by chance. The latest generation of smart glasses has only just now become a reality due to the convergence of several branches of tech — including improved optics, lightweight batteries and, of course, AI. Now that last one might sound silly considering how many big companies seem to be betting the farm on machine learning being the next big thing, but AI will be a critical feature for enabling the hands-free experience that you need to make smart glasses work when you can’t rely on touch input. While this category is still in its early stages of development, the increased momentum we've seen from smart glasses this year seems poised to carry them towards being a future pillar of people's core tech kits. — S.R.
Fast charging
Fast charging on the Pixel Watch 4 is one implementation that impressed us this year.
Cherlynn Low for Engadget
Devices like tablets and smartwatches have matured to the point where each generation mostly sees iterative upgrades, which can make covering them feel boring. But this year, as the hardware review season came to a close, I noticed an interesting trend. One feature, across various product categories, genuinely excited me and other reviewers at Engadget and around the internet: impressively fast charging.
By itself, high-speed charging isn’t new. But when I reviewed the Pixel Watch 4 in October, I was shocked that one seemingly little update changed how I went about my day. The new power system on Google’s smartwatch was so efficient that after about ten minutes on a cradle, the wearable went from below 20 percent to past 50 percent. With that boost, I stopped having to remind myself to plug the watch in — any time I ran low or was about to run out the door, I just plopped it on the charger and would have enough juice for hours.
Google wasn’t the only company to make fast-charging a meaningful addition to one of its 2025 products. Apple’s iPad Pro M5 is the first iPad to support the feature, and while in our testing it fell a little short of the 50 percent charge in 30 minutes that the company promised, our reviewer Nate Ingraham still found it a meaningful improvement.
Observers of the smartphone industry will likely point out two things. First, battery technology can be volatile, and larger, faster-charging cells might lead to exploding phones. So my optimism about this development is not without caution. Secondly, we’ve already seen all this come to handsets, especially in phones that launched outside the US first. OnePlus is known for its SUPERVOOC fast charging system, for example, and we’re seeing even more novel battery tech show up abroad. Calling fast charging a winner of 2025 may feel untimely to some.
Sure, it’s not the most eye-catching or novel technological development. But when counted in terms of precious time saved, fast charging coming to more types of devices certainly amounts to a greater good in gadgets in 2025. — Cherlynn Low, Managing editor
Magnets
The Pixel 10 Pro Fold and the Pixel Ring Stand
Sam Rutherford for Engadget
Two years after the announcement of the Qi 2 wireless charging standard and its support of magnetic attachment accessories (a la Apple’s MagSafe), we’re finally seeing one of the more mainstream Android devices adopt it. In 2025, Google became the first Android phone maker that’s not HMD to do so, bringing such magnetic capabilities to the Pixel 10 series. It also introduced Pixelsnap — its own version of a MagSafe accessory ecosystem, including a slim puck with a fold-out kickstand that you can snap onto a phone.
I love the Pixel Ring Stand and make sure to bring it with me whenever I can. It works perfectly with my iPhone 17 Pro, and has a compact footprint that makes it easy to take anywhere. Of course, it’s not the first of its kind — Case-Mate and PopSocket, among others, already make similar products but they’re either pricier or rated poorly.
But it’s not just Google that made a magnetic accessory I unexpectedly adored. When reports of Apple’s Crossbody Strap first trickled out, I was underwhelmed. Who cares about a crossbody strap for an iPhone? But when I was presented with one to try at the iPhone 17 launch event, my cynicism quickly melted into desire.
Setting aside the convenience of having your phone on your person when you don’t have pockets or a purse, the way magnets play a part here also won me over. To adjust the length of the straps, you just separate the two overlapping pieces that stick together magnetically, move them along each other till you’re satisfied with the length and let them snap back in place.
I’m sure Apple isn’t the first to make a crossbody strap accessory for iPhones, nor is it the first to use magnets to adjust such straps. But like many Redditors, I’ve slowly come to realize the differences between those products and the Crossbody Strap for iPhone 17. It’s far from perfect, but in 2025 it was another implementation of magnets in tech that caught my attention and brought convenience to my life. — C.L.
Today, during the XR edition of The Android Show, Google showed off a bunch of updates and new features headed to its mixed reality OS. And while most of the news was aimed at developers, I got a chance to demo some of the platform's expanded capabilities on a range of hardware, including Samsung's Galaxy XR headset, two different reference designs and an early version of Xreal's Project Aura smart glasses, and I came away rather impressed. So here's a rundown of what I saw and how it will impact the rapidly growing ecosystem of head-mounted displays.
First up was one of Google's reference design smart glasses with a single waveguide RGB display built into its right lens. I've included a picture of it here, but try not to read too deeply into its design or aesthetics, as this device is meant to be a testbed for Android XR features and not an early look at upcoming models.
Try not to read too much into the appearance of Google's reference design smart glasses, as they are explicitly labeled as prototypes meant to test upcoming features in Android XR.
Sam Rutherford for Engadget
After putting them on, I was able to ask Gemini to play some tunes on YouTube Music before answering a call simply by tapping on the touchpad built into the right side of the frames. And because the reference model also had onboard world-facing cameras, I could easily share my view with the person on the other end of the line.
Naturally, I was curious about how the glasses had the bandwidth to do all this, because in normal use, they rely on a Bluetooth or Bluetooth LE connection. When asked, Max Spear, Group Product Manager for XR, shared that depending on the situation, the device can seamlessly switch between Bluetooth and Wi-Fi, which was rather impressive because I couldn't even detect when that transition happened. Spear also noted that one of Google's focuses for Android XR is making it easier for developers to port over the apps people already know and love.
This means that for devices like the reference design I wore that feature a built-in display (or displays), the OS actually uses the same code meant for standard Android notifications (like quick replies) to create a minimalist UI, instead of forcing app makers to update each piece of software to be compliant with an ever-increasing number of devices. Alternatively, for models that are super lightweight and rely strictly on speakers (like Bose Frames), Google has also designed Android XR so that you only need mics and voice controls to access a wide variety of apps without the need for visual menus.
This is the picture Google's reference design smart glasses created (via Gemini) when I asked it to transform a photo I took of some pantry shelves into a sci-fi kitchen.
Sam Rutherford for Engadget
Meanwhile, if you're hoping to take photos with your smart glasses, there's a surprising amount of capability there, too. Not only was I able to ask Gemini to take a photo, the glasses were also able to send a higher-res version to a connected smartwatch, which is super handy in case you want to review the image before moving on to the next shot. And when you want to inject some creativity, you can ask Gemini to transform pictures into practically anything you can imagine via Nano Banana. In my case, I asked the AI to change a shot of a pantry into a sci-fi kitchen and Gemini delivered with aplomb, including converting the room into a metal-clad setting complete with lots of light strips and a few bursts of steam.
However, one of the most impressive demos was when I asked Google's reference glasses to look at some of that same pantry environment and then use the ingredients to create a recipe based on my specifications (no tomatoes please, my wife isn't a fan). Gemini went down an Italian route by picking pasta, jarred banana peppers, bell peppers (which I thought was a somewhat unusual combination) and more, before launching into the first steps of the recipe. Sadly, I didn't have time to actually cook it, but as part of the demo, I learned that Gemini has been trained to understand human-centric gestures like pointing and picking things up. This allows it to better understand context without the need to be super specific, which is one of those little but very impactful tricks that allows AI to feel way less robotic.
This is how Google Maps will look on Android XR. Note that this is the flat 2D version instead of the more detailed stereoscopic view available on smart glasses with dual displays.
Sam Rutherford for Engadget
Then I had a chance to see how Uber and Google Maps ran on the reference glasses, this time using models with both single and dual RGB displays. Surprisingly, even on the monocular version, Maps was able to generate a detailed map with the ability to zoom in and out. But when I switched over to the binocular model, I noticed a significant jump in sharpness and clarity along with a higher-fidelity map with stereoscopic 3D images of buildings. Now, it may be a bit early to call this, and the perception of sharpness varies greatly between people based on their head shape and other factors, but after seeing that, I'm even more convinced that the smart glasses with dual RGB displays are what the industry will settle on in the long term.
The second type of device I used was the Samsung Galaxy XR, which I originally tried out when it was announced back in October. However, in the short time since, Google has cooked up a few new features that really help expand the headset's capabilities. By using the headset's exterior-facing cameras, I was able to play a game of I Spy with Gemini. Admittedly, this might sound like a small addition, but I think it's going to play a big part in how we use devices running Android XR, because it allows the headset (or glasses) to better understand what you're looking at in order to provide more helpful contextual responses.
Even though it was announced not long ago in late October, Samsung's Galaxy XR headset is already getting some new features thanks to some updates coming to Android XR.
Sam Rutherford for Engadget
However, the biggest surprise was when I joined a virtual call with someone using one of Google's new avatars, called Likeness. Instead of the low-polygon cartoony characters we've seen before in places like Meta Horizon, Google's virtual representations of people's faces are almost scary good. So good I had to double-check that they weren't real, and from what I've seen, they're even a step up from Apple's Personas. Google says that headsets like the Galaxy XR rely on interior sensors to track and respond to facial movements, while users will be able to create and edit their avatars using a standalone app due out sometime next year.
The person in the bottom right is using a Likeness, which during my demo looked surprisingly responsive and realistic.
Google
Next, I got a chance to test out Android XR's PC connectivity by playing Stray on the Galaxy XR while it was tethered wirelessly to a nearby laptop. Not only did it run almost flawlessly with low latency, I was also able to use a paired controller instead of relying on hand-tracking or the laptop's mouse and keyboard. This is something I've been eagerly waiting to try because it feels like Google has put a lot of work into making Android XR devices play nicely with other devices and OSes. Initially, you'll only be able to connect Windows PCs to the Galaxy XR, but Google says it's looking to support macOS systems as well.
Finally, I got to try out Xreal's Project Aura glasses to see how Android XR works on a device primarily designed to give you big virtual displays in a portable form factor. Unfortunately, because this was a pre-production unit, I wasn't able to take photos. That said, as far as the glasses go, I was really impressed with their resolution and sharpness, and the inclusion of electrochromic glass is a really nice touch, as it allows users to change how heavily the lenses are tinted with a single touch. Alternatively, the glasses can also adjust the tint automatically based on whatever app you are using to give you a more or less isolated atmosphere, depending on the situation. I also appreciate the Aura's increased 70-degree FOV, but if I'm nitpicking, I wish it were a bit larger, as I occasionally found myself wanting more vertical display area.
Unfortunately, I wasn't allowed to take photos of Xreal's Project Aura smart glasses, as the model I used was still an early pre-production unit. So here's a shot provided by Google instead.
Google / Xreal
As a device that's sort of between lightweight smart glasses and a full VR headset, the Aura relies on a wired battery pack that also doubles as a touchpad and a hub for plugging in external devices like your phone, laptop or even game consoles.
While using the Aura, I was able to connect to a different PC and multitask in style, as the glasses were able to support multiple virtual displays while running several different apps at the same time. This allowed me to be on a virtual call with someone using a Likeness while I had two other virtual windows open on either side. I also played an AR game (Demeo) while moving around in virtual space, using my hands to reposition the battlefield or pick up objects.
Now I will fully admit this is a lot, and it took me a bit to process everything. But upon reflection, I have a few takeaways from my time with the various Android XR devices and prototypes. More than any other headset or smart glasses platform out now, it feels like Google is doing a ton to embrace a growing ecosystem of devices. That's really important because we're still so early in the lifecycle of wearable gadgets with displays that no one has really figured out a truly polished design like we have for smartphones and laptops. Until we get there, a highly adaptable OS will go a long way toward supporting OEMs like Samsung, Xreal and others.
But that's not all. It's clear Google is focused on making Android XR devices easy to build for. That's because the company knows that without useful software that can highlight the components and features coming on next-gen spectacles, there's a chance that interest will remain rather niche — similar to what we've seen when looking at the adoption of VR headsets. So in a way, Google is waging a battle on two fronts, which makes navigating uncharted waters that much more difficult.
A major focus for Android XR while people are still figuring out how to make smart glasses is to support a wide variety of designs including those with single displays, dual displays or models without any displays that rely on cameras and speakers.
Sam Rutherford for Engadget
Google is putting a major emphasis on Android XR's ability to serve as a framework for future gadgets and to support and address developer needs. This mirrors the approach the company takes with regular Android and is the opposite of Apple's typical MO, because unlike the Vision Pro and visionOS, it appears Google is going to rely heavily on its partners like Xreal, Warby Parker, Gentle Monster and others to create engaging hardware. Furthermore, Google says it plans to support smart glasses that can be tethered to Android and iOS phones, as well as smartwatches from both ecosystems, though there will be some limitations for people using Apple devices due to inherent OS restrictions.
That's not to say that there won't be Pixel glasses sometime down the road, but at least for now, I think that's a smart approach and possibly a lesson Google learned after releasing Google Glass over a decade ago. Meanwhile, hi-res and incredibly realistic avatars like Likenesses could be a turning point for virtual collaboration, because, in a first for me, talking to a digital representation of someone else felt kind of natural. After my demos, I had a chance to talk to Senior Director of Product Management for XR Juston Payne, who highlighted the difference between smart glasses and typical gadgets by saying "Smart glasses have to be great glasses first. They need to have a good form factor, good lenses with prescription support, they need to look good and they have to be easy to buy."
That's no simple task and there's no guarantee that next-gen smart glasses and headsets will be a grand slam. But from what I've seen, Google is building a very compelling foundation with Android XR.
Conference badges are usually flimsy cardboard, a lanyard, maybe a QR code, and they end up in a drawer once the event wraps up. In the maker world, people already strap LEDs and e‑paper to their jackets for fun, but those tend to be one‑off hacks held together with tape and hope. Pimoroni’s Badgeware line asks a simpler question: what if the badge itself was a tiny, finished computer you actually wanted to keep wearing?
Badgeware is a family of wearable, programmable displays powered by Raspberry Pi’s new RP2350 chip. The trio gets names and personalities: Badger with a 2.7-inch e‑paper screen, Tufty with a 2.8-inch full-colour IPS display, and Blinky with a 3.6-inch grid of 872 white LEDs. Translucent polycarbonate shells in teal, orange, and lime glow softly when the rear lighting kicks in, making them look like finished toys instead of bare dev boards.
The shared hardware is serious for something pocket-sized: an RP2350 running at 200 megahertz with 16 megabytes of flash and 8 megabytes of PSRAM, Wi‑Fi and Bluetooth 5.2, USB-C, and a built-in 1,000 milliamp-hour LiPo with onboard charging. The Qw/ST expansion port on the back lets you plug in sensors and add-ons without soldering, while user and system buttons plus four-zone rear lighting give each badge its own underglow.
Badger is the quiet one: four-shade e‑paper that sips power and holds static content like names, pronouns, and tiny dashboards for days. Tufty is the show-off: full-colour IPS and smooth animation for mini games, widgets, and scrolling text. Blinky is the extrovert: a dense LED matrix that spells out messages and patterns bright enough to read across a room. Together they cover calm, expressive, and loud without changing the basic wearable form factor.
All three come pre-loaded with a launcher and a bunch of open source apps, from silly games like Plucky Cluck to utilities like clocks and ISS trackers. Everything runs in MicroPython with Pimoroni’s libraries, and the optional STEM kit adds a multi-sensor stick and a gamepad so badges can react to temperature, light, motion, and multiplayer button mashing, turning them into wearable sensors or tiny game consoles.
Double-tapping reset drops the badge into disk mode so it shows up as a USB drive, letting you edit Python files directly without juggling tools or serial consoles. The cases have lanyard holes and can free-stand on a desk, so they work as both wearable name tags and tiny desk dashboards. The clear shells and rear lighting make the electronics part of the aesthetic instead of something to hide.
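Since everything runs in MicroPython, customizing a badge mostly means dropping a new Python file onto that USB drive. As a rough sketch of what a minimal name-tag app could look like, written in the style of Pimoroni's existing PicoGraphics library (the exact module names and the DISPLAY_TUFTY constant below are assumptions for the new boards, so check the bundled launcher apps for the real ones):

```python
# Minimal name-tag sketch in the style of Pimoroni's PicoGraphics MicroPython API.
# DISPLAY_TUFTY is an illustrative guess for the new Badgeware boards; the bundled
# launcher apps will show the exact module and constant names to use.
from picographics import PicoGraphics, DISPLAY_TUFTY  # hypothetical display constant

display = PicoGraphics(display=DISPLAY_TUFTY)
WIDTH, HEIGHT = display.get_bounds()

BLACK = display.create_pen(0, 0, 0)
WHITE = display.create_pen(255, 255, 255)

def draw_badge(name, tagline):
    """Clear the screen and draw a simple two-line badge layout."""
    display.set_pen(BLACK)
    display.clear()
    display.set_pen(WHITE)
    display.text(name, 10, 20, WIDTH - 20, 4)      # big name line
    display.text(tagline, 10, 90, WIDTH - 20, 2)   # smaller tagline underneath
    display.update()

draw_badge("Ada", "ask me about RP2350")
```

Saved as main.py through disk mode, something like this would run every time the badge powers up; the pre-loaded apps that ship on the badges are the better reference for button handling and the rear lighting.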
Badgeware turns the throwaway conference badge into a reusable platform. Instead of printing your name once and tossing it, you get a little object that evolves from ID tag to art piece to sensor display as your code and curiosity grow. For people who like their gadgets small, expressive, and open-ended, Badger, Tufty, and Blinky feel like digital jewellery that actually earns its lanyard space, whether you wear it to a meetup or keep it glowing on your desk.
Uber will begin offering customer data to marketers through a new insights platform called Uber Intelligence. The data will technically be anonymized via a platform called LiveRamp. The setup will "let advertisers securely combine their customer data with Uber's to help surface insights about their audiences, based on what they eat and where they travel."
Basically, it'll provide a broad view of local consumer trends based on collected data. Uber gives an example of a hotel brand using the technology to identify which restaurants or venues to partner with according to rideshare information.
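To make the "combine their customer data" idea a bit more concrete: audience matching in these clean-room style setups generally happens on hashed identifiers rather than raw records. The snippet below is a deliberately toy Python illustration of that general pattern, not a description of how LiveRamp or Uber Intelligence actually work, and every name and value in it is made up.

```python
import hashlib

def hash_id(email: str, salt: str) -> str:
    """Hash an identifier so lists can be matched without exposing raw values."""
    return hashlib.sha256((salt + email.strip().lower()).encode()).hexdigest()

# A salt (or keyed scheme) agreed between the two parties -- purely illustrative.
SALT = "shared-campaign-salt"

# Each side hashes its own customer list before anything leaves its systems.
advertiser_ids = {hash_id(e, SALT) for e in ["ada@example.com", "grace@example.com"]}
rideshare_ids = {hash_id(e, SALT) for e in ["grace@example.com", "alan@example.com"]}

# Only aggregate overlap statistics come out of the match, not individual records.
overlap = advertiser_ids & rideshare_ids
print(f"Matched audience size: {len(overlap)}")
```

The takeaway is simply that each side shares only irreversible hashes and aggregate counts, which is the sort of mechanism the "technically anonymous" framing usually points to.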
Companies will also be able to use the Intelligence platform's insights to directly advertise to consumers. Business Insider reports it could be used to identify customers who are "heavy business travelers" and then plague them with ads in the app or in vehicles during their next trip to the airport. Fun times.
"That seamlessness is why we're so excited," Edwin Wong, global head of measurement at Uber Advertising, told Business Insider. Uber has stated that its ad business is already on track to generate $1.5 billion in revenue this year, and that's before implementing these changes.
Update, December 8, 7:25PM ET: This article previously stated that Uber was "selling customer data," but that was not accurate. Companies do not pay to access the Intelligence platform. We regret the error. The article and its headline have been changed since publication to more accurately reflect the news.