Meta is reportedly working on a new AI model called ‘Avocado’ and it might not be open source

Mark Zuckerberg has for months publicly hinted that he is backing away from open-source AI models. Now, Meta's latest AI pivot is starting to come into focus. The company is reportedly working on a new model, known inside of Meta as "Avocado," which could mark a major shift away from its previous open-source approach to AI development. 

Both CNBC and Bloomberg have reported on Meta's plans surrounding "Avocado," with both outlets saying the model "could" be proprietary rather than open-source. Avocado, which is due out sometime in 2026, is being worked on inside of "TBD," a smaller group within Meta's AI Superintelligence Labs that's headed up by Chief AI Officer Alexandr Wang, who apparently favors closed models.

It's not clear what Avocado could mean for Llama. Earlier this year, Zuckerberg said he expected Meta would "continue to be a leader" in open source but that it wouldn't "open source everything that we do." He's also cited safety concerns as they relate to superintelligence. As both CNBC and Bloomberg note, Meta's shift has also been driven by issues surrounding the release of Llama 4. The Llama 4 "Behemoth" model has been delayed for months; The New York Times reported earlier this year that Wang and other execs had "discussed abandoning" it altogether. And developers have reportedly been unimpressed with the Llama 4 models that are available. 

There have been other shakeups within the ranks of Meta's AI groups as Zuckerberg has spent billions of dollars building a team dedicated to superintelligence. The company laid off several hundred workers from its Fundamental Artificial Intelligence Research (FAIR) unit. And Meta veteran and Chief AI Scientist Yann LeCun, who has been a proponent for open-source and skeptical of LLMs, recently announced he was leaving the company. 

That Meta may now be pursuing a closed AI model is a significant shift for Zuckerberg, who just last year said "fuck that" about closed platforms and penned a lengthy memo titled "Open Source AI is the Path Forward." But the notoriously competitive CEO is also apparently intensely worried about falling behind OpenAI, Google and other rivals. Meta has said it expects to spend $600 billion over the next few years to fund its AI ambitions.

This article originally appeared on Engadget at https://www.engadget.com/ai/meta-is-reportedly-working-on-a-new-ai-model-called-avocado-and-it-might-not-be-open-source-215426778.html?src=rss

Apple TV and Apple Music were down for some users

Apple Music and Apple TV were briefly down during an outage, according to Apple’s System Status page. The outage was logged on Apple’s own system at around 2:53PM ET and affected both of the company’s streaming services, along with Apple TV’s Channels feature, until the company resolved the issue around 4:31PM ET.

On DownDetector, reports of issues with Apple TV and Apple Music first appeared right around 2:33PM ET, a little before Apple officially confirmed the outage on its own site. Only “some” users were affected by the outage, according to Apple, and anecdotally, multiple members of Engadget’s staff were still able to stream content while the services were reportedly out.

Engadget has reached out to Apple for more information on the outage and how many people were impacted. We’ll update this article if we hear back.

Apple relies on cloud services from third-party companies like Amazon, and is ultimately only as stable as the data centers it’s paying for. In October 2025, the company was impacted by the same Amazon Web Services outage that took down services and apps like Alexa, Fortnite and Snapchat for hours.

Update, December 10, 5:09PM ET: Article and headline updated to reflect that the outage has been resolved.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/apple-tv-and-apple-music-were-down-for-some-users-214425802.html?src=rss

Life-size 3D-printed LEGO Technic dune buggy turns a classic toy into a drivable machine

What usually begins as a childhood memory of snapping LEGO Technic beams together has been reimagined at full scale by maker Matt Denton, who has turned one of the most recognizable Technic sets ever produced into a life-size, fully drivable machine. By scaling the 1981 LEGO Technic 8845 Dune Buggy more than tenfold and rebuilding every component through precise 3D printing, Denton bridges the gap between nostalgic toy engineering and real-world mechanics, creating a vehicle that not only looks like its plastic counterpart but can actually be driven off the workbench and onto the road.

This is not surprising; he’s known for turning tiny models into life-size, drivable rigs. Denton started with the original 1981 kit, which contains 174 pieces. Rather than simply make a large display model, he redesigned the buggy with two critical changes for practical use: he scaled it up by a factor of 10.42, based on 50-millimeter axle bearings, and converted it into a single-seat vehicle with a center-mounted steering wheel.
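That 10.42 figure checks out if you assume the standard LEGO Technic axle diameter of roughly 4.8 mm; a quick sanity-check sketch (the axle diameter is our assumption, the 50 mm bearing bore is from the build):

```python
# Sanity check on the reported scale factor. The 4.8 mm Technic axle
# diameter is an assumption about the source part; the 50 mm bearing
# bore comes from the article.
lego_axle_mm = 4.8
bearing_bore_mm = 50.0

scale = bearing_bore_mm / lego_axle_mm
print(f"scale factor ~ {scale:.2f}x")  # ~ 10.42x
```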

Designer: Matt Denton

Every part was recreated using 3D printing. Denton used PLA filament and a belt-driven FDM printer, employing a 1 mm nozzle, two outer walls, and 10% infill to balance strength with manageability. Because of printing limitations, large plates and panels were split into smaller sections, both to fit the printer's build volume and to avoid warping. All curves and joints were first modeled precisely in CAD to ensure fit and performance under load. The final assembled buggy weighs about 102 kg — not light by any means, yet still light enough for hobby use. The build process reportedly took around 1,600 hours of printing and assembly, with numerous reprints required due to failed prints and printer issues.
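A couple of derived figures put those numbers in perspective; the 1 kg spool size below is an assumption (a common PLA spool weight), while the mass and hours come from the article:

```python
# Rough material and time figures. Spool size is an assumption;
# printed mass and print hours are as reported in the article.
printed_mass_kg = 102
spool_kg = 1.0          # common PLA spool size (assumption)
print_hours = 1600

spools = printed_mass_kg / spool_kg
days = print_hours / 24
print(f"~{spools:.0f} spools of PLA, ~{days:.0f} days of continuous printing")
```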

To bring the build to life, an electric motor was mounted on the rear axle, connected via a belt-drive system. Steering is handled via a full-sized rack-and-pinion mechanism, molded as one giant LEGO-like piece, while the rear suspension arms connect over a steel tube to deliver stability. The tires themselves are printed from TPU wrapped around PLA cores, and each one weighs around 4.6 kg. They are manufactured as four quadrants for easier assembly and transport. Despite the technical hurdles, Denton succeeded as the buggy is completely drivable. During test runs, it demonstrated performance and handling that (while modest compared to a conventional motor vehicle) surpassed expectations for what began as a giant toy. That said, limitations remain as the vehicle shows signs of structural flex under load, and the electric motor setup delivers only modest power, limiting acceleration and top speed.

This project isn’t just a playful homage to a childhood classic; it’s also a demonstration of how modern 3D printing and careful engineering can push the boundaries of what’s possible, even with humble materials like PLA and TPU. It transforms a familiar childhood toy into a functional vehicle, and in doing so rekindles the wonder of imaginative play, but at a human scale. For hobbyists and builders, Denton’s dune buggy is an inspiration, as the line between toy and tool blurs, and a dream built in plastic bricks can eventually become something you can sit in and drive.

The post Life-size 3D-Printed LEGO Technic dune buggy turns a classic toy Into a drivable machine first appeared on Yanko Design.

Spotify’s new playlist feature gives users more control over their recommendation algorithm

Spotify is attempting to give users more control over the music the streaming service recommends with a new playlist feature called "Prompted Playlist." The beta feature is rolling out in New Zealand starting on December 11, and will let users write a custom prompt that Spotify can use — alongside their listening history — to create a playlist of new music.

By tapping on Prompted Playlist, Spotify subscribers participating in the beta will be presented with a prompt field where they can type exactly what they want to hear and how they want Spotify's algorithm to respond. And while past AI features took users' individual taste into consideration, Spotify claims Prompted Playlist "taps into your entire Spotify listening history, all the way back to day one." 

Prompted Playlist will exist alongside Spotify's other playlist features. (Image: Spotify)

Prompts can be as broad or specific as users want, and Spotify says playlists can also be set to automatically update with new songs on a specific cadence. An "Ideas" tab in the Prompted Playlist setup screen can provide suggestions for users who need inspiration for their prompt. And interestingly, Spotify says each song in the playlist will be presented with a short description explaining why the algorithm chose it, which could help direct future fine-tuning.

If this all sounds familiar, it's because Spotify has already tried AI-generated playlists in the past. The difference here, besides Spotify framing the new feature as giving users more "control," is the detail of the prompts, the depth of user data Spotify is applying and the options users will have to keep playlists up-to-date. Prompted Playlist is only available in English for now, but Spotify says the feature will evolve as it adds more users.

Spotify isn't the first company to offer users more direct control over how content is recommended to them. Meta has recently started experimenting with algorithm-tuning options in Threads and Instagram, and TikTok lets users completely reset their For You page to start fresh. The irony of all these features is that algorithm-driven feeds were supposed to be able to recommend good music, posts and videos without additional prompting. Now, prompting is being pitched as a feature rather than extra work.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/spotifys-new-playlist-feature-gives-users-more-control-over-their-recommendation-algorithm-203237903.html?src=rss

TWS Earbuds With Built-In Cameras Put ChatGPT’s AI Capabilities In Your Ears

Everyone is racing to build the next great AI gadget. Some companies are betting on smartglasses, others on pins and pocket companions. All of them promise an assistant that can see, hear, and understand the world around you. Very few ask a simpler question. What if the smartest AI hardware is just a better pair of earbuds?

This concept imagines TWS earbuds with a twist. Each bud carries an extra stem with a built-in camera, positioned close to your natural line of sight. Paired with ChatGPT, those lenses become a constant visual feed for an assistant that lives in your ears. It can read menus, interpret signs, describe scenes, and guide you through a city without a screen. The form factor stays familiar; the capabilities feel new. If OpenAI wants a hardware foothold, this is the kind of product that could make AI feel less like a demo and more like a daily habit. Here’s why a camera in your ear might beat a camera on your face.

Designer: Emil Lukas

The industrial design has a sort of sci-fi inhaler vibe that I weirdly like. The lens sits at the end of the stem like a tiny action cam, surrounded by a ring that doubles as a visual accent. It looks deliberate rather than tacked on, which matters when you are literally hanging optics off your head. The colored shells and translucent tips keep it playful enough that it still reads as audio gear first, camera second.

The cutaway render looks genuinely fascinating. You can see a proper lens stack, a sensor, and a compact board that would likely host an ISP and Bluetooth SoC. That is a lot of silicon inside something that still has to fit a driver, battery, microphones, and antennas. Realistically, any heavy lifting for vision and language goes straight to the phone and then to the cloud. On-device compute at that scale would murder both battery and comfort.

All that visual data has to be processed somewhere, and it is not happening inside the earbud. On-device processing for GPT-4 level vision would turn your ear canal into a hotplate. This means the buds are basically streaming video to your phone for the heavy lifting. That introduces latency. A 200 millisecond delay is one thing; a two second lag is another. People tolerate waiting for a chatbot response at their desk. They will absolutely not tolerate that delay when they ask their “AI eyes” a simple question like “which gate am I at?”
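A back-of-envelope budget shows how quickly a cloud round trip adds up to that feared two-second lag; every number below is an illustrative assumption, not a measurement of any real device or model:

```python
# Illustrative latency budget for a "what am I looking at?" query.
# All figures are assumptions for the sketch, not measurements.
budget_ms = {
    "image capture + encode": 50,
    "Bluetooth transfer to phone": 300,   # still images; video would be worse
    "upload to cloud model": 150,
    "vision-language inference": 1200,
    "response + text-to-speech start": 300,
}
total = sum(budget_ms.values())
print(f"total ~ {total} ms")  # ~ 2000 ms end to end
```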

Then there is the battery life, which is the elephant in the room. Standard TWS buds manage around five to seven hours of audio playback. Adding a camera, an image signal processor, and a constant radio transmission for video will absolutely demolish that runtime. Camera-equipped wearables like the Ray-Ban Meta glasses get about four hours of mixed use, and those have significantly more volume to pack in batteries. These concept buds look bulky, but they are still tiny compared to a pair of frames.

The practical result is that these would not be all-day companions in their current form. You are likely looking at two or three hours of real-world use before they are completely dead, and that is being generous. This works for specific, short-term tasks, like navigating a museum or getting through an airport. It completely breaks the established user behavior of having earbuds that last through a full workday of calls and music. The utility would have to be incredibly high to justify that kind of battery trade-off.
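The two-to-three-hour estimate is easy to reproduce with rough numbers; the cell capacity and power draws below are assumptions typical of TWS-class hardware, not specs for this concept:

```python
# Illustrative battery math: why a camera demolishes TWS runtime.
# Cell capacity and power draws are assumptions, not product specs.
battery_mwh = 60 * 3.7       # ~60 mAh bud cell at 3.7 V ~= 222 mWh
audio_only_mw = 35           # playback + Bluetooth audio (assumed)
camera_extra_mw = 80         # sensor + ISP + video radio link (assumed)

audio_hours = battery_mwh / audio_only_mw
camera_hours = battery_mwh / (audio_only_mw + camera_extra_mw)
print(f"audio only: ~{audio_hours:.1f} h, with camera: ~{camera_hours:.1f} h")
```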

From a social perspective, the design is surprisingly clever. Smartglasses failed partly because the forward-facing camera made everyone around you feel like they were being recorded. An earbud camera might just sneak under the radar. People are already accustomed to stems sticking out of ears, so this form factor could easily be mistaken for a quirky design choice rather than a surveillance device. It is less overtly aggressive than a lens pointed from the bridge of your nose, which could lower social friction considerably.

The cynical part of me wonders about the field of view. Ear level is better than chest level, but your ears do not track your gaze. If you are looking down at your phone while walking, those cameras are still pointed forward at the horizon. You would need either a very wide angle lens, which introduces distortion and eats processing power for correction, or you would need to train yourself to move your whole head like you are wearing a VR headset. Neither is ideal, but both are solvable with enough iteration. What you get in return is an AI that can actually participate in your environment instead of waiting for you to pull out your phone and aim it at something. That shift from reactive to ambient is the entire value proposition, and it only works if the cameras are always positioned and always ready.

The post TWS Earbuds With Built-In Cameras Puts ChatGPT’s AI Capabilities In Your Ears first appeared on Yanko Design.

Intel loses its latest challenge to 16-year-old EU antitrust case

Intel will have to pay up in an antitrust case dating back to 2009, Reuters reported on Wednesday. The company has lost its challenge against a €376 million ($438.7 million) regulatory fine levied by the European Commission. However, Intel managed to get the amount reduced to €237 million ($276.6 million).

The case began in 2009, when mobile computing was in its infancy and netbooks (remember those?) were all the rage in the PC space. At the time, the EU ruled that Intel violated antitrust laws on multiple fronts. First, it used illegal hidden rebates to push rivals out of the PC processor market. Second, it paid manufacturers to delay or stop production of AMD-powered products.

The latter, the portion that today's fine deals with, was classified as "naked restrictions." It regarded anticompetitive payments Intel made to HP, Acer and Lenovo between 2002 and 2006.

As often happens in these situations, the legal process bounced back and forth through the courts for years. In 2017, Europe's highest court ordered the case to be re-examined, citing a lack of proper economic assessment of how Intel's behavior affected its rivals. Europe's second-highest court then overturned the judgment from the first (hidden rebates) portion of the fine in 2022, a move confirmed by the EU Court of Justice last year. That penalty, initially set at a whopping €1.06 billion ($1.2 billion), was wiped off the books.

The second ("naked restrictions") fine was imposed in 2023 after European courts upheld that portion. Intel's latest challenge sought to have that one removed, too. Instead, it will have to settle for shaving one-third off the initial sum.

With today’s judgment, it's tempting to declare the matter over and done with. But the Commission and Intel can still appeal the decision to the EU Court of Justice on points of law. Tune in next year to see if this long, strange saga has another chapter.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/intel-loses-its-latest-challenge-to-16-year-old-eu-antitrust-case-200746004.html?src=rss

The world premieres and other hotness from The Game Awards 2025 Day of the Devs stream

You gotta love that post-Day of the Devs showcase feeling. The organization, founded by Double Fine Productions and iam8bit, consistently highlights top-tier games from independent developers across the globe, providing space for creators to share their stories in both online and in-person events. This year’s Day of the Devs: The Game Awards Digital Showcase was an hour-long celebration of 22 upcoming indie games, including six world premieres and three release date announcements.

Settle in and bask in the afterglow with us:

World Premieres

Virtue and a Sledgehammer - Deconstructeam

Deconstructeam is a small Spanish studio that’s responsible for some of the most cerebral, sexy and darkly philosophical games around, including Gods Will Be Watching, The Red Strings Club and The Cosmic Wheel Sisterhood. The team’s next project is Virtue and a Sledgehammer, and it represents a new look with 3D, cel-shaded animations and a third-person perspective rather than the studio’s typical pixelated planar fare. The vibes are just as sinister and introspective as expected, though.

Virtue and a Sledgehammer is a moody coming-of-age experience set in a wooded ghost town dotted with robots and lost locals. Spend quiet moments with old friends and then swing the sledgehammer to raze your hometown and uncover memories that can help you move on. The game’s buildings and objects are highly reactive, which can only help with the catharsis of it all.

Virtue and a Sledgehammer is due to hit Steam in 2026, published by Devolver Digital.

UN:Me - Shueisha Games

Now, this is a horror game. UN:Me comes from Japanese publisher Shueisha Games and developer Historia, and it’s a creepy, mind-bending exploration of primal fear. It stars a young woman with four souls trapped inside of her body, fighting for control of her consciousness. She wanders sterile, illogical hallways and encounters grotesque horrors representing common human fears like heights, authority figures and confined spaces. The souls switch randomly, each one manifesting a specific anxiety. As she wanders, the player has to choose souls to eliminate until only one remains. Whether it’s her real soul or a fake isn’t disclosed until the very end.

UN:Me is available to wishlist now on Steam.

Scramble Knights Royale - Funktronic Labs

Funktronic Labs is mainly known as a VR studio, with games like Cosmic Trip, Fujii and The Light Brigade under its belt, but its latest project doesn’t require a headset at all. Scramble Knights Royale is coming to PC and Xbox in 2026, and it’s a battle royale with adventure game twists. You begin on a boat with 30 to 40 other online players, make your way to land on the back of a turtle, and then it’s essentially Naked and Afraid from there. Find resources, fight creatures, upgrade your gear and play your own game, only battling other players when you encounter them in the wild.

Don’t let the sweet, clay-like animations fool you, either — Funktronic says the combat mechanics are incredibly deep and finely honed. Scramble Knights Royale also supports local split-screen.

Mirria - Mografi

Mografi made a name for itself with the adorable Jenny LeClue detective game, but now it’s time for something different. Mirria is an atmospheric puzzle experience from ISLANDS: Non-Places artist Carl Burton, published by Mografi, and it looks like a delicious mix of Kentucky Route Zero and Monument Valley. In Mirria, you explore mirror worlds and attempt to make the two realities match, paying attention to small details and making minute adjustments until the unsettling environments are perfect reflections. It looks and sounds like soul-soothing stuff.

Mirria is due out in 2026 on Steam.

CorgiSpace - Finji

In recent years, Finji founder Adam Saltsman has been involved in high-profile indie games like Overland, Night in the Woods, Tunic and Usual June, but his new project taps into his simplistic and mechanics-driven Canabalt roots. Corgispace is a collection of 8-bit games with off-kilter premises, including the soulslike Rat Dreams where you can only dodgeroll, the no-jumping platformer Skeleton Jeleton, and Prince of Prussia, an adventure where you stab Nazis “but in a fun new way,” according to Saltsman. Also, he says there are no secrets in this game, which leads us to believe there is at least one secret in this game.

Corgispace is out now (!) on Steam and Itch.io.

Frog Sqwad - Panic Stations

If the former Fall Guys developers at Panic Stations know how to do one thing, it’s make a silly-physics multiplayer game, so that’s exactly what they’re doing. Frog Sqwad is a co-op experience where you and your fellow frogs search the sewers for food in order to satiate the swamp king. You can eat food to grow bigger and become the mega frog, vomit to shrink, and use your long sticky tongue to swing, hang and slingshot your friends. The sewer levels are procedurally generated, so your froggy playground will always be different, and each run gets harder as the swamp king requires more food.

Frog Sqwad is coming to Steam in 2026, with a playtest beforehand.

Release dates

  • Dogpile by Studio Folly, Toot Games and Foot: Today, like literally right now

  • Big Hops by Luckshot Games: January 12, 2026

  • Demon Tides by Fabraz: February 19, 2026

And the rest

The stream featured a dozen other in-development titles, including the super spooky Lucid Falls, a 90s-grunge-band rhythm game called Rockbeasts, the soothing alien musicality of Soundgrass, an impressive-looking follow-up to The Invincible called Into the Fire, and Unshine Arcade, a creepy game about the secret lives of Tamagotchis and claw machines.

Day of the Devs: The Game Awards Digital Showcase 2025 wrapped up with a neat little announcement. Day of the Devs partnered with the Video Game History Foundation to release Xcavator 2025, a finished version of a long-lost game from legendary programmer Chris Oberth. It was originally developed by Big Buck Hunter studio Incredible Technologies but never found a publisher. It’s been revived by Mega Cat Studios, Retrotainment Games and iam8bit, and an NES cartridge of Xcavator 2025 is available to pre-order now on iam8bit. Proceeds will benefit the Video Game History Foundation.

This article originally appeared on Engadget at https://www.engadget.com/gaming/the-world-premieres-and-other-hotness-from-the-game-awards-2025-day-of-the-devs-stream-200000447.html?src=rss

PS Plus Game Catalog additions for December include Assassin’s Creed Mirage

Sony just announced December's Game Catalog additions for PS Plus subscribers and it's a pretty decent lineup. All of these titles will be ready to play on December 16, except Skate Story, which is already available.

Speaking of Skate Story, it's a really weird skateboarding sim that's set in a glass-covered world. The reviews have been positive, with many people praising the outlandish story, surreal locations and the satisfying trick mechanics. It's made by Sam Eng, who was behind the indie shooter Zarvot. This new game is only available for PS5 subscribers.

Assassin's Creed Mirage will be available for both PS4 and PS5 players. This is the mainline entry from 2023 and it's actually really fun. It boasts a "back to basics" design that old-school fans of the franchise should appreciate.

Granblue Fantasy: Relink is a 3D action RPG that started its life as a mobile game. However, this particular RPG features music compositions from Nobuo Uematsu and art direction from Hideo Minaba. Both worked together on some games in a mom-and-pop franchise called Final Fantasy. This console port will be playable on PS4 and PS5.

Cat Quest III is a simple action RPG starring, well, cats (and dogs). This one brings open world tomfoolery to a land teeming with islands, so expect plenty of pirate puns. I enjoyed the first two, as the gameplay loop is pretty addictive and the quests are fun. It'll be available on PS4 and PS5.

Other forthcoming games include Wo Long: Fallen Dynasty, Lego Horizon Adventures, Paw Patrol: Grand Prix and Planet Coaster 2. Sony is also offering a holiday promotion in which new annual PS Plus subscribers receive download credits for movies.

This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/ps-plus-game-catalog-additions-for-december-include-assassins-creed-mirage-193731745.html?src=rss

State Department: Calibri font was a DEI hire

The US Department of State is unwinding a 2023 decision to use the sans-serif Calibri font on all official communications and switching to Times New Roman instead, The New York Times reports. In a memo obtained by NYT titled "Return to Tradition: Times New Roman 14-Point Font Required for All Department Paper," Secretary of State Marco Rubio frames the change as a way to return professionalism to the State Department.

"Switching to Calibri achieved nothing except the degradation of the department’s official correspondence," Rubio said in the memo. That's because the font is "informal" and clashes with the State Department's letterhead, according to Rubio, while serif fonts like Times New Roman "connote tradition, formality and ceremony."

Former Secretary of State Antony Blinken originally switched the State Department to Calibri in 2023 to improve the accessibility of official communications. The curvy, flourish-free lines of sans-serif fonts work better with assistive technologies like screen readers and text-to-speech tools. Serif fonts, meanwhile, are typically used in things like newspapers to make small, printed text legible.

While Rubio notes that Calibri "was not among the department’s most illegal, immoral, radical or wasteful instances of D.E.I.A.," it seems clear that Rubio lumps the font in with those same diversity, equity, inclusion and accessibility initiatives. Getting rid of it is an easy (and weirdly petty) way to follow through on the second Trump administration's anti-DEI stance towards just about everything.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/state-department-calibri-font-was-a-dei-hire-190454957.html?src=rss

Hackers tricked ChatGPT, Grok and Google into helping them install malware

Ever since reporting earlier this year on how easy it is to trick an agentic browser, I've been following the intersections between modern AI and old-school scams. Now, there's a new convergence on the horizon: hackers are apparently using AI prompts to seed Google search results with dangerous commands. When executed by unknowing users, these commands prompt computers to give the hackers the access they need to install malware.

The warning comes by way of a recent report from detection-and-response firm Huntress. Here's how it works. First, the threat actor has a conversation with an AI assistant about a common search term, during which they prompt the AI to suggest pasting a certain command into a computer's terminal. They make the chat publicly visible and pay to boost it on Google. From then on, whenever someone searches for the term, the malicious instructions will show up high on the first page of results.

Huntress ran tests on both ChatGPT and Grok after discovering that a Mac-targeting data exfiltration attack called AMOS had originated from a simple Google search. The user of the infected device had searched "clear disk space on Mac," clicked a sponsored ChatGPT link and — lacking the training to see that the advice was hostile — executed the command. This let the attackers install the AMOS malware. The testers discovered that both chatbots replicated the attack vector.

As Huntress points out, the evil genius of this attack is that it bypasses almost all the traditional red flags we've been taught to look for. The victim doesn't have to download a file, install a suspicious executable or even click a shady link. The only things they have to trust are Google and ChatGPT, which they've either used before or heard about nonstop for the last several years. They're primed to trust what those sources tell them. Even worse, while the link to the ChatGPT conversation has since been taken off Google, it was up for at least half a day after Huntress published its blog post.

This news comes at a time that's already fraught for both AIs. Grok has been getting dunked on for sucking up to Elon Musk in despicable ways, while ChatGPT creator OpenAI has been falling behind the competition. It's not yet clear if the attack can be replicated with other chatbots, but for now, I strongly recommend using caution. Alongside your other common-sense cybersecurity steps, make sure to never paste anything into your command terminal or your browser URL bar if you aren't certain of what it will do.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/hackers-tricked-chatgpt-grok-and-google-into-helping-them-install-malware-185711492.html?src=rss