What to read this weekend: Keanu Reeves wrote a book with ‘weird fiction’ author China Miéville

New releases in fiction, nonfiction and comics that caught our attention.

The cover for The Book of Elsewhere showing neon purple text on a space background with a line drawing

A few years ago, Keanu Reeves took a dive into the world of comics with a series called BRZRKR, which he wrote with longtime comic creator Matt Kindt. The limited series, which played out over 12 issues, follows a half-mortal, half-god warrior known as B who lives a violent existence but cannot die. And after 80,000 years of being alive, he really wants to. Eventually, he ends up working as a killing machine for the US government.

Netflix has plans for a film and anime spinoff of the series, and the BRZRKR universe is still growing even beyond that. This week, Reeves and author China Miéville — known for his works of “weird fiction” that blend sci-fi, fantasy and other genres — released The Book of Elsewhere, a novel that returns to the story of B in a pulpy, blood-soaked epic. It’s written with a unique style, starting off choppy in the prologue before shifting into something else entirely. If there’s one thing reviewers seem to agree on, it’s that this book is not afraid to get weird.

The book cover for Why Machines Learn: The Elegant Math Behind Modern AI

AI is all around us, and these days, conversations about the Big Tech race to build better and better systems sometimes feel almost inescapable. But how often do we on the outside stop and take a look at how we got here in the technical sense, down to the math that made it all possible?

In Anil Ananthaswamy’s new book, Why Machines Learn: The Elegant Math Behind Modern AI, the award-winning science journalist and author explains the history and mathematics underlying machine learning as we know it today. It’s not exactly light reading, but sometimes it’s nice to put your brain to work a little. You don’t need to be a math whiz to keep up with it — Ananthaswamy has said a basic understanding of calculus should be enough.

The cover for the new Teenage Mutant Ninja Turtles, showing the brothers in black-and-white with their face masks in color, against a city background that is tinted green

The Teenage Mutant Ninja Turtles are back in another new comic series from IDW, written by Jason Aaron (Batman: Off-World, Thor, Scalped), with art by Joëlle Jones (Lady Killer, Catwoman). The first issue was released this week — and it finds Raphael behind bars.

Teenage Mutant Ninja Turtles (2024) celebrates the 40th anniversary of the franchise that we as a society just cannot seem to get enough of (no complaints here). In it, the turtles have all split off on their own and left New York, and it looks like the first few issues will each focus on one of the brothers. But they’ll eventually be brought back together to do what they do best — fight bad guys and eat pizza. It’s meant to be something that even people who haven’t kept up with the many series over the years will be able to get into without feeling lost.

The AI prison of the future is just an Outer Limits episode

According to the Prison Policy Initiative, the US has a higher incarceration rate per 100,000 people than any other NATO country; in fact, its rate is higher than those of the next five member states (the UK, Portugal, Canada, France and Belgium) combined.

So what’s the solution? Hashem Al-Ghaili, a molecular biologist and science communicator from Yemen, claims in an interview with Wired that he’s got it: build a virtual prison instead. He’s not talking about stapling a bunch of Meta Quest 3s to prisoners’ heads for years at a time, but the concept isn’t far off from that.

Al-Ghaili is proposing a new neurological prison system that he calls Cognify. He posted a proposal video of the virtual justice system on his Instagram and YouTube channel and it looks downright horrifying.

Here’s how Cognify would work, in a theoretical nutshell: instead of locking prisoners up for long periods of time, they would be subjected to artificial memories in a virtual environment. The system creates customized AI-generated content that’s converted to visual information and delivered to the prisoner’s brain, as well as to the parts of their DNA and RNA linked to memory formation, to establish long-term memory patterns.

Currently, such technology does not exist, and Cognify is only a proposal. However, Al-Ghaili claims that experiments conducted on animals prove this process could work on humans at some point in the future. For instance, a study published in the scientific journal Nature in March, which used mice as its test subjects, found that memories are possibly formed by broken and repaired strands of DNA.

Of course, there are ethical implications that would need to be addressed if such a system were to become a reality. Al-Ghaili says Cognify could happen within a decade, but only “if we could overcome the ethical restrictions that limit testing such technology.”

If that doesn’t send a shiver up your spine, then check your wrist for a pulse. Horror anthology fans like me will remember an episode from the 1990s reboot of The Outer Limits on Showtime called “The Sentence” in which a scientist played by David Hyde Pierce invents a very similar virtual prison system that simulates an entire life sentence within a matter of minutes. He, of course, subjects himself to his own invention that makes him believe he committed a murder and served an entire lifetime in prison. He wakes up only to start denouncing the very system he championed just a few minutes earlier.

You can watch the whole thing on YouTube for free. Someone should send it to this guy.

Please don’t get your news from AI chatbots

This is your periodic reminder that AI-powered chatbots still make up things and lie with all the confidence of a GPS system telling you that the shortest way home is to drive through the lake.

My reminder comes courtesy of Nieman Lab, which ran an experiment to see if ChatGPT would provide correct links to articles from news publications it pays millions of dollars to. It turns out that ChatGPT does not. Instead, it confidently makes up entire URLs, a phenomenon that the AI industry calls “hallucinating,” a term that seems more apt for a real person high on their own bullshit.

Nieman Lab’s Andrew Deck asked the service to provide links to high-profile, exclusive stories published by 10 publishers that OpenAI has struck deals worth millions of dollars with. These included the Associated Press, The Wall Street Journal, the Financial Times, The Times (UK), Le Monde, El País, The Atlantic, The Verge, Vox, and Politico. In response, ChatGPT spat back made-up URLs that led to 404 error pages because they simply did not exist. In other words, the system was working exactly as designed: by predicting the most likely version of a story’s URL instead of actually citing the correct one. Nieman Lab did a similar experiment with a single publication — Business Insider — earlier this month and got the same result.
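If you’re curious whether a chatbot-supplied link actually resolves, that part of the check is trivial to automate. Here’s a minimal sketch (the URLs below are hypothetical stand-ins, not the ones from Nieman Lab’s experiment; a real reproduction would use whatever links the chatbot actually produced):

```python
import requests

# Hypothetical stand-in URLs of the kind a chatbot might "cite."
candidate_urls = [
    "https://www.example.com/2024/05/some-exclusive-story",
    "https://www.example.com/politics/a-slug-that-never-existed",
]

for url in candidate_urls:
    try:
        # HEAD is enough to see whether the page exists; follow redirects
        # so a moved-but-real article doesn't get flagged as fake.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        verdict = "OK" if resp.status_code < 400 else f"broken ({resp.status_code})"
    except requests.RequestException as exc:
        verdict = f"unreachable ({type(exc).__name__})"
    print(f"{verdict}\t{url}")
```

A 404 doesn’t prove intent, of course; it just confirms the model produced a plausible-looking address that doesn’t exist.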

An OpenAI spokesperson told Nieman Lab that the company was still building “an experience that blends conversational capabilities with their latest news content, ensuring proper attribution and linking to source material — an enhanced experience still in development and not yet available in ChatGPT.” But they declined to explain the fake URLs.

We don’t know when this new experience will be available or how reliable it will be. Despite this, news publishers continue to feed years of journalism into OpenAI’s gaping maw in exchange for cold, hard cash because the journalism industry has consistently sucked at figuring out how to make money without selling its soul to tech companies. Meanwhile, AI companies are chowing down on content published by anyone who hasn’t signed these Faustian bargains and using it to train their models anyway. Mustafa Suleyman, Microsoft’s AI head, recently called anything published on the internet “freeware” that is fair game for training AI models. Microsoft was valued at $3.36 trillion at the time I wrote this.

There’s a lesson here: If ChatGPT is making up URLs, it’s also making up facts. That’s how generative AI works — at its core, the technology is a fancier version of autocomplete, simply guessing the next plausible word in a sequence. It doesn’t “understand” what you say, even though it acts like it does. Recently, I tried getting our leading chatbots to help me solve the New York Times Spelling Bee and watched them crash and burn.
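To make the autocomplete analogy concrete, here’s a toy sketch of my own (not how ChatGPT or any production model actually works): it counts which word follows which in a tiny corpus and then greedily picks the most common continuation.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then repeatedly pick the most common continuation. Real models are far
# more sophisticated, but the guess-the-next-word core is the same idea.
corpus = "the cat sat on the mat because the cat was tired".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(word, steps=5):
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # the single most plausible next word
    return " ".join(out)

print(complete("the"))  # fluent-looking output, with no notion of truth
```

Nothing in that loop checks whether a continuation is true; it only checks whether it’s statistically plausible, which is exactly why confident nonsense comes out the other end.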

If generative AI can’t even solve the Spelling Bee, you shouldn't use it to get your facts.

Kobo Clara Colour review: Judging books by their covers is now more fun

Kobo isn’t the first on the color-ereader scene; Boox and Pocketbook have had color ereaders and tablets for years. Both of those companies make beautiful, premium devices that are highly capable and customizable — but they don’t offer the plug-and-play ereader experience of a Kindle or Kobo. Of all the ereaders I’ve tried over the past year, I’ve found Kobos do the best job of combining a user-friendly interface with quality hardware. And now that hardware has a new trick with a color screen on the Clara Colour.

It’s noteworthy that Kobo beat Kindle to the punch in getting a color ereader out the door. To be fair, Amazon is busy doing, well, everything, but it’s safe to bet that a color Kindle will be coming soon. For now, though, Kobo’s Clara Colour is the consumer-friendly color ereader to beat. A beefier processor makes it zippier than its already-fast predecessor, and the addition of color looks lovely, without detracting from the crisp and easy-to-read text. I’ll admit, I’m not an ereader diehard; I often return to my first love, print. But a few weeks with Kobo’s latest has me more excited than ever about reading on this cozy, effortless machine. 

Most e-paper devices rely on a display made by E Ink. The Clara Colour uses the company’s new Kaleido 3 panel, which adds a printed Color Filter Array (CFA) layer on top of the existing black-and-white microcapsule layer. The color layer can display around 4,000 colors, with a resolution of 150 dpi. To be clear, a full-color page on the Clara Colour looks nothing like what you’d get from the most basic LED screen. E-paper colors are muted and desaturated, reminiscent of ‘70s comic book covers. But, also unlike LED, E Ink color panels actually look better under bright light.
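For a rough sense of where that “around 4,000 colors” figure comes from: if each primary can show 16 distinct levels (an assumption on my part; the panel’s exact bit depth isn’t something E Ink spells out here), the combinations work out to 4,096. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope color count for a color e-paper panel.
# ASSUMPTION: 16 distinguishable levels per primary (red, green, blue);
# the panel's actual bit depth isn't stated in the review.
levels_per_primary = 16
primaries = 3

total_colors = levels_per_primary ** primaries
print(total_colors)  # 4096, i.e. "around 4,000 colors"
```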

The Kobo Clara Colour and the Kobo Clara 2E sit side by side.
Comparing the two generations at the same settings. Kobo Clara Colour (left) is warmer and slightly dimmer at 100% than the Kobo Clara 2E (right).
Photo by Amy Skorheim / Engadget

The monochrome microcapsule layer creates sharp, 300 dpi text, same as the previous generation. But set side-by-side with the Clara 2E, the Clara Colour’s page does look less sharp. Get close to the screen and you’ll notice noise in the white parts of the page. The warm front light is more amber, too. That’s the nature of the color filter array: since it’s always there, any text you read is filtered through that layer. I have to stress that it’s only something I noticed because I’m writing this review and digging deep into the performance as compared to the previous generation. When it comes to actually reading, I found I preferred the softer, warmer effect of the Colour. It reminds me of the pulpy mass-market Stephen King and Anne Rice paperbacks I grew up reading.

Kobo’s customization options aren’t overly involved, but they grant enough control so you can change things like the typeface, font size, line spacing and margin width, as well as brightness and light warmth. On the outside, the Kobo Clara 2E and the Clara Colour look nearly identical. The screen is slightly more recessed on the Colour model and the soft-touch plastic is more textured, which is actually a benefit because it shows fewer fingerprints. The centimeter-wide bezels are just big enough for your thumb, which, along with the textured back, makes the reader easy to hold from different positions. It’s small enough I can grip it around the back, but I have larger hands, so that might not work for everyone.

With an IPX8 rating, the Clara Colour can handle full submersion in water. I haven’t gone that far with this review unit, but it did survive when I accidentally splashed water on it while washing my hands in the bathroom. Why was it in the bathroom? Because I stash my book near the toilet so I don’t sit there and stare at my phone. It’s the tactic that got me reading again after I had a kid and was temporarily convinced I’d never finish another book. I heartily recommend it, particularly with a reading device like this one that can handle the watery environment of a restroom.

The Kobo Clara Colour and a trade paperback display the same page of a book.
Photo by Amy Skorheim / Engadget

The Clara Colour’s new chip makes loading menus, performing searches and flipping pages a touch faster than with the previous generation. The speed increase doesn’t amount to a drastically different experience, but quicker page turns keep the action going. Like if Murderbot is protecting its humans from HostileSecUnit1 and suddenly there’s another SecUnit at the bottom of the page, you need to know as fast as technologically possible what goes down next. Browsing for a new book and checking out previews is speedier, too, something I appreciate when everything on my dutifully curated TBR list looks like broccoli and I want ice cream.

The UX is the same as all Kobos that don’t support stylus input, with just four options along a bottom menu bar: Home, My Books, Discover and More. Discover takes you to the Kobo store, where you can look for ebooks, audiobooks and titles from Kobo Plus, the company’s monthly subscription for unlimited access to a selection of books (aka Kobo’s answer to Kindle Unlimited).

Discover’s recommendation section has a running list of titles called Just for You and, under Related Reads, suggests books you might like based on works you’ve finished. The connective threads between the titles aren’t anything surprising, but they offer a good place to start if you’re noodling on what to read next.

The Kobo Clara Colour sits on concrete in full sun.
Photo by Amy Skorheim / Engadget

Kobo’s deep integration with OverDrive lets you borrow any title your local library has available with just a few seconds of setup and a library card. Clicking the three dots near the Buy button on any book brings up the option to borrow (or place a hold on) the ebook from your library. I admire how deeply Kobo supports the feature, placing something free and public on par with paid books and subscriptions.

Other features are nice to have, like gathering your Pocket articles from the web so you can read them later in the more focused environment of your Kobo. There’s also a beta web browser that I used to look up the Wikipedia entry on the Mason-Dixon line when I read Percival Everett’s James and the one for rook (the bird) when reading Tana French’s The Hunter. The browser’s not equipped for heavy surfing, but that’s a good thing. The extra effort it takes to browse keeps me on target with my reading. At the same time, I’m happy to dig up a little background info without picking up my phone, where the distractions are plentiful and compulsive.

There’s no escaping the fact that a Kobo ereader is not a Kindle. But the advantages Kindle has over Kobo are mostly in the availability of titles, not in hardware. The Kobo Clara Colour is most directly comparable to the standard Kindle. They have the same basic shape, the same size screen with 300 dpi text and 16GB of storage. But the Kindle is $50 cheaper.

However! Amazon’s device will serve you ads on the lockscreen and it costs $20 extra to remove them. It’s also not waterproof and has no warm light. No Kindle has a color display yet, but there are plenty of rumors suggesting that move is (pretty obviously) on the horizon. For now, though, color is another point in Kobo’s favor.

That said, if you’ve spent the past decade amassing a small library on Amazon, you won’t be able to access it on a Kobo without some major, quasi-unlawful finagling. I only have a few Kindle titles from my past, so starting over with Kobo didn’t feel like a loss.

Amazon’s ebook store is larger than Kobo’s, boosted by Kindle Direct Publishing exclusives and self-published books. Kobo has its own self-publishing program, but it’s far smaller. That said, every in-print book from a major publisher will show up in both the Kindle and the Kobo store. Every title I’ve searched for in the Kobo store was readily available.

The Kobo Clara Colour is propped up on a shelf with decorative doodads nearby. The device displays the cover of a fantasy novel.
Photo by Amy Skorheim / Engadget

Amazon’s subscription program, Kindle Unlimited, is bigger too, with four million combined audio- and ebook titles available. Comparatively, Kobo Plus currently claims 1.5 million ebooks and 150,000 audiobooks. Kobo’s plan is a tad cheaper at $10 per month to both read and listen, or $8 for ebooks only. Kindle Unlimited is $12 monthly and gives you access to both formats. Neither subscription includes bestselling titles from major authors, but there’s still plenty to choose from.

However, Kobo’s ebook access does outmatch Kindle's in two ways: the ability to shop third-party outlets and an easier OverDrive experience. Amazon uses its own digital rights management (DRM) technology, whereas most everyone else relies on Adobe’s DRM. That means if you buy a book from most major publishers on a third-party site (like ebooks.com or Google Books), you won’t be able to read the ePub file on your Kindle. There are a few extra steps for reading those titles on a Kobo, but it's easy enough. As for OverDrive, reading public library books on a Kindle isn’t hard, but you have to first go to OverDrive’s or your library’s site, find your book and select “read on Kindle” as the delivery option. With a Kobo, you click the three dots next to Buy, select Borrow and start reading seconds later on the same device.

The big question is whether the addition of color makes the Kobo Clara Colour better and worth the extra $10 over the previous generation. The faster processor alone makes up for the price hike, and the waterproof build, warm front light and lack of ads make for a more premium ereader that justifies the $50 price disparity between the Clara Colour and the basic Kindle.

As for the color screen, it doesn’t make much difference when you’re reading a typical ebook. And the extra layer does add some noise to the whitespace and gives everything a warmer glow. But I didn’t mind the minute drop in clarity and actually preferred the softer, cozier appearance of the page. Colors look lovely on the book covers in my collection, and recommended titles draw me to them with their muted blues and washed-out reds.

You’ve probably heard of that trick where you switch your phone’s screen to grayscale to reduce its appeal. It seems to actually work, so I have to imagine the opposite is true, too. Anything that makes reading material more attractive — and better able to compete with the technicolor onslaught of digital distraction — is a win in my book.

Twitch introduces new filtering tools that let you exclude sexual and violent content

Twitch has updated its filtering tools to allow the exclusion of livestreams that feature mature themes, like sexual, violent and profane content. In other words, you won’t have to sift through hundreds of gross streams just to find someone innocently drinking soda pop and playing through Hades 2.

These new filter settings let people opt out of specific content labels, per the platform’s recently introduced Content Classification Guidelines. These guidelines require creators to appropriately label livestreams if they include stuff like sexual imagery, depictions of violence, gambling, excessive profanity and drug use. These labels also apply when streaming mature-rated games.

This will allow for a more curated experience, as people will be able to hide entire categories when searching for something to watch. Previously, these content labels were only used as data points to help Twitch users make informed viewing decisions.

The menu.
Twitch

The content classification filters are found in profile settings under Content Display Preferences. Once turned on, the filters will apply to all recommendations and search results, in addition to streams that pop up when aimlessly browsing. The system will remember preferred filter adjustments, so it should be a one-and-done trip to the settings page. For those under 18, Twitch automatically applies the vast majority of these filter settings.

There’s also another semi-related tool rolling out today: preview thumbnails can now be blurred for streams labeled as having sexual themes. This feature will be turned on by default and can be toggled on or off via settings. However, if you follow a channel, the thumbnail won’t be blurred, even if your classification labels rule out sexual content.

Twitch has been trying to nail down its policies regarding sexual content for a while now. It recently opened up the platform to nudity, as long as it was properly labeled, before changing its mind. Currently, the platform requires streamers to cover up their buttocks, genitals and (for female-presenting streamers) the nipples and underbust areas. Visible outlines of genitals are also prohibited, though all of this is liable to change.

Google Project Astra hands-on: Full of potential, but it’s going to be a while

At I/O 2024, Google’s teaser for Project Astra gave us a glimpse at where AI assistants are going in the future. It’s a multi-modal feature that combines the smarts of Gemini with the kind of image recognition abilities you get in Google Lens, as well as powerful natural language responses. However, while the promo video was slick, after getting to try it out in person, it's clear there’s a long way to go before something like Astra lands on your phone. So here are a few takeaways from our first experience with Google’s next-gen AI.

Sam’s take:

Currently, most people interact with digital assistants using their voice, so right away Astra’s use of multi-modality (i.e. sight and sound in addition to text/speech) to communicate with an AI is relatively novel. In theory, it allows computer-based entities to work and behave more like a real assistant or agent – which was one of Google’s big buzzwords for the show – instead of something more robotic that simply responds to spoken commands.

The first Project Astra demo we tried used a large touchscreen connected to a downward-facing camera.
Photo by Sam Rutherford/Engadget

In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its trusty baguette trying to escape an ominous red light. It was fun and the tale was cute, and the AI worked about as well as you would expect. But at the same time, it was far from the seemingly all-knowing assistant we saw in Google's teaser. And aside from maybe entertaining a child with an original bedtime story, it didn’t feel like Astra was doing as much with the info as you might want.

Then my colleague Karissa drew a bucolic scene on a touchscreen, at which point Astra correctly identified the flower and sun she painted. But the most engaging demo was when we circled back for a second go with Astra running on a Pixel 8 Pro. This allowed us to point its cameras at a collection of objects while it tracked and remembered each one’s location. It was even smart enough to recognize my clothing and where I had stashed my sunglasses even though these objects were not originally part of the demo.

In some ways, our experience highlighted the potential highs and lows of AI. Just the ability for a digital assistant to tell you where you might have left your keys or how many apples were in your fruit bowl before you left for the grocery store could help you save some real time. But after talking to some of the researchers behind Astra, there are still a lot of hurdles to overcome.

An AI-generated story about a dinosaur and a baguette created by Google's Project Astra
Photo by Sam Rutherford/Engadget

Unlike a lot of Google’s recent AI features, Astra (which is described by Google as a “research preview”) still needs help from the cloud instead of being able to run on-device. And while it does support some level of object permanence, those “memories” only last for a single session, which currently only spans a few minutes. And even if Astra could remember things for longer, there are things like storage and latency to consider, because for every object Astra recalls, you risk slowing down the AI, resulting in a more stilted experience. So while it’s clear Astra has a lot of potential, my excitement was weighed down with the knowledge that it will be some time before we get more fully featured functionality.

Karissa’s take:

Of all the generative AI advancements, multimodal AI has been the one I’m most intrigued by. As powerful as the latest models are, I have a hard time getting excited for iterative updates to text-based chatbots. But the idea of AI that can recognize and respond to queries about your surroundings in real-time feels like something out of a sci-fi movie. It also gives a much clearer sense of how the latest wave of AI advancements will find their way into new devices like smart glasses.

Google offered a hint of that with Project Astra, which may one day have a glasses component, but for now is mostly experimental (the glasses shown in the demo video during the I/O keynote were apparently a “research prototype”). In person, though, Project Astra didn’t exactly feel like something out of a sci-fi flick.

During a demo at Google I/O, Project Astra was able to remember the position of objects seen by a phone's camera.
Photo by Sam Rutherford/Engadget

It was able to accurately recognize objects that had been placed around the room and respond to nuanced questions about them, like “which of these toys should a 2-year-old play with.” It could recognize what was in my doodle and make up stories about different toys we showed it.

But most of Astra’s capabilities seemed on par with what Meta has already made available with its smart glasses. Meta’s multimodal AI can also recognize your surroundings and do a bit of creative writing on your behalf. And while Meta also bills the features as experimental, they are at least broadly available.

The Astra feature that may set Google’s approach apart is the fact that it has a built-in “memory.” After scanning a bunch of objects, it could still “remember” where specific items were placed. For now, it seems Astra’s memory is limited to a relatively short window of time, but members of the research team told us that it could theoretically be expanded. That would obviously open up even more possibilities for the tech, making Astra seem more like an actual assistant. I don’t need to know where I left my glasses 30 seconds ago, but if it could remember where I left them last night, that would actually feel like sci-fi come to life.

But, like so much of generative AI, the most exciting possibilities are the ones that haven’t quite happened yet. Astra might get there eventually, but right now it feels like Google still has a lot of work to do.

Catch up on all the news from Google I/O 2024 right here!


Audible is testing book recommendations based on your Prime Video habits

Audible is testing a new recommendation category that, as TechCrunch reports, suggests audiobooks based on what a user has recently watched on Prime Video, the Amazon-owned streaming service.

The new carousel should appear on mobile and web apps for about half of users who have Amazon Prime Video and Audible subscriptions. You might see recommendations as straightforward as the book a movie you watched is based on or titles with storylines or authors that users with similar preferences to you have enjoyed.

Audible says the decision came after an uptick in users accessing titles recently released as shows or movies. "There is a natural synergy between TV, movies, and books, and we see that clearly in how our customers engage with content on Audible," Andy Tsao, chief product and analytics officer at Audible, said in a statement. The company points to examples like Reacher, which came out on Prime Video in 2022; Audible claims that daily listenership of author Lee Child's books rose by almost 80 percent in the two weeks after its release.

Ryan Gosling and Miller/Lord’s Project Hail Mary could be the sci-fi event of 2026

Do you like rip-roaring science fiction books? Do you like movies? Then you are in for a treat in, well, two years. Amazon MGM Studios just set a release date of March 20, 2026 for Project Hail Mary, according to Deadline. It’s based on the Andy Weir novel of the same name, which was one of our favorite books of the past few years, so color us excited.

The film stars honorary SNL cast member Ryan Gosling and will be directed by Phil Lord and Christopher Miller, the duo behind The Lego Movie and, allegedly, most of the good parts of Solo: A Star Wars Story. Lord also wrote a little-known movie called Spider-Man: Into the Spider-Verse.

The script was penned by Drew Goddard, who cut his teeth on TV shows like Buffy the Vampire Slayer and Lost before moving on to features. He directed The Cabin in the Woods, which is somehow both iconic and underrated at the same time. If the name Andy Weir sounds familiar, it’s because he wrote a book called The Martian, which inspired the Matt Damon film. Incidentally, Goddard also wrote that script.

I’ve read the book and loved it. It’s more fantastical than The Martian, but still filled with the same science-based solutions to massive life-or-death problems. This time, the entire Earth is on the chopping block, instead of one lone astronaut. It’s also pretty dang funny, just like The Martian, so Lord and Miller are a good match to direct. The pair also signed on to direct an adaptation of another Weir novel, Artemis, but that project looks to have stalled.

Of course, a lot can happen in two years. Here’s hoping our humble little society keeps clunking along so we can chomp down some popcorn in 2026. Speaking of, that year will also see the release of The Mandalorian & Grogu, the Rey Skywalker film, the sequel to The Super Mario Bros. Movie, Toy Story 5, The Batman Part II and, reportedly, Avengers: The Kang Dynasty.
