Google Project Astra hands-on: Full of potential, but it’s going to be a while

At I/O 2024, Google’s teaser for Project Astra gave us a glimpse at where AI assistants are headed. It’s a multi-modal feature that combines the smarts of Gemini with the kind of image recognition abilities you get in Google Lens, as well as powerful natural language responses. However, while the promo video was slick, after getting to try it out in person, it’s clear there’s a long way to go before something like Astra lands on your phone. So here are a few takeaways from our first experience with Google’s next-gen AI.

Sam’s take:

Currently, most people interact with digital assistants using their voice, so right away Astra’s multi-modality (i.e. using sight and sound in addition to text/speech to communicate with an AI) is relatively novel. In theory, it allows computer-based entities to work and behave more like a real assistant or agent – which was one of Google’s big buzzwords for the show – instead of something more robotic that simply responds to spoken commands.

The first Project Astra demo we tried used a large touchscreen connected to a downward-facing camera.
Photo by Sam Rutherford/Engadget

In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its trusty baguette trying to escape an ominous red light. It was fun, the tale was cute, and the AI worked about as well as you’d expect. But at the same time, it was far from the seemingly all-knowing assistant we saw in Google’s teaser. And aside from maybe entertaining a child with an original bedtime story, it didn’t feel like Astra was doing as much with the info as you might want.

Then my colleague Karissa drew a bucolic scene on a touchscreen, at which point Astra correctly identified the flower and sun she painted. But the most engaging demo was when we circled back for a second go with Astra running on a Pixel 8 Pro. This allowed us to point its cameras at a collection of objects while it tracked and remembered each one’s location. It was even smart enough to recognize my clothing and where I had stashed my sunglasses even though these objects were not originally part of the demo.

In some ways, our experience highlighted the potential highs and lows of AI. Just the ability for a digital assistant to tell you where you might have left your keys or how many apples were in your fruit bowl before you left for the grocery store could help you save some real time. But after talking to some of the researchers behind Astra, there are still a lot of hurdles to overcome.

An AI-generated story about a dinosaur and a baguette created by Google's Project Astra
Photo by Sam Rutherford/Engadget

Unlike a lot of Google’s recent AI features, Astra (which Google describes as a “research preview”) still needs help from the cloud instead of being able to run on-device. And while it does support some level of object permanence, those “memories” only last for a single session, which currently spans just a few minutes. And even if Astra could remember things for longer, there are things like storage and latency to consider: for every object Astra recalls, you risk slowing down the AI, resulting in a more stilted experience. So while it’s clear Astra has a lot of potential, my excitement was weighed down by the knowledge that it will be some time before we get more full-featured functionality.
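
To make that storage-and-latency tradeoff concrete, here’s a minimal sketch of how a session-scoped object memory might behave: sightings expire with the session, and capacity is capped so recall stays fast. Everything here, names and limits included, is a hypothetical illustration rather than Google’s actual implementation.

```python
import time
from collections import OrderedDict

class SessionObjectMemory:
    """Illustrative session-scoped "object permanence" store.

    Entries expire when the session window lapses, and capacity is capped
    so recall stays fast, mirroring the storage/latency tradeoff above.
    A hypothetical sketch, not Google's code.
    """

    def __init__(self, session_seconds: float = 180.0, max_objects: int = 64):
        self.session_seconds = session_seconds  # Astra's window reportedly spans a few minutes
        self.max_objects = max_objects          # every extra entry is more to search per query
        self._seen = OrderedDict()              # label -> (location, last_seen_time)

    def observe(self, label: str, location: str) -> None:
        """Record (or refresh) where an object was last seen."""
        self._seen[label] = (location, time.monotonic())
        self._seen.move_to_end(label)
        if len(self._seen) > self.max_objects:
            self._seen.popitem(last=False)  # evict the least recently seen object

    def recall(self, label: str):
        """Return the last known location, or None once the session has lapsed."""
        entry = self._seen.get(label)
        if entry is None:
            return None
        location, seen_at = entry
        if time.monotonic() - seen_at > self.session_seconds:
            del self._seen[label]  # "memories" only last for a single session
            return None
        return location

memory = SessionObjectMemory()
memory.observe("sunglasses", "on the table, next to the red apple")
print(memory.recall("sunglasses"))  # found, until the session window expires
```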

Karissa’s take:

Of all the generative AI advancements, multimodal AI has been the one I’m most intrigued by. As powerful as the latest models are, I have a hard time getting excited for iterative updates to text-based chatbots. But the idea of AI that can recognize and respond to queries about your surroundings in real-time feels like something out of a sci-fi movie. It also gives a much clearer sense of how the latest wave of AI advancements will find their way into new devices like smart glasses.

Google offered a hint of that with Project Astra, which may one day have a glasses component, but for now is mostly experimental (the glasses shown in the demo video during the I/O keynote were apparently a “research prototype”). In person, though, Project Astra didn’t exactly feel like something out of a sci-fi flick.

During a demo at Google I/O, Project Astra was able to remember the position of objects seen by a phone's camera.
Photo by Sam Rutherford/Engadget

It was able to accurately recognize objects that had been placed around the room and respond to nuanced questions about them, like “which of these toys should a 2-year-old play with?” It could recognize what was in my doodle and make up stories about different toys we showed it.

But most of Astra’s capabilities seemed on par with what Meta has already made available with its smart glasses. Meta’s multimodal AI can also recognize your surroundings and do a bit of creative writing on your behalf. And while Meta also bills the features as experimental, they are at least broadly available.

The Astra feature that may set Google’s approach apart is the fact that it has a built-in “memory.” After scanning a bunch of objects, it could still “remember” where specific items were placed. For now, it seems Astra’s memory is limited to a relatively short window of time, but members of the research team told us that it could theoretically be expanded. That would obviously open up even more possibilities for the tech, making Astra seem more like an actual assistant. I don’t need to know where I left my glasses 30 seconds ago, but if it could remember where I left them last night, that would actually feel like sci-fi come to life.

But, like so much of generative AI, the most exciting possibilities are the ones that haven’t quite happened yet. Astra might get there eventually, but right now it feels like Google still has a lot of work to do.

Audible is testing book recommendations based on your Prime Video habits

Audible is testing a new category of book recommendations based on what a user watched recently on Prime Video. As the name suggests, it will show you audiobooks based on what you watch on the Amazon-owned service, TechCrunch reports.

The new carousel should appear on the mobile and web apps for about half of users who have both Prime Video and Audible subscriptions. You might see recommendations as straightforward as the book a movie you watched is based on, or titles with storylines or authors that users with similar tastes have enjoyed.
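
As a rough illustration of the two paths that description implies (a direct adaptation lookup, plus titles liked by users with overlapping watch histories), here’s a toy sketch. All of the data and matching logic is invented for the example; Audible hasn’t detailed how the feature actually works.

```python
# Toy sketch of the two recommendation paths described above. All data and
# matching logic here is invented for illustration; it is not Audible's system.

ADAPTATIONS = {"Reacher": "Killing Floor"}  # show/movie -> the audiobook it's based on

# Hypothetical other users: (what they watched, audiobooks they enjoyed)
OTHER_USERS = [
    ({"Reacher", "Jack Ryan"}, ["Without Fail"]),
    ({"Fallout", "Reacher"}, ["Persuader"]),
]

def recommend(watched: set, max_titles: int = 5) -> list:
    recs = []
    # Path 1: the book a movie or show you watched is based on
    for title in watched:
        book = ADAPTATIONS.get(title)
        if book and book not in recs:
            recs.append(book)
    # Path 2: titles enjoyed by users whose watch history overlaps yours
    for their_watched, their_books in OTHER_USERS:
        if watched & their_watched:  # any shared title counts as "similar" here
            for book in their_books:
                if book not in recs:
                    recs.append(book)
    return recs[:max_titles]

print(recommend({"Reacher"}))  # ['Killing Floor', 'Without Fail', 'Persuader']
```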

Audible says the decision stems from an uptick it saw in users accessing titles recently released as shows or movies. "There is a natural synergy between TV, movies, and books, and we see that clearly in how our customers engage with content on Audible," Andy Tsao, chief product and analytics officer at Audible, said in a statement. The company gives examples such as Reacher, which premiered on Prime Video in 2022. Audible claims daily listenership of author Lee Child's books rose by almost 80 percent in the two weeks after the show's release.

Ryan Gosling and Miller/Lord’s Project Hail Mary could be the sci-fi event of 2026

Do you like rip-roaring science fiction books? Do you like movies? Then you are in for a treat in, well, two years. Amazon MGM Studios just set a release date of March 20, 2026 for Project Hail Mary, according to Deadline. It’s based on the Andy Weir novel of the same name, which was one of our favorite books of the past few years, so color us excited.

The film stars honorary SNL cast member Ryan Gosling and will be directed by Phil Lord and Christopher Miller, the duo behind The Lego Movie and, allegedly, most of the good parts of Solo: A Star Wars Story. Lord also wrote a little-known movie called Spider-Man: Into the Spider-Verse.

The script was penned by Drew Goddard, who cut his teeth on TV shows like Buffy the Vampire Slayer and Lost before moving on to features. He directed The Cabin in the Woods, which is somehow both iconic and underrated at the same time. If the name Andy Weir sounds familiar, it’s because he wrote a book called The Martian, which inspired the Matt Damon film. Incidentally, Goddard also wrote that script.

I’ve read the book and loved it. It’s more fantastical than The Martian, but still filled with the same science-based solutions to massive life-or-death problems. This time, the entire Earth is on the chopping block, instead of one lone astronaut. It’s also pretty dang funny, just like The Martian, so Lord and Miller are a good match to direct. The pair also signed on to direct an adaptation of another Weir novel, Artemis, but that project looks to have stalled.

Of course, a lot can happen in two years. Here’s hoping our humble little society keeps clunking along so we can chomp down some popcorn in 2026. Speaking of, that year will also see the release of The Mandalorian & Grogu, the Rey Skywalker film, the sequel to The Super Mario Bros. Movie, Toy Story 5, The Batman Part II and, reportedly, Avengers: The Kang Dynasty.

Apple TV+’s Dark Matter series takes on one of Blake Crouch’s best books

Apple just dropped a trailer for another entry in its never-ending cavalcade of sci-fi shows. Dark Matter stars Joel Edgerton and Jennifer Connelly. It also happens to be based on a fantastic book by author Blake Crouch, which we recommended back in 2021. The show premieres on May 8 with two episodes.

I’ve read the book and loved it, but there will be no real spoilers here. Dark Matter follows a physicist who gets involved in some serious sci-fi shenanigans. The trailer gives a bit of the plot away, enough to understand that these particular sci-fi shenanigans are of the multiversal variety. Again, the book is a rip-roaring page-turner, so the show should follow suit. The rest of the cast includes Jimmi Simpson, Alice Braga, Dayo Okeniyi and Oakes Fegley.

Crouch is actually the showrunner here, which is a first for the author. This isn’t, however, the first TV show based on one of his books. Wayward Pines ran on Fox for two seasons and was based on a series of novels. Good Behavior, also pulled from a book series, aired on TNT back in 2016. The writer has penned a bunch of novels that haven’t been turned into TV shows. We heartily recommend Upgrade, which made our list of the best books of 2022.

Dark Matter joins an absolutely stacked collection of sci-fi shows on Apple TV+. There are the heavy hitters like Severance, For All Mankind and Silo, but also a bunch of lesser-known programs like Invasion and the recently released Constellation. I’m not done. Monarch: Legacy of Monsters put Kurt Russell up against Godzilla, and Hello Tomorrow is set in a retro-future wonderland. I’m still not done. See, Schmigadoon and The Last Days of Ptolemy Grey all have sci-fi elements. Finally, there’s that fantasy show about an American college football coach who somehow becomes a soccer sensation in the UK without actually knowing anything about the sport.

Jon Stewart says Apple asked him not to host FTC Chair Lina Khan

Jon Stewart hosted FTC (Federal Trade Commission) chair Lina Khan on his weekly Daily Show segment yesterday, but Stewart's own revelations were just as interesting as Khan's. During the sit-down, Stewart admitted that Apple had asked him not to host Khan on his podcast, which at the time was an extension of his Apple TV+ show The Problem with Jon Stewart.

"I wanted to have you on a podcast and Apple asked us not to do it," Stewart told Khan. "They literally said, 'Please don’t talk to her.'"

In fact, the entire episode appeared to have a "things Apple wouldn't let us do" theme. Ahead of the Khan interview, Stewart did a segment on artificial intelligence he called "the false promise of AI," effectively debunking the altruistic claims of AI leaders and positing that the technology is strictly designed to replace human employees.

"They wouldn’t let us do even that dumb thing we just did in the first act on AI," he told Khan. "Like, what is that sensitivity? Why are they so afraid to even have these conversations out in the public sphere?"

"I think it just shows the danger of what happens when you concentrate so much power and so much decision making in a small number of companies," Khan replied.

The Problem with Jon Stewart was abruptly canceled ahead of its third season, reportedly following clashes over potential AI and China segments. That prompted US lawmakers to question Apple, seeking to know whether the decision had anything to do with possible criticism of China.

While stating that Apple has the right to stream any content it wants, the bipartisan committee wrote that "the coercive tactics of a foreign power should not be directly or indirectly influencing these determinations." (Apple's response, if any, has yet to be released.)

Stewart didn't say that the AI and Khan interview issues were the reason his show was canceled, but they do indicate that Apple asserted editorial influence over topics that directly involved it.

Elsewhere in the segment, Khan discussed the FTC's lawsuit against Amazon, stating that the FTC alleges the company is a monopoly maintained via illegal practices (exorbitant seller fees, shady ads). They also touched on the FTC's lawsuit against Facebook, tech company collusion via AI, corporate consolidation, exorbitant drug prices and more.

Now it’s NVIDIA being sued over AI copyright infringement

It's getting hard to keep up with copyright lawsuits against generative AI, with a new proposed class action hitting the courts last week. This time, authors are suing NVIDIA over NeMo, its AI platform for building and training language models that lets businesses create their own chatbots, Ars Technica reported. They claim the company trained it on a controversial dataset that illegally used their books without consent.

Authors Abdi Nazemian, Brian Keene and Stewart O’Nan demanded a jury trial and asked NVIDIA to pay damages and destroy all copies of the Books3 dataset used to power NeMo large language models (LLMs). They claim the dataset copied a shadow library called Bibliotik, which consists of 196,640 pirated books.

"In sum, NVIDIA has admitted training its NeMo Megatron models on a copy of The Pile dataset," the claim states. "Therefore, NVIDIA necessarily also trained its NeMo Megatron models on a copy of Books3, because Books3 is part of The Pile. Certain books written by Plaintiffs are part of Books3— including the Infringed Works—and thus NVIDIA necessarily trained its NeMo Megatron models on one or more copies of the Infringed Works, thereby directly infringing the copyrights of the Plaintiffs. 

In response, NVIDIA told The Wall Street Journal that "we respect the rights of all content creators and believe we created NeMo in full compliance with copyright law."

Last year, OpenAI and Microsoft were hit with a copyright lawsuit from nonfiction authors, claiming the companies made money off their works but refused to pay them. A similar lawsuit was launched earlier this year. That's on top of a lawsuit from news organizations like The Intercept and Raw Story and, of course, the legal action from The New York Times that kicked all of this off.

Why Jack Dorsey thought Elon Musk could fix Twitter

Of the many bizarre moments that preceded Twitter's change in ownership, one that’s always stuck out to me was Jack Dorsey’s tweetstorm declaring that “Elon is the singular solution I trust.” His insistence that Musk was uniquely positioned to “extend the light of consciousness” was a strange endorsement, even by Dorsey’s usual weird-guy standards. But Dorsey had long idolized Musk, and the two men had a relationship that ran far deeper than many onlookers realized.

That’s according to a new book that explores Jack Dorsey’s role in Elon Musk’s takeover of Twitter. Written by Bloomberg reporter Kurt Wagner, Battle for the Bird tells the story of how Dorsey saved Twitter in 2015 and how his actions, or often his lack thereof, led to Musk’s acquisition and, ultimately, Twitter’s death.

Wagner’s isn’t the first book to delve into the tumultuous events of the last two years — Musk biographer Walter Isaacson had a front-row seat to the drama — but Battle for the Bird sheds new light on Dorsey's side of the equation. “Jack had been bringing Elon to Twitter offsites, he'd visited him at his SpaceX launch facility, the two of them sort of had this relationship that I don't really think people paid much attention to,” Wagner tells Engadget. So once Musk began acquiring a large stake in the company, “Jack sort of stepped in and did what he could” to make the deal happen.

The book, which began as a Dorsey biography before Musk’s takeover forced Wagner to change his plans, focuses on the enigmatic Twitter co-founder whose unusual management style sometimes worked against the company’s own interests.

Inside Twitter, Wagner writes, Dorsey was known to “rarely speak” in meetings and disliked making decisions. This was a source of confusion, as executives often had to guess what Dorsey was thinking about a particular issue. “People would be surprised at how little he was directing [Twitter and Square], he was really advising them in a weird way,” Wagner says.

These dynamics played out in Twitter’s product. Wagner reports that Dorsey had initially encouraged the product team to create the feature that was eventually known as “Fleets,” Twitter’s experiment with disappearing posts. But Dorsey “grew to despise” the feature and publicly cheered when the company killed it less than a year after its rollout. “Even though he thought Fleets was a bad decision, he never stepped in to halt the product or move the team in another direction,” Wagner writes.

Battle for the Bird also details Dorsey’s many eccentricities: the days-long silent meditation retreats, his affinity for “salt juice” (a mixture of water, pink Himalayan sea salt and lemon juice) and his more recent obsession with bitcoin. “He goes through these stages of his life where he's different, he looks different, he acts different, his priorities are different and I think it's sort of a reflection of the things that he becomes obsessed with,” Wagner says.

Giving Musk a more influential role at Twitter was another idea Dorsey fixated on. He tried to get Musk a seat on the company’s board in 2020 amid a bruising fight with activist investor Elliott Management. Dorsey managed to keep his job but failed to get Musk a board seat because, according to what he told Musk, the rest of the board were “super risk averse.” (By 2020, Musk had already faced at least two major lawsuits over his tweets.)

Dorsey would also tell Musk that the board’s veto was “about the time I decided I needed to work to leave” the company. He had always seemed uninterested in the business of running Twitter, but the troubles with Elliott seemed to change him. “He thought that Twitter served this bigger purpose … its place in the world was not to make money for shareholders,” Wagner explains. “And as a result, he was just not really that interested in playing the Wall Street game, which is a problem when you're a publicly traded company.”

So in 2022, after he had stepped down as CEO, Dorsey encouraged Musk to use his new position as a major stakeholder to address Twitter’s “original sin” of existing as a corporate entity beholden to advertisers and political interests. Dorsey believed that Musk loved Twitter for the same reasons he did. So when Musk decided to buy the company and take it private, Dorsey backed him.

Dorsey publicly endorsed the move and promised to roll over his Twitter shares into the new entity, effectively saving Musk about $1 billion. He, along with the rest of the company’s board, voted to approve the deal.

As Wagner points out in Battle for the Bird, Dorsey eventually soured on Musk after Musk tried to back out of the deal, saying “it all went south.” But by then, Jack Dorsey’s Twitter was already unrecognizable. “He so publicly endorsed this new idea, this takeover from Elon,” Wagner says. “And as a result, the company that he co-founded and led for almost 16 years in various ways, is no more. X is here, but Twitter is gone. His legacy has really been hurt by this whole debacle.”
