Doctor Who: The Legend of Ruby Sunday review: What legend?

The following contains spoilers for “The Legend of Ruby Sunday.”

In an episode full of misdirection, the biggest one has to be its title, given that we’ve learned very little about what Ruby Sunday’s legend actually is. Instead, the first part of the series’ two-part finale is essentially an hour spent building a sense of dread that spills over in its final moments. I could cheat and say “The Legend of Ruby Sunday” is just “Army of Ghosts” — the first half of the 2006 season’s finale — with a bigger budget. Except the big bad that reveals itself at the end is a villain pulled from a far deeper cut of Doctor Who history than usual.

The Doctor and Ruby arrive at UNIT HQ to ask about the mysterious woman — Susan Twist — following them around the universe. UNIT, meanwhile, has been monitoring someone named Susan Triad, a British tech billionaire who will announce her gift to humanity later that day. Even the goofballs at UNIT work out that S.TRIAD is an anagram of TARDIS, and the Doctor thinks Triad, or the mysterious woman more generally, could be his granddaughter.

But there’s also the matter of Ruby’s parentage to uncover, giving the Doctor a reason not to just confront Triad. The Doctor, Ruby and a UNIT soldier enter the time window — a low-grade holodeck — to try and see who left Ruby on the steps of the church. But the history’s a bit wonky, and Ruby’s faceless mother — unlike what we saw in “The Church on Ruby Road” — turns and ominously points toward the TARDIS. Not long after, the TARDIS is engulfed in a black cloud of swirling evil that nobody’s sure what to do about.

The Doctor then meets Triad just before she gets on stage, prompting her to remember all of her other selves. Whenever Triad dreams, she’s somehow aware of those myriad alternate selves. And while she takes to the stage, the Doctor asks the team at UNIT HQ to scan the TARDIS. It is similarly engulfed in an invisible cloud of malevolent stuff that’s threatening everyone in the area.

Susan Triad on stage.
Bad Wolf / BBC Studios

[ASIDE: This is the second time in four years that Doctor Who has tried to parody an Apple Keynote. And this is the second time that they’ve totally misunderstood how to stage one that looks even remotely evocative of what they’re parodying. I know the conventions of the tech keynote have mutated since the Steve Jobs era, but they’re not even trying.]

A UNIT staffer, Harriet Arbinger (wait… H. Arbinger?), starts muttering about a dark prophecy while Triad goes off script. The Doctor, standing close by, watches as she turns into a skeleton monster while the TARDIS is menaced by a giant animal head surrounded by Egyptian iconography. Turns out Susan isn’t the Doctor’s granddaughter, or even a key component of the story, but an innocent. An innocent who has been co-opted by Sutekh, an all-powerful Egyptian god we first saw in 1975’s “Pyramids of Mars.” Cue the credits.

It’s a slender synopsis, mostly because these scenes are played slowly as the tension ratchets up. “The Legend of Ruby Sunday” takes its time, letting the screw turn gently until you’re almost relieved when the big reveal happens. It’s a gripping ride on a first watch, although I imagine it won’t hold up as well on a third or fourth viewing. But, then again, that’s often been an issue with episodes penned by Russell T. Davies. It’s also a good way to juice bookings for next week’s finale, which will get a UK cinema release on June 21.

Was it easy to guess that we’d be getting Sutekh back after his one outing in “Pyramids of Mars”? The rumor mill certainly pulled in that direction over the last month or so, and it’s not as if we didn’t get a clue or two along the way. Longtime Davies fans will recall that Vince watches the “Pyramids of Mars” part one cliffhanger at the end of the first episode of Queer as Folk. And we’ve already had a whole scene from “Pyramids of Mars” lifted — the jump into a ruined future — in “The Devil’s Chord.”

Image of Ruby, The Doctor and Mel.
Bad Wolf / BBC Studios

If you are unfamiliar, “Pyramids of Mars” is a classic, and another blockbuster from the pen of the series’ best 20th century writer, Robert Holmes. At the time, Holmes was the series’ script editor and had commissioned a story from writer Lewis Griefer. But Griefer’s material was so poor that Holmes and producer Philip Hinchcliffe decided a replacement was needed, and Holmes was tasked with writing a whole new episode in a tiny amount of time. The finished episode was credited to the pseudonym Stephen Harris, but it’s all Holmes under the hood. Sadly, because of various rules around writing credits, the “The Legend of Ruby Sunday” end credits name Lewis Griefer as Sutekh’s creator and omit Holmes, which feels pretty rough.

But that one minor injustice aside, let’s bring on the finale.

Susan Twist Corner

  • Well, looks as if we have our answer that Susan Twist was something of a misdirect.

  • Gabriel Woolf, who voiced Sutekh in 1975, is back to give voice to him now.

  • When Mrs. Flood was left to look after Cherry, she was clearly aware of Sutekh’s return and seemed delighted by it. But she didn’t appear to be a harbinger, so it’s likely she’s representing another, different malevolent character from the series' past.

This article originally appeared on Engadget at https://www.engadget.com/doctor-who-the-legend-of-ruby-sunday-review-what-legend-120004162.html?src=rss

Until Dawn’s original actors will not star in its film adaptation

PlayStation Productions and Screen Gems have announced the cast for the upcoming movie adaptation of the interactive horror game Until Dawn. According to Deadline, the ensemble will include Ella Rubin, who stars alongside Anne Hathaway in Amazon Prime's The Idea of You, and Michael Cimino, who played Victor Salazar in Hulu's Love, Victor. Expats' Ji-young Yoo and Sitting in Bars with Cake's Odessa A'zion have also signed on. The game they're adapting revolves around eight young adults who have to survive the night at a remote mountain lodge while being hunted by a killer.

Supermassive Games got some pretty well-known actors to provide motion capture and voice acting for the game's characters, including Rami Malek and Hayden Panettiere. They're no longer the right age to play their original roles, so it doesn't come as a surprise that they're not involved in the project. But because the original cast were famous faces rather than anonymous motion capture performers, the filmmakers are in an unusual position: well-known actors' likenesses are tied to characters that other people will now portray.

"At PlayStation Productions, we are always looking to find creative and authentic ways to adapt our beloved games that our fans will enjoy," Asad Qizilbash, head of Sony's production company, told Deadline. "Alongside Screen Gems, we’ve assembled a fantastic cast of new characters that builds upon our already stellar filmmaking team and their vision for the adaptation."

The game itself is getting a remake for the PS5 and PC. Rebuilt in Unreal Engine 5, it will add a third-person camera mode, new locations and new interactions to the original. Until Dawn's remake is coming out sometime this fall.

This article originally appeared on Engadget at https://www.engadget.com/until-dawns-original-actors-will-not-star-in-its-film-adaptation-110036254.html?src=rss

One of the biggest games on Steam right now is… a clickable banana

If you regularly stare at the Steam charts to see if there’s anything new and exciting to play, you may have noticed an odd little “game” called Banana. It has quickly become a huge success and, as of this writing, sits at the number three spot with over 400,000 concurrent players. It’s a simple idle clicker game, like many before it, so what’s making players flock to what amounts to a static screen of a huge banana?

The promise of sweet, sweet cash, that’s what. It’s an extremely bare-bones title that has you repeatedly clicking on a banana. That’s pretty much it, though there’s a twist. As you click and click on the tropical fruit, there’s a chance of a banana sticker dropping into your Steam inventory. These bananas come in all different designs, from silver-encrusted variants to one that looks like it's glitching out from a hack.

A silver banana.
aaladin66, Pony, Sky, AestheticSpartan

Because the bananas show up in your inventory, they can be sold on the Steam Marketplace. Rare bananas have already gone for as much as $1,400, though the average payout is somewhere in the $0.02 range. One of the developers called it a “legal infinite money glitch” in an interview with Polygon. “Users make money out of a free game while selling free virtual items,” he continued.

The money earned goes into a Steam wallet, which can then be used to purchase games. So these bananas are basically NFTs, only without the blockchain. People are buying and selling them like crazy, like weird fruit-based trading cards. Forget the banana stand: it looks like there’s money in just the facsimile of a banana.

If the idea of spending all day clicking on a fake banana in front of a vomit-green background doesn’t do it for you, the developers sell inventory bananas outright for $0.25 a pop. The game itself, however, is free to play. The devs deny allegations that the clicker is some sort of scam or a Ponzi scheme, simply saying that it’s “pretty much a stupid game.” Idle clickers, after all, are nothing new.

As for the future, the designers have teased updates, including a way to use inventory items to change the way the plain in-game banana looks. There also might be a minigame coming down the pike, as well as a shop upgrade that lets players exchange multiples of the same banana for a unique drop. One thing is a near certainty: the massive popularity of Banana is sure to inspire a whole bunch of copycats. May I humbly suggest a pizza slice as something to click over and over?

This article originally appeared on Engadget at https://www.engadget.com/one-of-the-biggest-games-on-steam-right-now-is-a-clickable-banana-190058749.html?src=rss

Neva hands-on: A grand achievement in emotional game design

Neva is going to make me cry. It very nearly did at Summer Game Fest, as the game’s introductory cinematics faded to black, literally just one minute into my time with the demo. I won’t divulge what happens in those initial frames, but it shattered my soul. It also perfectly primed me for the heart-pounding danger and devastating beauty that I would get lost in for the next 45 minutes, alongside my new best friend, Neva the wolf.

Neva
Nomada Studio

Every aspect of Neva is breathtaking. It plays like a living watercolor illustration: Alba, the protagonist, has long, slender limbs, a cloud of silver hair and a flowing red cloak that drapes behind her elegantly with each leap and fall. Neva is a young white wolf, fluffy and energetic, and the two share an intense bond that’s repeatedly reinforced and tested in the demo.

The world of Neva feels slightly more grounded than that of Gris, the game that put Nomada Studio on the map in 2018, but it’s still filled with layers of magic. The landscapes beyond the 2D plane that Alba and Neva traverse have incredible depth — dense forests hiding secrets and mountain ranges towering above wide valleys, sharp peaks piercing the sky in the far distance. The demo has lush glades draped in vines and weeping branches, sunlight streaming through the gaps in the leaves, as well as cave systems with dark, tight corridors. At times Neva takes the Frank Lloyd Wright approach to design, squeezing players through claustrophobic thickets that suddenly burst onto fields of thick green grass, the camera pulling back to show how small Alba and Neva really are in this space.

Neva
Nomada Studio

Trees, leaves, rocks and roots compose the game’s sidescrolling playground, with sloping platforms and floating islands built mainly out of stone. Touches of fantastical alien technology appear with increasing frequency as the demo progresses, as do hordes of inky-black enemies with round white faces, mouths open in silent screams.

Platforming in Neva is intuitive. There’s minimal on-screen text in the game, and instead direction comes from the environment, soft highlights and sunkissed glows marking the proper paths in a way that feels completely natural. I flowed through most areas of the demo, leaping onto ledges with almost-subconscious impulses, knowing that I could trust the game’s subtle instructions. There are areas of spiky blackness that Alba has to clear for Neva to be able to progress, and at times it’s necessary to leave the little wolf behind for a moment, generating instant separation anxiety. Neva yelps and squeaks as she learns how to traverse the world, and they’re heart-wrenching sounds. I was keenly aware of Neva with each jump, making sure she could follow my path, lingering to watch her complete big leaps, petting her after each success, and consistently calling out her name.

Alba’s voice is fairy-like and the way she says, “Neva? Neva. Nevaaa!” has become an earworm I can’t shake. In the days since coming home from Summer Game Fest and reuniting with my two small dogs, I’ve been walking around the house saying, “Neva?” as if it were their names. It’s been a very confusing time for them, but they’ve gotten a few extra treats, so all’s well.

Combat in Neva feels as intuitive as platforming, with simple inputs that land satisfying hits of Alba’s sword. The enemies, long-limbed creatures that appear out of dark pools in the ground, slash at Alba with their spiky fingers and throw lethal blobs at her, but one-on-one, they’re fairly easy to dispatch. Alba is able to get incredibly close to each creature before she takes damage, and this generous proximity makes the fight scenes feel like dance, with constant action and minimal interruptions. I didn’t die until I reached the boss fight at the end of the demo, where Neva and I had to fight off a giant creature, double jumping around it to slash at its legs and back, avoiding its attacks. I defeated the boss after three deaths, and the scene felt like an appropriate escalation of everything I’d learned so far.

Neva
Nomada Studio

I’m convinced that every preview of Neva (including this one) will mention how quickly and easily the game will make players cry, and I want to take a moment to recognize the magnitude of this achievement. The bond that Nomada Studio have built between Neva and Alba is incredibly powerful, and this type of emotional connection doesn’t just happen when you put an animal and a human in the same scene. Neva is a constant source of anxiety and joy: The cub must be protected, at all costs, and she feels like a physical part of Alba’s being, necessary to the protagonist’s survival. Neva establishes their shared trauma and every following mechanic reinforces their partnership — protect, pet, repeat. Neva and Alba need each other, and their shared love resonates from each frame of the game.

Guaranteed, Neva is going to make me cry.

Neva is due out on PC and PlayStation 5 this year, developed by Nomada Studio and published by Devolver Digital.


Catch up on all of the news from Summer Game Fest 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/neva-hands-on-a-grand-achievement-in-emotional-game-design-180516649.html?src=rss

Picsart and Getty are making an AI image generator entirely trained on licensed content

Getty has partnered up with Picsart, a popular photo-editing platform, to build an AI image generator that’s entirely trained on licensed stock images. The companies are calling it a “responsible, commercially-safe” alternative to current platforms. Images created by the model will have full commercial rights, which should address concerns about AI-generated images violating copyright laws.

The service will only be available to paid Picsart subscribers and the whole thing recalls Adobe’s Firefly AI model. That generator is also trained on stock images, though not exclusively. Adobe recently outraged users by updating its terms of service to indicate that it could access and use people’s work to train AI models. The company quickly amended the terms of service once the backlash started spreading.

Picsart and Getty hope to avoid any backlash by sticking to fully licensed stock images, so regular Picsart users won’t be at risk of having their creations snatched up by the model for training and generation purposes. “It allows creators to bring their visions to life while maintaining the highest standards of commercial safety,” Grant Farhall, CPO at Getty Images, wrote in a blog post.

It also looks like Getty is playing fair with this one, for those worrying about the work of professional photographers being co-opted. We reached out to the company and a rep said that it is "compensating creators included in the dataset on an annual basis." That's something at least!

The Picsart x Getty Images model releases later this year, though there’s no concrete launch date. It’ll be accessible via Picsart’s API services.

This article originally appeared on Engadget at https://www.engadget.com/picsart-and-getty-are-making-an-ai-image-generator-entirely-trained-on-licensed-content-154058696.html?src=rss

House of the Dragon renewed for season 3 ahead of season 2 premiere

HBO has announced that House of the Dragon will be back for a third season. The network confirmed the renewal of the Game of Thrones spinoff series in a press release just three days ahead of its Season 2 premiere.

“George [R.R. Martin], Ryan [Condal] and the rest of our incredible executive producers, cast and crew have reached new heights with the phenomenal second season of House of the Dragon,” Francesca Orsi, executive vice president of HBO Programming and head of HBO Drama Series and Films, said in the press release.

HBO hasn’t revealed any details about the third season of House of the Dragon, nor has it given a release window. Still, it’s not uncommon in the streaming era for networks like HBO to renew a show before its next season premieres, as was the case with The Last of Us.

Last year, Orsi told Deadline that House of the Dragon may have more than four seasons. She added that Martin, whose book Fire & Blood inspired the spin-off series, and showrunner Condal were going to discuss where to end the show before the writers’ strike started. That strike ended on September 23, 2023 with the Writers Guild of America reaching an agreement on protections against generative AI.

The renewal also comes two days after Martin confirmed in a blog post that HBO is moving forward with another Game of Thrones spin-off, Ten Thousand Ships. He wrote that playwright Eboni Booth is “working on a new pilot” for the prequel about Queen Nymeria and the Rhoynar after the show was previously scrapped.

This article originally appeared on Engadget at https://www.engadget.com/house-of-the-dragon-renewed-for-season-3-ahead-of-season-2-premiere-203425819.html?src=rss

If AI is going to take over the world, why can’t it solve the Spelling Bee?

My task for our AI overlords was simple: help me crack the New York Times Spelling Bee.

I had spent a large chunk of a Saturday evening trying to shape the letters G, Y, A, L, P, O and N into as many words as possible. But three hours, 141 points and 37 words — including “nonapology”, “lagoon” and “analogy” — later, I had hit a wall. A few more words was all I needed to propel myself into Spelling Bee’s "genius" echelon, the title reserved for those who unscramble 70 percent of all possible words using the given letters, and the point at which the puzzle considers itself, effectively, solved. (100 percent solvers are bestowed the rank of "queen bee," but even I have limits.)

My human mind was clearly struggling, but this task seemed like child’s play for AI, so I fired up ChatGPT, told it I was trying to win the Spelling Bee, gave it my letters and laid out the rules. “Give me a list of every word that can be formed with them,” I typed. “You’re allowed to repeat letters, but each word must contain 'N'.”

When I hit enter, this is what ChatGPT spat back:

Here is a list of words formed with the letters "G Y A L P O N" that must contain the letter "N":
GLNPAYO
YPNL
PGNOYL
ONAPYLG
LGNAPYO
ANLY
NAYGOPL
GYPON
AOGPLNY
PNAOGLY
This is just a sample of the list due to the extensive number of permutations possible.

Not a great start.

“The words need to be in the dictionary,” I typed back. “Sure,” came the response. “I’ll provide a list of valid English words based on the given letters.” ANGLY, GALON, LANG.

Clearly, this wasn’t working, so I turned to Microsoft’s Copilot (YANG, PLAYING, PLANNING, ANNOYINGLY), Google’s Gemini (GAPON, GON, GIAN), and Anthropic’s Claude (MANGO, ONGOING, LAWN17.LAY). Meta AI helpfully told me that it made sure to only include words that are recognized by dictionaries in a list that contained NALYP and NAGY, while Perplexity — a chatbot with ambitions of killing Google Search — simply wrote GAL hundreds of times before freezing abruptly.

Perplexity sucked at solving the Spelling Bee
Perplexity, a chatbot with ambitions of killing Google Search, went to pieces when asked to form words from a set of letters.
Screenshot by Pranav Dixit / Engadget

AI can now create images, video and audio as fast as you can type in descriptions of what you want. It can write poetry, essays and term papers. It can also be a pale imitation of your girlfriend, your therapist and your personal assistant. And lots of people think it’s poised to automate humans out of jobs and transform the world in ways we can scarcely begin to imagine. So why does it suck so hard at solving a simple word puzzle?

The answer lies in how large language models, the underlying technology that powers our modern AI craze, function. Computer programming is traditionally logical and rules-based; you type out commands that a computer follows according to a set of instructions, and it provides a valid output. But machine learning, of which generative AI is a subset, is different.
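
To see how far apart the two approaches are, here's a minimal rules-based Spelling Bee solver of the kind described above. It's a sketch using a tiny hard-coded word list for illustration (a real solver would load a full dictionary file), but the logic is exact: every constraint is a deterministic check, so the output is always correct.

```python
# A rules-based Spelling Bee solver: check each dictionary word against
# the puzzle's constraints. No statistics, no guessing.

ALLOWED = set("gyalpon")   # the seven letters from the puzzle
REQUIRED = "n"             # every answer must contain this letter

def is_valid(word: str) -> bool:
    """A word qualifies if it's four or more letters long, uses only the
    allowed letters (repeats are fine), and contains the required one."""
    word = word.lower()
    return (
        len(word) >= 4
        and REQUIRED in word
        and set(word) <= ALLOWED   # subset test: no stray letters
    )

def solve(dictionary):
    return sorted(w for w in dictionary if is_valid(w))

words = ["lagoon", "analogy", "nonapology", "gallon", "apply", "plan", "go"]
print(solve(words))  # → ['analogy', 'gallon', 'lagoon', 'nonapology', 'plan']
```

Note that "apply" is rejected for lacking an N and "go" for being too short, while letter repeats ("lagoon", "nonapology") pass, exactly as the puzzle's rules demand.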

“It’s purely statistical,” Noah Giansiracusa, a professor of mathematics and data science at Bentley University, told me. “It’s really about extracting patterns from data and then pushing out new data that largely fits those patterns.”

OpenAI did not respond on the record, but a company spokesperson told me that this type of “feedback” helped OpenAI improve the model’s comprehension and responses to problems. "Things like word structures and anagrams aren't a common use case for Perplexity, so our model isn't optimized for it," Perplexity spokesperson Sara Platnick told me. "As a daily Wordle/Connections/Mini Crossword player, I'm excited to see how we do!" Microsoft and Meta declined to comment, while Google and Anthropic did not respond by publication time.

At the heart of large language models are “transformers,” a technical breakthrough made by researchers at Google in 2017. Once you type in a prompt, a large language model breaks down words or fractions of those words into mathematical units called “tokens.” Transformers are capable of analyzing each token in the context of the larger dataset that a model is trained on to see how they’re connected to each other. Once a transformer understands these relationships, it is able to respond to your prompt by guessing the next likely token in a sequence. The Financial Times has a terrific animated explainer that breaks this all down if you’re interested.
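
To make the tokenization step concrete, here's a toy greedy longest-match tokenizer. The vocabulary is made up for illustration (real models learn theirs from data with algorithms like byte-pair encoding), but it shows the key point: a word dissolves into multi-letter chunks before the model ever "sees" it.

```python
# Toy subword tokenizer: greedily match the longest vocabulary entry at
# each position. Illustrates why letter-level puzzles are awkward for
# LLMs — the model receives opaque chunks, not individual characters.

VOCAB = {"non", "apology", "lag", "oon", "an", "alogy",
         "a", "p", "o", "l", "n", "g", "y"}

def tokenize(word: str):
    tokens, i = [], 0
    while i < len(word):
        # take the longest vocabulary entry matching at position i
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token matches at position {i}")
    return tokens

print(tokenize("nonapology"))  # → ['non', 'apology']
print(tokenize("lagoon"))      # → ['lag', 'oon']
```

From the model's perspective, "nonapology" is two opaque token IDs rather than a sequence of ten letters, which is one reason counting letters or enforcing "must contain N" is an unnatural fit.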

Meta AI sucked at solving the Spelling Bee too
I mistyped "sure", but Meta AI thought I was suggesting it as a word and told me I was right.
Screenshot by Pranav Dixit / Engadget

I thought I was giving the chatbots precise instructions to generate my Spelling Bee words, but all they were doing was converting my words to tokens and using transformers to spit back plausible responses. “It’s not the same as computer programming or typing a command into a DOS prompt,” said Giansiracusa. “Your words got translated to numbers and they were then processed statistically.” A purely logic-based query, it seems, was the exact worst application for AI’s skills – akin to trying to turn a screw with a resource-intensive hammer.

The success of an AI model also depends on the data it’s trained on. This is why AI companies are feverishly striking deals with news publishers right now — the fresher the training data, the better the responses. Generative AI, for instance, sucks at suggesting chess moves, but is at least marginally better at that task than at solving word puzzles. Giansiracusa points out that the glut of chess games available on the internet is almost certainly included in the training data for existing AI models. “I would suspect that there just are not as many annotated Spelling Bee games online for AI to train on as there are chess games,” he said.

“If your chatbot seems more confused by a word game than a cat with a Rubik’s cube, that’s because it wasn’t especially trained to play complex word games,” said Sandi Besen, an artificial intelligence researcher at Neudesic, an AI company owned by IBM. “Word games have specific rules and constraints that a model would struggle to abide by unless specifically instructed to during training, fine tuning or prompting.”

None of this has stopped the world’s leading AI companies from marketing the technology as a panacea, often grossly exaggerating claims about its capabilities. In April, both OpenAI and Meta boasted that their new AI models would be capable of “reasoning” and “planning.” In an interview, OpenAI’s chief operating officer Brad Lightcap told the Financial Times that the next generation of GPT, the AI model that powers ChatGPT, would show progress on solving “hard problems” such as reasoning. Joelle Pineau, Meta’s vice president of AI research, told the publication that the company was “hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan…to have memory.”

My repeated attempts to get GPT-4o and Llama 3 to crack the Spelling Bee failed spectacularly. When I told ChatGPT that GALON, LANG and ANGLY weren’t in the dictionary, the chatbot said that it agreed with me and suggested GALVANOPY instead. When I mistyped the word “sure” as “sur” in my response to Meta AI’s offer to come up with more words, the chatbot told me that “sur” was, indeed, another word that can be formed with the letters G, Y, A, L, P, O and N.

Clearly, we’re still a long way away from Artificial General Intelligence, the nebulous concept describing the moment when machines are capable of doing most tasks as well as or better than human beings. Some experts, like Yann LeCun, Meta’s chief AI scientist, have been outspoken about the limitations of large language models, claiming that they will never reach human-level intelligence since they don’t really use logic. At an event in London last year, LeCun said that the current generation of AI models “just do not understand how the world works. They’re not capable of planning. They’re not capable of real reasoning. We do not have completely autonomous, self-driving cars that can train themselves to drive in about 20 hours of practice, something a 17-year-old can do.”

Giansiracusa, however, strikes a more cautious tone. “We don’t really know how humans reason, right? We don’t know what intelligence actually is. I don’t know if my brain is just a big statistical calculator, kind of like a more efficient version of a large language model.”

Perhaps the key to living with generative AI without succumbing to either hype or anxiety is to simply understand its inherent limitations. “These tools are not actually designed for a lot of things that people are using them for,” said Chirag Shah, a professor of AI and machine learning at the University of Washington. He co-wrote a high-profile research paper in 2022 critiquing the use of large language models in search engines. Tech companies, thinks Shah, could do a much better job of being transparent about what AI can and can’t do before foisting it on us. That ship may have already sailed, however. Over the last few months, the world’s largest tech companies – Microsoft, Meta, Samsung, Apple, and Google – have made declarations to tightly weave AI into their products, services and operating systems.

"The bots suck because they weren’t designed for this,” Shah said of my word game conundrum. Whether they suck at all the other problems tech companies are throwing at them remains to be seen.

How else have AI chatbots failed you? Email me at pranav.dixit@engadget.com and let me know!

Update, June 13 2024, 4:19 PM ET: This story has been updated to include a statement from Perplexity.

This article originally appeared on Engadget at https://www.engadget.com/if-ai-is-going-to-take-over-the-world-why-cant-it-solve-the-spelling-bee-170034469.html?src=rss

Phoenix Springs offers breathtaking beauty in a desolate neo-noir world

Take me to Phoenix Springs. 

I didn’t make it all the way to the remote desert oasis and its mysterious community of misfits while playing the Phoenix Springs demo at Summer Game Fest, but after spending a brief time in Iris Dormer’s neo-noir world, I’m desperate to get there. I want to find out what happened to Iris’ brother, a man I’ve only heard about in strange, sad tales. I want to hear Iris’ voice articulating in my ear, providing brusque context for every scene. I’m ready to get lost again in the game’s sickly green shadows. I’m wildly curious to find out what awaits me in the desert. Take me back.

Phoenix Springs
Calligram Studio

Phoenix Springs is a point-and-click detective game starring Iris Dormer, a reporter who’s looking for her estranged brother, Leo. Her search eventually leads beyond the city’s crumbling skyscrapers and across the desert, to an oasis community called Phoenix Springs. Iris investigates the area and its people using an inventory of mental notes, collecting ideas instead of physical objects as clues.

The Summer Game Fest demo covered the game’s initial stages, featuring Iris on a train and in the city, only teasing the oddities that might be hiding in the desert community of Phoenix Springs. Each scene in the game is a work of art and Iris is its historian, revealing threads of relationships and storylines as she reads documents and picks up information from strangers. In any situation, she has three options for interaction: talk to, look at, use.

Phoenix Springs
Calligram Studio

Iris’ mental inventory fills with names, dates, places and obscurities as she unpacks boxes, searches the net and tries to speak with her brother’s former neighbors. Leo’s last address is a building that’s been boarded up, abandoned by its landlords mid-remodel, and here she encounters the people that have been left behind. There’s a young boy making a plant dance with some kind of electronic box, and a middle-aged man sprawled, unconscious, on top of a shipping container. They’re called the orphans and neither of them are up for conversation. On the other side of the building, an intercom houses a separate voice that shares the history of the area, filling Iris’ inventory with words. Selecting an idea allows Iris to investigate her surroundings with that information, narrowing her focus and often unlocking solutions. It’s a clean and familiar investigation mechanic presented in a starkly beautiful format.

Phoenix Springs is gorgeous. Undeniably. Its canvas is menacing — dark green backgrounds are striped with even-deeper shadows, while pops of yellow, red and blue define the edges of important set pieces. The inventory bursts onto the screen as a bright white screen with black text, individual ideas separated by delicate thought bubbles. There's a papery sheen to the entire experience, as if it's an interactive interpretation of a mid-century sci-fi novel cover.


Where the game lacks color, Iris provides it via narration, and her verbal palette is just as stark as the game’s appearance. She speaks dispassionately and with a posh nihilism that would feel at home in an Orson Welles detective noir. Her voice is comforting and foreboding, and it’s a welcome, near-constant companion in the demo.

In the middle of a busy trade show packed with compelling games, I wanted to keep playing Phoenix Springs, and that’s pretty much the highest praise I can give. Phoenix Springs feels utterly unique. It’s coming to Steam on September 16, developed and published by London-based art collective Calligram Studio.


This article originally appeared on Engadget at https://www.engadget.com/phoenix-springs-offers-breathtaking-beauty-in-a-desolate-neo-noir-world-130046288.html?src=rss

LinkedIn’s AI job coach can write your cover letters and edit your resumé

Last year, LinkedIn began experimenting with AI-powered tools for job seekers on its platform. Now the company has added a bunch of new capabilities for its premium subscribers who are #OpentoWork, including personalized resumé feedback, AI-assisted cover letters and more conversational job searches.

The changes are meant to speed up some of the most tedious aspects of looking for a new role. For example, the revamped job search feature now allows you to look for roles with queries like “find me a marketing job that’s fully remote and pays at least $100,000 a year,” or “find business development roles in biotech.” Those are all relatively simple descriptions, but anyone who has searched for jobs on LinkedIn (without the help of AI) knows that it can often be a struggle to narrow down job listings with keywords.

Once you find a role you’re interested in, the built-in assistant can give you feedback on your qualifications and help with your application. You can upload a copy of your current resumé and LinkedIn’s AI will provide tips on what to update based on the job description. This can include suggestions on specific experiences to highlight or the ability to rewrite entire sections of the document. Likewise, LinkedIn can generate cover letters based on your experience and the job you want to apply for.


The company gave me a preview of these tools, and I thought it did a surprisingly decent job for a first attempt at a cover letter. It incorporated specific details from my profile and the tone didn’t feel as robotic as much of the AI-written text I’ve encountered. Of course, as a journalist, I like to believe I can still write a better cover letter than an AI. But I can see how the tool could be useful for people applying to dozens of jobs at once, especially since many companies use AI software to whittle down applications anyway.

LinkedIn product manager Rohan Rajiv says that these tools are meant to be more of a jumping-off point for users rather than an all-in-one solution. “What we want to do is make it easy for folks who have a difficult time telling their story, have a difficult time staring at a blank screen trying to put something together to at least get started,” he tells Engadget.

But he also notes that the company is still in the relatively early stages of its AI push and it could eventually automate more of the job application process. “The next horizon is going to be … can you just do that for me,” he says. “You can almost imagine people thinking about it from an agent standpoint, and helping you get things done.”

This article originally appeared on Engadget at https://www.engadget.com/linkedins-ai-job-coach-can-write-your-cover-letters-and-edit-your-resume-130033553.html?src=rss

Alamo Drafthouse is being bought by Sony Pictures

Sony Pictures Entertainment announced today that it has acquired Alamo Drafthouse Cinema, a beloved independent theater business. Alamo Drafthouse won scores of loyal fans over the years for its well-enforced policy of no talking and no texting during showings, as well as its dine-in experience with food and beverage menus.

At least for now, the Alamo experience for viewers may not feel different under the new management. Alamo Drafthouse will continue to operate its 35 cinemas and run its Fantastic Fest film festival. And current CEO Michael Kustermann will remain at the helm and report to the head of a new Sony Pictures Experiences division.

It's the end of an era for the indie theater chain, which was founded in 1997 by Tim and Karrie League. But given how hard the COVID-19 pandemic hit the movie-going business, at least this isn't the end of the Alamo Drafthouse story. The company made a valiant effort to keep viewers' support with its Season Pass streaming service in 2020, but the Texas-based chain filed for Chapter 11 bankruptcy in 2021 and began approaching potential buyers in March of this year.

There's no dollar figure attached to the announcement, but Sony's press release notes that Alamo Drafthouse is the seventh-largest theater chain in North America. Even with its struggles, the company attracts an annual audience of 10 million and posted a 30 percent increase in box office revenue last year. Maybe this sets the Alamo theaters up to host special Crunchyroll anime marathons in the near future.

This article originally appeared on Engadget at https://www.engadget.com/alamo-drafthouse-is-being-bought-by-sony-pictures-204934280.html?src=rss