Ubisoft's shakeups continue unabated. The creative director of the next Assassin's Creed game, codenamed Hexe, has left the company. The departure of Clint Hocking, a 20-year veteran of the company over two stints, was reportedly announced in a staff meeting this week.
Hocking's resume at Ubisoft included serving as creative director on Splinter Cell: Chaos Theory, Far Cry 2 and Watch Dogs: Legion. The details of why he's leaving the company haven't been reported.
Ubisoft told VGC, which first reported on Hocking's exit, that development on Hexe will continue. Jean Guesdon, one of three new leaders of the Assassin's Creed franchise, will take over as the upcoming title's new creative director. Guesdon had the same role for Assassin's Creed Origins and Black Flag, two of the franchise's most well-received entries.
To say sailing hasn't been smooth of late at Ubisoft would be an understatement. Last year, the company reorganized its corporate structure under a system of "creative houses." The first, Vantage Studios, is partly owned by Tencent and now oversees Assassin's Creed. Then in October, franchise head Marc-Alexis Côté left the company. He later claimed he was "asked to step aside" and is suing his former employer.
But have no fear; some aspects of the company are doing quite well. Take, for example, nepotism. The future is looking bright indeed for a rising company star who is now co-CEO of Vantage Studios. That title belongs to Charlie Guillemot, the son of Ubisoft CEO Yves Guillemot.
This article originally appeared on Engadget at https://www.engadget.com/gaming/the-next-assassins-creed-game-loses-its-creative-director-210119005.html?src=rss
Two stories about the Claude maker Anthropic broke on Tuesday that, when combined, arguably paint a chilling picture. First, US Defense Secretary Pete Hegseth is reportedly pressuring Anthropic to abandon its AI safeguards and give the military unrestrained access to its Claude AI chatbot. The company then chose the same day the Hegseth news broke to drop its centerpiece safety pledge.
On Tuesday, Anthropic said it was modifying its Responsible Scaling Policy (RSP) to lower safety guardrails. Up until now, the company's core pledge has been to stop training new AI models unless specific safety guidelines can be guaranteed in advance. This policy, which set hard tripwires to halt development, was a big part of Anthropic's pitch to businesses and consumers.
“Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not,” Anthropic wrote. Now, its updated policy approaches safety relatively, rather than with strict red lines.
Anthropic's quotes in an interview with Time sound reasonable enough in a vacuum. "We felt that it wouldn't actually help anyone for us to stop training AI models," Jared Kaplan, Anthropic's chief science officer, told Time. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead."
Anthropic CEO Dario Amodei (Photo by David Dee Delgado/Getty Images for The New York Times)
David Dee Delgado via Getty Images
But you could also read those quotes as the latest example of a hot startup’s ethics becoming grayer as its valuation rises. (Remember Google’s old “Don’t be evil” mantra that it later removed from its code of conduct?) The latest versions of Claude have drawn widespread praise, especially in coding. In February, Anthropic raised $30 billion in new investments. It now has a valuation of $380 billion. (Speaking of the competition Kaplan referred to, rival OpenAI is currently valued at over $850 billion.)
In place of Anthropic's previous tripwires, it will implement new "Risk Reports" and "Frontier Safety Roadmaps." These disclosure models are designed to provide transparency to the public in place of those hard lines in the sand.
Anthropic says the change was motivated by a "collective action problem" stemming from the competitive AI landscape and the US's anti-regulatory approach. "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe," the new RSP reads. "The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit."
Defense Secretary Pete Hegseth (Photo by AAron Ontiveroz/The Denver Post)
AAron Ontiveroz via Getty Images
Neither Anthropic's announcement nor the Time exclusive mentions the elephant in the room: the Pentagon's pressure campaign. On Tuesday, Axios reported that Hegseth told Anthropic CEO Dario Amodei that the company has until Friday to give the military unfettered access to its AI model or face penalties. The company has reportedly offered to adapt its usage policies for the Pentagon. However, it wouldn't allow its model to be used for the mass surveillance of Americans or for weapons that fire without human involvement.
If Anthropic doesn't relent, experts say its best bet would be legal action. But will the Pentagon's proposed penalties be enough to scare a profit-driven startup into compliance? Hegseth's threats reportedly include invoking the Defense Production Act, which gives the president authority to direct private companies to prioritize certain contracts in the name of national defense. The military could also sever its contract with Anthropic and designate it as a supply chain risk. That would force other companies working with the Pentagon to certify that Claude isn't included in their workflows.
Claude is the only AI model currently used for the military's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now,” a defense official told Axios. “The problem for these guys is they are that good." Claude was reportedly used in the Maduro raid in Venezuela, a topic Amodei is said to have raised with its partner Palantir.
Time's story about the new RSP included reactions from a nonprofit director focused on AI risks. Chris Painter, policy director at the AI evaluation nonprofit METR, described the changes as both understandable and perhaps an ill omen. "I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps," he said. However, he also raised concerns that the more flexible RSP could lead to a "frog-boiling" effect. In other words, once safety becomes a gray area, a seemingly never-ending series of rationalizations could take the company down the very dark path it once condemned.
Painter said the new RSP shows that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities. This is more evidence that society is not prepared for the potential catastrophic risks posed by AI."
This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-weakens-its-safety-pledge-in-the-wake-of-the-pentagons-pressure-campaign-183436413.html?src=rss
Uber is one step closer to going airborne. On Wednesday, the company previewed its air taxi booking service ahead of an expected launch in Dubai later this year. The inaugural Uber Air program will let travelers book Joby Aviation's electric air taxis through a familiar process in the Uber app.
The experience of booking an air taxi will be much like reserving a four-wheeled Uber. In the app, after entering your destination, Uber Air will appear as an option for eligible routes. The Uber app will book a flight and an Uber Black to pick you up and drop you off at a Joby "vertiport."
The process of booking a flying taxi will be instantly familiar.
Uber
Joby's air taxis, built exclusively for city travel, can accommodate up to four passengers and luggage. (Uber says size and weight guidelines will be announced closer to launch.) The interior is about the size of an SUV and has "comfortable seating" with panoramic windows. They can travel up to 200 mph and have a range of up to 100 miles. Four battery packs and a triple-redundant flight computer are onboard for safety purposes.
The air taxis aren't (yet) autonomous and will each have a human pilot onboard. That would at least suggest high prices. After all, pilots aren't nearly as cheap as Uber's legion of independent-contractor drivers. But the company insists its air taxi rides will somehow cost about the same as an Uber Black trip.
Joby's air taxis have "panoramic" windows with a view of the city below.
Joby
Dubai is only the beginning of the companies’ plans. The US-based Joby says it's in the final stage of FAA type certification and hopes to launch service in New York and Los Angeles. Globally, it's targeting the UK and Japan as well.
As for how realistic a US launch is anytime soon, well, that's up for debate. On one hand, President Trump signed executive orders last year that would create a pilot program to test such aircraft. But safety and cost considerations may require a grounding of expectations.
The aircraft requires a human pilot, at least in these early stages.
Joby
In November, Robert Ditchey, a Los Angeles-based aviation expert and test pilot, told NBC News that he didn't think air taxi service "was ever going to happen" in American cities. "They're dangerous," he warned. "We have had helicopters fail and crash on top of buildings in Los Angeles. We've had helicopters fail at takeoff and landing in airports. They're dangerous not from a fire point of view but in terms of landing on top of people and buildings." In addition, he warned that air taxis can't be developed in sufficient numbers to make them economically viable "unless they are subsidized by a government."
Uber and Joby have partnered since 2019. In 2021, Joby bought the Uber Elevate ride-hailing division, which essentially integrated the companies’ services. Last year, Joby acquired Blade Air Mobility's passenger business, which could open the door to eventually electrifying Blade's routes.
The video below shows one of Joby’s air taxis taking a test flight in Dubai.
This article originally appeared on Engadget at https://www.engadget.com/transportation/uber-previews-its-dubai-air-taxi-service-130000603.html?src=rss
A common theme in online age verification laws is the tension between user privacy and preventing children from accessing harmful or inappropriate content. Now the UK is sending a not-so-subtle message to Reddit on the subject, to the tune of £14.5 million ($19.6 million). The nation's Information Commissioner's Office (ICO) accused the company of misusing children's data and potentially exposing them to inappropriate content.
“Children under 13 had their personal information collected and used in ways they could not understand, consent to or control,” UK Information Commissioner John Edwards wrote in a statement. “That left them potentially exposed to content they should not have seen. This is unacceptable and has resulted in today’s fine.”
In July 2025, Reddit began requiring age verification to access adult content in the UK, in compliance with the Online Safety Act. However, that's only used to block under-18 users from sexually explicit, violent or other mature posts. The platform also prohibits users under 13 from accessing it altogether — and enforcement of that policy is lax. It merely requires users to declare, when signing up, that they're over 13. The ICO (accurately) described the method as "easy to bypass."
In its defense, Reddit told the BBC that it "didn't require users to share information about their identities, regardless of age, because we are deeply committed to their privacy and safety." The company said it would appeal the decision. "The ICO's insistence that we collect more private information on every UK user is counterintuitive and at odds with our strong belief in our users' online privacy and safety," the spokesperson added.
"It's concerning that a company the size of Reddit failed in its legal duty to protect the personal information of UK children," Edwards said. "Companies operating online services likely to be accessed by children have a responsibility to protect those children by ensuring they’re not exposed to risks through the way their data is used. To do this, they need to be confident they know the age of their users and have appropriate, effective age assurance measures in place.”
“Reddit failed to meet these expectations,” he added. “They must do better, and we are continuing to consider the age assurance controls now implemented by the platform.” The ICO also accused Reddit of failing to conduct a data protection impact assessment by January 2025.
The Guardian notes that the £14.5 million fine is the third-largest handed down by the ICO. It trails only a £20 million fine for British Airways involving a data breach and an £18.4 million penalty for Marriott Hotels for exposing over 300 million customer records in a hack.
This article originally appeared on Engadget at https://www.engadget.com/social-media/reddit-fined-196-million-over-age-verification-checks-in-the-uk-173705048.html?src=rss
YouTube's "Ask" button is making its way to the living room. The Gemini-powered feature is now rolling out as an experiment on smart TVs, gaming consoles and streaming devices. 9to5Google first spotted a Google support page announcing the change.
Like on mobile devices and desktop, the feature is essentially a Gemini chatbot trained on each video's content. Selecting that "Ask" button will bring up a series of canned prompts related to the content. Alternatively, you can use your microphone to ask questions about it in your own words.
The "Ask about this video" feature on desktop
YouTube
Google says your TV remote's microphone button (if it has one) will also activate the “Ask” feature. The company listed sample questions in its announcement, such as "what ingredients are they using for this recipe?" and "what's the story behind this song's lyrics?"
The conversational AI tool is only launching for "a small group of users" at first. Google promises that it will "keep everyone up to speed on any future expansions."
This article originally appeared on Engadget at https://www.engadget.com/ai/youtube-is-bringing-the-gemini-powered-ask-button-to-tvs-173900295.html?src=rss
Nevada is taking action against the rapidly growing Wild West of prediction markets. The state's gambling regulators and attorney general sued Kalshi on Tuesday. They accuse the company of bypassing Nevada law by operating a sports gambling market without proper licenses. In addition, they say Kalshi provides services to individuals under 21, which violates state law.
The lawsuit follows a federal appeals court’s rejection of Kalshi's request to prevent the state from pursuing legal action. And it comes a day after the Trump administration claimed that only the federal government has the authority to regulate the industry.
Prediction markets, which allow users to bet on events such as sports, political outcomes and wars, have exploded in popularity. Business Insider reports that Kalshi did 27 times as much business during this year's Super Bowl as last year's. Some of that growth has come at the expense of regulated gambling; Nevada's gambling operations did less business during this year's game.
"Kalshi has continued to dramatically expand its business, rather than attempting to maintain any kind of status quo," Nevada regulators wrote in a letter this month.
Kalshi and rival Polymarket insist that their offerings are "event contracts" and should be regulated as financial investments rather than gambling. The Trump administration, rife with conflicts of interest in this area, agrees. The Chair of the Commodity Futures Trading Commission (CFTC) filed an amicus brief on Tuesday, claiming that the agency alone has the authority to regulate prediction markets.
"The CFTC will no longer sit idly by while overzealous state governments undermine the agency's exclusive jurisdiction over these markets by seeking to establish statewide prohibitions on these exciting products," CFTC Chair Michael Selig wrote in a Wall Street Journal op-ed.
Donald Trump Jr. (Photo by Olivier Touron / AFP via Getty Images)
OLIVIER TOURON via Getty Images
Not coincidentally, prediction markets are a growing part of the Trump family business. Donald Trump Jr. is a paid adviser to Kalshi. He's also an investor in and unpaid adviser to Polymarket. In January, his family's social media business said it would launch its own prediction market platform.
Prediction markets have the potential to be a hotbed of insider trading. According to blockchain analyst DeFi Oasis, fewer than 0.04 percent of Polymarket accounts have captured over 70 percent of the platform's total profits, totaling over $3.7 billion.
Last month, The Guardian highlighted the case of a Polymarket user who bet tens of thousands of dollars on "yes" to the question, "Israel's military action against Iran by Friday?" Within 24 hours, Israel bombed Iran, leaving hundreds dead. The user made $128,000 on that bet. The Guardian traced the blockchain data to a wallet associated with an X account. Its location on the social platform was set to Beit Ha'shita, a northern Israeli settlement. The user later transferred their bets to two other accounts, apparently to avoid detection. In January, the accounts held 10 live bets on Israeli military strategy.
Another anonymous user made over $400,000 by betting that Nicolás Maduro would be ousted by the end of January. The bets were placed in the hours and days leading up to the US strikes on Venezuela. In another case, eight jointly owned accounts collectively generated over $161,000 by betting on the country's María Corina Machado Parisca winning the Nobel Peace Prize. The accounts' handles used names such as "fmaduro," "madurowilllose," "striketheboats" and "trumpdeservesit".
This article originally appeared on Engadget at https://www.engadget.com/big-tech/nevada-sues-kalshi-for-operating-a-sports-gambling-market-without-a-license-175721982.html?src=rss
Web designers of the world: The Automattic-owned WordPress.com is further embracing AI on its platform. On Tuesday, it expanded its one-off AI site builder into a persistent AI assistant for editing and media creation.
In the site editor, the AI assistant can help with site-wide structure and design choices. For example, you can ask the chatbot to "give me more font options that feel clean and professional" or "change my site colors to be brighter and bolder." It also includes image generation and writing assistance, such as "rewrite this to sound more confident." (Who needs learning when you have automation!)
The assistant can also now be integrated into your site's media library. It can generate new images or make prompted edits to your existing ones. Examples include "update this image to be black and white" or "replace this stack of pancakes with waffles." (Just don't fake that if your business sells breakfast food, okay?) WordPress says the assistant understands your website's look and brand and can tailor the media accordingly.
WordPress also added the AI assistant to the platform's team chat, Block Notes. You can summon the chatbot from within your team chat threads.
The tool is available for WordPress.com's Business or Commerce plans. (Or, if you made your site using the AI builder, it's enabled by default, no matter which plan you use.) The feature works best with the platform's block themes; it's much more limited with classic ones. You'll find the toggle to activate the AI assistant in your site settings under the "AI tools" section.
This article originally appeared on Engadget at https://www.engadget.com/ai/wordpress-adds-an-ai-assistant-174719676.html?src=rss
The head of the antitrust division is out at the US Department of Justice. Gail Slater, a former JD Vance adviser and Fox Corp VP, reportedly clashed with Attorney General Pam Bondi. Their longstanding feud is said to have centered around Slater's skepticism of corporate mergers.
"It is with great sadness and abiding hope that I leave my role as [Assistant Attorney General] for Antitrust today," Slater posted on X. "It was indeed the honor of a lifetime to serve in this role."
Although Slater technically resigned, The Guardian reports that she was forced out. The fallout was said to be over her differences with Bondi (who just yesterday yelled, insulted and deflected her way through a hearing over the DOJ's stonewalling of the Epstein files). In recent weeks, Bondi reportedly reiterated to the White House that Slater's views on the antitrust division's direction made the pair's relationship irreconcilable.
Attorney General Pam Bondi (Photo by Win McNamee/Getty Images)
Win McNamee via Getty Images
The tensions reportedly began simmering last summer, when Slater sought to block the merger between Hewlett-Packard Enterprise and Juniper Networks. She opposed the deal out of concerns that it would create a duopoly in cloud computing and wireless networking. In addition, Slater reportedly told Bondi that US intelligence hadn't raised any concerns about blocking the merger. However, CIA Director John Ratcliffe later claimed that blocking it would pose national security risks because it could lead to the loss of business to China. The Trump administration's merger-friendly DOJ ultimately approved the deal.
Alongside Bondi, Slater was overseeing the DOJ's review of Netflix's proposed acquisition of Warner Bros. Discovery. In December, Trump said he would be involved in the regulatory review. That followed intense lobbying by Netflix and Paramount, the latter of which launched a hostile takeover bid. Earlier this month, The Wall Street Journal reported that the department was investigating whether Netflix was involved in anticompetitive practices during the process.
Slater's ousting also comes weeks ahead of the DOJ's antitrust trial against Ticketmaster owner Live Nation. The department's lawsuit was filed during the Biden administration. It claims that Live Nation is operating as a monopoly, harming competition, fans, industry promoters and artists.
This article originally appeared on Engadget at https://www.engadget.com/big-tech/antitrust-head-overseeing-netflix-warner-merger-resigns-192854114.html?src=rss
Anthropic is upgrading Claude's free tier, apparently to capitalize on OpenAI's planned integration of ads into ChatGPT. On Wednesday, Anthropic said free Claude users can now create files, connect to external services, use skills and more.
Anthropic added the ability for paid users to create files in September. Starting today, free users of the chatbot can also create and edit Excel spreadsheets, PowerPoint presentations, Word docs and PDFs. Claude's file creation abilities are powered by Sonnet 4.5.
Free users can now create and edit Excel spreadsheets, PowerPoint presentations, Word docs, and PDFs.
Anthropic
Meanwhile, Connectors allow free users to link Claude to third-party services. There's a long list of available ones, including Canva, Slack, Notion, Zapier and PayPal.
Skills, on the other hand, let you teach Claude to "complete specific tasks in repeatable ways." In short, the chatbot loads folders of instructions, scripts and other resources when performing relevant tasks. Other upgrades to the free tier include longer conversations, interactive responses and improved voice and image search.
Claude's free-tier upgrades appear to be a direct response to ChatGPT's planned introduction of ads for its free users. Anthropic's announcement today ended with the tag line, "No ads in sight." This follows the company's promise last week that Claude will remain ad-free. Anthropic even poked fun at OpenAI's cash-seeking move in a Super Bowl ad (below), which also took a swipe at GPT-4o's penchant for kissing ass.
This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-beefs-up-claudes-free-tier-as-openai-prepares-to-stuff-ads-into-chatgpts-194100939.html?src=rss
Hubble may no longer be the gold standard, but it can still capture some impressive images. The telescope's latest snapshot is our clearest view yet of the Egg Nebula. Roughly 3,000 light-years away from Earth, the nebula's name is derived from its dense layer of gas and dust cloaking a central star.
The new image shows the nebula's four beams of starlight (from that central star) escaping from its gas-and-dust "shell." On either side of the disc-like cloud are fast-moving outflows of hot molecular hydrogen. The orange highlights in this image indicate the glow of infrared light.
As the beams of starlight stretch out from the center, they illuminate concentric rings of gas. The gas’s ripple-like pattern suggests it was created by successive bursts from the star, with a little more ejecting every few hundred years.
Hubble image of the Egg Nebula. A disc of gas and dust surrounded by beams of light and concentric rings of dust.
ESA / Hubble & NASA, B. Balick (University of Washington)
The Egg Nebula, found in the constellation Cygnus, was first discovered in 1975. Nebulae in this preplanetary phase are rare finds. Since the stage only lasts a few thousand years (and because they're often faint), they're relatively difficult for astronomers to spot. By comparing this new image with previous Hubble snapshots of the Egg Nebula, astronomers can learn more about it and shed more light on its processes. But for the rest of us, it makes for some pretty sweet eye candy, right?
This article originally appeared on Engadget at https://www.engadget.com/science/space/hubble-showcases-the-egg-nebula-in-all-its-dying-star-glory-174239769.html?src=rss