Artemis II commander shares a remarkable video of Earth vanishing behind the Moon

We’ve seen some astonishing photos of an Earthset — the Earth setting behind the Moon — from the Artemis II crew’s history-making trip around our planet’s closest neighbor. Now, Reid Wiseman, the mission’s commander, has shared a remarkable video of that same phenomenon.

While mission specialist Christina Koch was using a Nikon camera to snap stunning still images of the Earthset, Wiseman used an iPhone 17 Pro Max to film the moment. “I could barely see the Moon through the docking hatch window but the iPhone was the perfect size to catch the view… This is uncropped, uncut with 8x zoom which is quite comparable to the view of the human eye,” he wrote on X.

This was the first time in 54 years, since the Apollo 17 mission, that human eyes had witnessed an Earthset. The Artemis II crew flew more than 5,000 miles beyond the Moon as they traveled more than a quarter of a million miles from Earth, the farthest any humans have ever been from terra firma.

I, like many people, overuse the word “awesome.” It should only really be used when something actually inspires awe. This video absolutely meets that mark. It’s genuinely awesome.

This article originally appeared on Engadget at https://www.engadget.com/science/space/artemis-ii-commander-shares-a-remarkable-video-of-earth-vanishing-behind-the-moon-152036403.html?src=rss

Sam Altman’s ‘human verification’ company thinks its eye-scanning orbs could solve ticket scalping

Sam Altman's cryptocurrency-turned-identity-verification startup Tools for Humanity is offering a new set of perks to people who scan their eyes at one of the company's orbs. Among them is a new tool called Concert Kit that could help bands and artists fight back against ticket-scalping bots.

The new feature relies on the revamped World ID, the orb-based verification system that scans users' eyeballs and faces to create a "proof of human" signature that lives on users' mobile devices. "It's basically like a little human passport for the internet that lets you prove on apps and websites that you are a real and unique human without revealing anything about yourself," Tools for Humanity Chief Product Officer Tiago Sada tells Engadget.

Now, as more apps and services are starting to support World ID, that "human passport" can unlock some new abilities. Coupled with Concert Kit, it allows artists to designate a specific pool of tickets for "verified" humans only. The concept is a bit like how pre-sales currently work, with artists (or their teams) setting aside a specific number of tickets for people who have set up a World ID. Those folks can then use their World ID to get ticket codes for Ticketmaster, Eventbrite, AXS or other major ticketing platforms. 

Because World ID is limited to actual, "verified" humans, the system won't be susceptible to the same tactics that have enabled bots to ruin the ticket-buying process for so many, Tools for Humanity says. Artists are also in control of what level of verification they want to require from their fans. (The new World ID app will also allow people to set up an account with a selfie check if they don't have ready access to an orb.)

Just how much of a dent Concert Kit will be able to make in the massive scalping bot problem that plagues the concert industry is less clear. So far, Bruno Mars is slated to use the solution on his upcoming world tour — no word on just how many of his tickets will be reserved for World ID-verified humans, though — and Concert Kit is available to other artists starting today.

Concert Kit is one of several new integrations and updates to World ID that Tools for Humanity announced at an event in San Francisco Friday. Tinder, which earlier this year started testing World ID as an age verification solution in Japan, will be rolling out support worldwide. In the US, Tinder's integration won't be for age verification, though. Instead, it will indicate whether there is an actual "verified" human behind a given profile.

Tinder profiles that verify with World ID will get a badge as an extra signal of authenticity.
Tools for Humanity

On the enterprise side, Zoom and DocuSign are also adding support for World ID to help businesses verify that there is an actual person (and not a deepfake or bot) joining their video calls or signing important documents. Tools for Humanity is also introducing a standalone app for World ID that separates its identity verification tools from its existing crypto wallet app.

The updates are Tools for Humanity's latest attempt to make its orb-based verification system, which has been widely mocked, more mainstream and perhaps a little less dystopian. (Elsewhere, orbs have begun appearing in some new places, like a San Francisco Gap.)

For its part, Tools for Humanity seems aware that a lot of people aren't ready to scan their faces at a bunch of orbs controlled by Altman just to "prove" they are human. I asked Sada what he would say to people who think the company is solving the wrong problem: that it should really be up to ticketing platforms, dating apps and other services to strengthen their security and bot-fighting tools, rather than relying on users to "prove" their humanness.

He said it was a "completely understandable question" and compared it to some people's initial discomfort with things like Apple's TouchID or FaceID. "Not everyone has to do it upfront, and that's important," he said. "It's optional. If you want to have a World ID, you get access to that enhanced experience."

This article originally appeared on Engadget at https://www.engadget.com/ai/sam-altmans-human-verification-company-thinks-its-eye-scanning-orbs-could-solve-ticket-scalping-171500555.html?src=rss

Panic says the Playdate Catalog won’t accept games made with generative AI

Panic, the company behind the tiny and excellent Playdate console, is taking a stand on generative AI. The company has published an AI disclosure that says as of this month, the Playdate Catalog “will no longer accept titles that use ‘Generative AI’ for art, audio, music, text, or dialog.” Panic does allow developers to use AI assistance for coding, but says that “we will flag any title as such and specify the extent that it was used (for example, ‘Lua debugging’) so the customer can decide whether to support it or not.”

This comes a day after Panic announced that Playdate season three was happening and would arrive later this year. For those who don’t recall, the Playdate includes a “season” of games when you buy it: 24 titles in total, with two revealed every week. Season two came out last year with 12 games, but, as Game Developer notes, one of those games used generative AI for writing and coding. On Bluesky, someone asked Panic if it would disclose which games in season three used AI, and the company confirmed that it was a requirement for season three that developers not use AI for art, music, writing or coding.

Specifically, Panic says you can’t use large language models like ChatGPT or Google Gemini, AI image generators like Stable Diffusion or audio generators like MuseNet and Suno. Previously approved games with generative AI will be allowed to stay on the Catalog with a disclosure that indicates exactly what AI was used for. The company says these guidelines are “under constant discussion” and subject to change at any time.

I recall seeing AI disclosures on games in the Playdate Catalog in the past, but it makes sense to be up-front and clear on exactly what Panic allows and what it will reject. That said, it’s fairly easy to sideload games onto a Playdate, so anyone who wants to use generative AI to make a game isn’t entirely out of luck — though distribution and discovery for Playdate owners will obviously be harder.

This article originally appeared on Engadget at https://www.engadget.com/gaming/panic-says-the-playdate-catalog-wont-accept-games-made-with-generative-ai-160615022.html?src=rss

Anthropic now has a design assistant too

In hindsight, I suppose it was only a matter of time before Anthropic, having made Claude capable of generating charts and diagrams, began offering a more robust image editor. Now, a little more than a month after that release, Anthropic has announced Claude Design, a new research preview that allows subscribers to use Claude to generate designs, prototypes, slides and more.

"Claude Design gives designers room to explore widely and everyone else a way to produce visual work," Anthropic says of its newest product. As with its previous forays into image generation, the company isn't calling this, well, an image generator. Instead, Anthropic describes Opus 4.7, the system powering the app, as its most capable vision model to date. In other words, you won't be using Claude Design to whip up a picture of a cat in space eating a lasagna.  

As you might expect, every project in Claude Design starts with a prompt. From there, Anthropic notes users can refine Claude's outputs through conversation, inline comments and direct edits. Like Adobe's recently announced AI assistant, Claude will also generate custom sliders that correspond to specific elements in a design, which the user can push and pull to modify those elements. For instance, in the screenshot below, you can see how Claude has tweaked the interface to allow the user to adjust the glow and density of arcs it used to illustrate a connected network.

Claude Design will generate custom sliders you can use to adjust specific visual elements.
Anthropic

Anthropic has also built an onboarding process that allows Claude to build an internal visual language after reading your organization's codebase and existing design documents. "Every project after that uses your colors, typography, and comments automatically," according to the company. Outside of text prompts, there's also support for image and document uploads, and Anthropic has even included a web capture tool so enterprise customers can snapshot elements from their company's website. There's also built-in sharing, and you can export a design directly to Claude Code. In the coming weeks, Anthropic has promised to make it easier to build integrations with its new app. 

Claude Design arrives in the same week that both Adobe and Canva released their own visual AI assistants. If Anthropic is preparing to eat Canva’s lunch, it's doing so in a strange way given that you can export your Claude Design projects to Canva. If you want to try the new app for yourself, it's available as part of Anthropic's Pro, Max, Team and Enterprise subscriptions, with use counting against your usage limits.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-now-has-a-design-assistant-too-150000903.html?src=rss

Perplexity brings its Personal Computer AI assistant to Mac

Perplexity has just released Personal Computer. The software, which is available starting today for Mac, builds on the multi-model orchestration capabilities the company debuted with Perplexity Computer at the end of February. Like Claude Cowork (and, as of today, OpenAI Codex too), it's a suite of computer use agents that can work with your files, apps, connectors and the web to complete complex and "even continuous workflows." 

Perplexity suggests a few different use cases for Personal Computer, starting with the obvious. “You can ask Personal Computer to read your to-do list,” the company states. “In fact, you can ask it to DO your to-do list." It explains that you can open the Notes app on your Mac, ask Personal Computer for help and the system will reason about how best to assist you. In the process of tackling that task, it can work across all your files, as well as apps like Apple Messages. When needed, it will also employ multiple agents to complete a request. As Anthropic did with Claude Cowork, Perplexity says you can use its software to organize messy folders so files have sensible names and there's an easy-to-understand structure to everything.

You can prompt Personal Computer with your voice, and you can even initiate and manage tasks from your phone. Perplexity says the app creates files in a secure sandbox, and any actions it takes are auditable and reversible. "A system that acts on your behalf needs to be useful and legible. It should feel like a team you manage, not a rogue employee with keys to your most important data," the company said.   

Personal Computer for Mac is available starting today, beginning with Max subscribers. Perplexity said it would bring the app to its other users soon, prioritizing those who joined the waitlist for the experience. 

This article originally appeared on Engadget at https://www.engadget.com/ai/perplexity-brings-its-personal-computer-ai-assistant-to-mac-202045969.html?src=rss

Blackmagic Camera for iOS now has a companion Watch app

Blackmagic Camera is one of the more powerful third-party smartphone camera apps available, and it's now even more useful for solo creators. Blackmagic Design just announced that version 3.3 of the iOS app supports Apple Watch, letting you control the app and monitor video remotely from your wrist. It also includes ATEM camera control so you can use your iPhone as a live studio camera.

With the new Camera Apple Watch companion app, you can remotely control and monitor your iPhone from anywhere within Wi-Fi range. It lets you start and stop recording, control zoom and adjust settings like frame rate, shutter speed (angle), white balance and ISO with a tap. You can also see a live view of your video for framing, though a Watch screen is probably a bit too small to accurately check focus.

The Watch app will benefit solo creators who want to mount their iPhone on a tripod to record standup or vlogging activities. To set it up, you install the Watch app through your iPhone and it will automatically connect and sync to your device. 

Blackmagic Camera 3.3 iOS app
Use your iPhone as a broadcast camera? Sure, why not
Blackmagic Design

The other key feature is iPhone control from Blackmagic's ATEM Mini switcher used by streamers and broadcasters. To use it, you need the $420 Blackmagic Camera ProDock that gives your iPhone 17 Pro or iPhone 17 Pro Max an HDMI output, timecode, USB-C and other ports. Blackmagic Camera now lets you connect a single HDMI cable from the ProDock to an ATEM Mini switcher, then adjust settings, trigger recording, focus and zoom. It also offers a DaVinci primary color corrector so you can match and create digital film looks during live production. 

Finally, Camera now supports Blackmagic's "Focus and Zoom Demand" controls (a knob and handle) designed for broadcast cameras. When those controls and an iPhone 17 Pro/Pro Max are connected via USB-C to a ProDock as shown above, you can zoom and focus Camera app video without taking your hands off the tripod handles. Together with the ATEM feature, it lets you use an iPhone as a full broadcast camera, which looks slightly weird but is pretty cool.

On top of those features, Blackmagic Design also added ProRes RAW stabilization and general bug fixes and improvements. Blackmagic Camera for iOS 3.3 is available now as a free download from the Apple App Store.

This article originally appeared on Engadget at https://www.engadget.com/apps/blackmagic-camera-for-ios-now-has-a-companion-watch-app-194529980.html?src=rss

Meta is giving Threads on web a redesign that finally adds direct messages

Meta is starting to test a long-overdue facelift for Threads on web. The company's head of Threads, Connor Hayes, showed off a new look for the web version of Threads that finally adds direct messaging and makes it easier to navigate between multiple feeds.

The new layout adds a bunch of new shortcuts to the site's left rail, including saved posts, insights, activity, and the ability to move between different feeds. Those features have all been accessible on the web before, but many were hard to find. For example, currently the only way to get to "insights" is to navigate to your own profile or save it as a "pinned" column. Most importantly, though, the update finally adds the Threads inbox, which has not been available to web users even though the feature came to the app last June.

It's not clear when the new look will roll out, but Hayes said Meta has already started to test it and that the company will "be investing more here going forward." The last time the Threads website got a major update was last April, which added some basic functionality. But since then, Meta has focused much of its efforts on the Threads app, rather than the website. Some newer features, like disappearing "ghost posts," can be viewed on the web but can only be created in the app.

Speaking of the Threads app, the web updates come one day after Hayes previewed some tweaks to how replies look on mobile. With the change, replies under a post will be indented slightly to make it easier to follow conversations. That change is rolling out now on iOS and currently "testing" on Android. 

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-giving-threads-on-web-a-redesign-that-finally-adds-direct-messages-192903284.html?src=rss

OpenAI’s latest Codex update builds the groundwork for its upcoming super app

Last month, following reporting from The Wall Street Journal, OpenAI confirmed it was working on a desktop super app that would combine ChatGPT, its Codex coding agent and Atlas web browser into one cohesive experience. OpenAI is not releasing that application today. Instead, it's pushing out a major update to Codex that significantly expands what that software can do. However, the new release offers a glimpse of what OpenAI hopes to build with its latest effort.  

"We're building the super app out in the open," said Thibault Sottiaux, the head of Codex, during a press briefing held by OpenAI. "This release is about developers. In the future, we will broaden it up to a wider audience." Until then, the latest version of Codex offers developers multi-purpose AI agents that can work across a "larger surface area," while being more proactive. In practice, that translates to a host of new capabilities, starting with computer use. 

The agents inside of Codex can interact with other apps on your PC. When prompting one of OpenAI's models, you can name a specific program or let it determine the best application for the job. Computer use is available in competing apps like Claude Cowork, but OpenAI believes Codex's edge in that department is the "secret sauce" it built to let an agent run an app without bogging down your entire system, so the two of you can work in tandem. At the same time, OpenAI is releasing 111 new plugins for Codex that combine skills, app integrations and model context protocol server connections to give Codex more ways to gather context and use the tools developers depend on for their work.

The company has also added a built-in browser, with a commenting system that allows you to prompt Codex to make tweaks to specific parts of a webpage or web app you're building. In the demo OpenAI showed, one member of the Codex team used this tool to instruct Codex to change the margins on a graph so that the y-axis wasn't cut off. Complementing this is built-in image generation. Codex can use gpt-image-1.5 to create product concepts, mockups, frontend designs and even assets for simple games. It also allows Codex to use screenshots to verify it's on the right track with a user request.

With today's update, OpenAI is also previewing a pair of memory features. The first allows Codex to recall context from previous tasks to inform how it goes about future prompts. According to OpenAI, with time, this will allow Codex to complete requests faster and to a higher standard. The app will also use the context it's gathered to suggest proactive actions. For example, at the start of your day, it might suggest you respond to a comment a coworker left on a Google Doc draft you wrote. 

If you want to try the updated Codex for yourself, OpenAI is starting to roll out the new version to desktop app users who are logged in with their ChatGPT account. Computer use is available to macOS users first, with availability for people in the EU and UK to follow soon. Similarly, Brits and Europeans will need to wait to try the memory features OpenAI has built into Codex.  

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-latest-codex-update-builds-the-groundwork-for-its-upcoming-super-app-170000019.html?src=rss

Gemini can now draw on your Google data to personalize the images it generates

Your Google Photos library could soon influence the kind of images you can generate with Gemini. After letting users personalize the AI assistant's responses with data from Gmail, Search and YouTube, Google says it's bringing that same "Personal Intelligence" to Nano Banana 2 to make it easier for users to create personalized images with the AI model.

The goal is to have the data affiliated with your Google account — your YouTube history, emails, Google Photos, etc. — provide context to Nano Banana 2 so you don't have to. Rather than prompting Gemini's image generation model with information about you or photos of your belongings, a direction to "create a picture of my desert island essentials" should produce an image that includes the things you care about without any extra context. Similarly, if you use labels in Google Photos to identify people or pets, you can tell Gemini to "create a hand-drawn illustration of mom," and it should be able to use those labels to find the right reference photo and create an image of the right person.

A gif of someone generating an image with Gemini using Personal Intelligence.
Google

If Gemini creates images that don't look right, you can still send a follow-up prompt to refine the result, or select a new source image from Google Photos with the "+" button. Google says you can also click the "Sources" button to view what images the AI referenced in the first place, or ask it directly for the attribution and sources used for a specific image.

Personalized user data is one of the unique advantages Google has over companies offering competing AI assistants, so expanding Personal Intelligence to an already popular feature like image generation is a natural way to build on that lead. For now, this more personalized version of Nano Banana 2 is available in the Gemini app for eligible AI Pro and AI Ultra subscribers. Google says the feature will come to Gemini in Chrome and other users "soon."

This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-can-now-draw-on-your-google-data-to-personalize-the-images-it-generates-160000269.html?src=rss

Canva starts previewing a more powerful version of its AI assistant

Adobe isn't the only company releasing a new AI assistant this week. Ahead of its Create event in Los Angeles today, Canva announced Canva AI 2.0. Building on its existing AI assistant, the company is billing the release as its most significant update since the platform first launched in 2013, and the culmination of years of investment to build its own foundational design models. 

As you might imagine, it all starts with a conversational interface that allows you to describe an idea or goal and the system will start generating a design to match. Under the hood, there's a new orchestration layer that allows the model to use all of Canva's disparate tools to accomplish complex, multi-step tasks. For instance, the company suggests you could use Canva AI to create a multi-channel advertising campaign, and the software will generate everything you need to get that off the ground. 

For brands, Canva AI 2.0 can adapt to their design needs.
Canva

If edits are required, the company says Canva AI avoids one of the pitfalls of many other image generation models. It's possible to edit every visual element the system generates, just as if it had been created with a traditional image editor. As a result, you can do things like swap out images and tweak fonts without affecting any other part of a design. To bring everything together, Canva has built persistent memory into the tool. The more you use Canva AI, the better the system will get at applying your personal taste and style to future generations. According to the company, it also has a context window that is long enough to maintain coherence until you arrive at a final design.

Alongside those enhancements, Canva is adding support for new workflows that expand what you can do with its software, starting with connections that allow its models to pull data from other apps, including Notion, Slack, Zoom, Gmail, Google Calendar and more. Users can also schedule tasks for Canva AI to complete in the background, and the company has even baked deep research capabilities into the tool.

The coding function Canva previously offered has been upgraded to include support for HTML imports, allowing users to bring any HTML file or AI-generated experience into Canva's visual editor to tweak the design of it without breaking things. For brands, the company is also offering a tool that can process their visual identity and apply it to new and existing designs.   

Canva's updated coding agent now supports HTML imports.
Canva

As a casual observer, it might seem like Canva is trend chasing, but Danny Wu, the company's head of AI, argues the new AI tools represent a natural evolution for Canva. "This is something we've been dreaming of and working towards for quite a while," he tells Engadget. "Even before ChatGPT was a thing, we were thinking, 'what if we don't have a template that matches your needs?' … So I wouldn't describe this as a pivot or shift, we've been wanting to offer these kinds of capabilities all along as part of our mission to make design simple."

If you want to give Canva's new tools a try, Canva AI 2.0 is available as a research preview starting today. The first 1 million people to visit the Canva website will get access, with availability gradually expanding to more users over the coming weeks. As before, access to Canva’s AI features remains included in the company’s free offering, though it’s also introducing a new AI Pass add-on that significantly increases rate limits for users.

This article originally appeared on Engadget at https://www.engadget.com/ai/canva-starts-previewing-a-more-powerful-version-of-its-ai-assistant-130000966.html?src=rss