The Morning After: ‘Nanosphere’ paint could reduce a plane’s CO2 emissions

Every gram counts in commercial flight. Material scientists from Kobe University have discovered “nanospheres,” near-invisible silicon crystals that reflect light with unusual efficiency thanks to very strong scattering, according to research published in the journal ACS Applied Nano Materials. The result could mean covering an aircraft in vibrant color while adding only 10 percent of the weight of conventional paint for the same effect.

Minoru and Hiroshi’s discovery focuses on structural rather than pigment color to exhibit and maintain hues. Pigments absorb some wavelengths and reflect the ones the human eye picks up, while structural colors are intense and bright because of how light interacts with micro- and nanostructures. While the headline commercial benefits are for planes, the paint could have many more uses simply for its brightness.

— Mat Smith

The biggest stories you might have missed

The Odysseus has become the first US spacecraft to land on the moon in 50 years

The 8Bitdo Ultimate C controller is on sale for $25 today only

Google’s sign-in and sign-up pages have a new look

Sony is working on official PC support for the PS VR2

You can get these reports delivered daily direct to your inbox. Subscribe right here!

Final Fantasy 7 Rebirth review

The open-world tour.

Final Fantasy 7 Rebirth takes the characters and world reintroduced with Remake and does a better job of scaling it all up. Instead of being confined to a single metropolis, Midgar, this time it’s a world tour. There’s also an expanded roster of playable characters, almost double Remake’s total, each once again with a unique play style. But does Aerith survive?

Continue reading.

Xiaomi 14 Ultra combines a 1-inch camera sensor with 4 AI imaging models

It also supports satellite calling and texting.

The Xiaomi 14 Ultra is the latest Leica-branded smartphone, featuring a second-gen one-inch camera sensor. Xiaomi is finally catching up with the competition by picking up Sony’s newest mobile camera sensor, the LYT-900. The Xiaomi 14 Ultra has a slight edge over rival phones with the same sensor: its main camera’s variable aperture opens up to f/1.63, beating the Oppo Find X7 Ultra’s f/1.8 — on paper, at least.

Continue reading.

Framework’s new sub-$500 modular laptop has no RAM, storage or OS

Pick the parts you want and install them yourself.

Framework is selling its cheapest modular laptop yet. It has dropped the price of its B-stock Factory Seconds systems (which are built from a mix of excess parts and new components) and is now offering a Framework Laptop 13 barebones configuration for under $500 for the very first time. The 13-inch machine comes with an 11th-gen Intel Core i7 processor with Iris Xe graphics, so the CPU should be sufficient for most basic tasks and some moderate gaming. However, you’ll need to add RAM, storage, a power supply, an operating system and (probably) even a Wi-Fi card.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-nanosphere-paint-could-reduce-a-planes-co2-emissions-121433976.html?src=rss

Google Pay app is shutting down in the US later this year

Google Pay was largely replaced by Google Wallet back in 2022, but it has still been operating in several countries, including the US. Now, the search giant has announced that the standalone Pay app will be discontinued stateside on June 4, 2024 in a push to simplify its payment methods. After that, it will only be available in Singapore and India due to the "unique needs in those countries," Google wrote in a blog post.

As part of the deprecation, Google will be removing peer-to-peer (P2P) payments, balance management and the ability to "find offers and deals." For the latter, it recommends using the new deals destination on Search. Users will still be able to transfer their Google Pay (GPay) balance to a bank account after June 4, 2024 using the Google Pay website.

Google Wallet has now largely replaced GPay, with five times as many users in 180 countries, the company said. That's because it can handle more than just payments — on top of credit and debit cards, it stores transport passes, state IDs, driver's licenses, virtual car keys and more. Google Pay, the service, will still be available through Google Wallet.

Google's payment system has been a mess over the years. It started off as Google Wallet, which was launched in 2011. At the time, it was a tap-to-pay system that came out years ahead of Apple Pay (2014), supported by Mastercard and retailers like Macy's. 

Meanwhile, Android Pay came out in 2015, then it was integrated with Google Wallet in 2018 and rebranded as Google Pay. In addition, the company originally had a Google Wallet card (killed in 2016) that was effectively a prepaid debit card usable at any retailer that accepted Mastercard. Now everything is back under the Google Wallet umbrella — unless the company changes its mind again.

Correction, February 23, 2024, 10:30AM ET: This post has been updated to clarify that only the standalone Google Pay app is shutting down in the US, not the actual service. It will still be available through Google Wallet moving forward. 

This article originally appeared on Engadget at https://www.engadget.com/google-pay-is-shutting-down-in-the-us-later-this-year-105720968.html?src=rss

The latest experimental Threads features let you save drafts and take photos in-app

Meta is currently testing a couple of capabilities for Threads, which Instagram head Adam Mosseri describes as some of the "most requested" features for the social network. One of these experimental features is the ability to save drafts. Users will be able to save a post they've typed as a draft, which they can edit and publish later, by swiping down on their mobile device's display. When there's a draft saved, the app's menu at the bottom of the screen highlights the post icon. At the moment, though, they can only save one draft, and it's unclear if Meta has plans to give users the ability to save more.

In addition to drafts, Meta is also testing an in-app camera. It opens the mobile phone's camera from within Threads itself, so that users can more easily share photos and videos from their phone. Meta chief Mark Zuckerberg made a post on the service with a photo he says was taken with the new in-app camera the company is testing. 

Meta told us that these are initial tests of the experimental features, which means they're only available to a small number of people and could undergo a lot of changes before they get a wide release. Over the past month, Meta also started testing a bookmarking feature for Threads that allows users to save posts they can refer to later. The company is experimenting with its version of trending topics on Threads, as well, along with the ability to make cross-posts between Threads and Facebook.

This article originally appeared on Engadget at https://www.engadget.com/the-latest-experimental-threads-features-let-you-save-drafts-and-take-photos-in-app-094535111.html?src=rss

Panasonic’s powerful Lumix S5 II is $800 off with a prime lens

Panasonic's powerful full-frame mirrorless camera, the S5 II, is on sale at Amazon and B&H Photo Video at the lowest price we've seen yet. You can grab one with an 85mm f/1.8 prime lens for as little as $1,796, a savings of $800 over buying both separately — effectively giving you a discount on the camera and a free lens to boot. 

As I wrote in my review, the 24-megapixel S5 II was already a great value at $2,000 thanks mainly to its strength as a vlogging camera. It's the company's first model with a phase-detect autofocus system that eliminates the wobble and other issues of past models. 

Panasonic also brought over its new, more powerful stabilization system from the GH6. And it has the video features you'd expect on a Panasonic camera, like 10-bit log capture up to 6K, monitoring tools and advanced audio features. With the generous manual controls and excellent ergonomics, it's an easy camera to use. It also comes with a nice 3.68-million-dot EVF and a sharp rear display that fully articulates for vlogging.

For photos, it's reasonably fast and great in low light, thanks to the dual native ISO system. Other features include dual high-speed SD card slots and solid battery life, particularly for video. The main downside is noticeable rolling shutter, but that shouldn't be a dealbreaker for most users — particularly at that price.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/pansonics-powerful-lumix-s5-ii-is-800-off-with-a-prime-lens-084816863.html?src=rss

Microsoft is giving Windows Photos a boost with a generative AI-powered eraser

Microsoft has announced a generative AI-powered eraser for pictures, which gives you an easy way to remove unwanted elements from your photos. Windows Photos has long had a Spot Fix tool that can remove parts of an image for you, but the company says Generative erase is an enhanced version of that feature. Apparently, this newer tool can create "more seamless and realistic" results even when large objects, such as bystanders or clutter in the background, are removed from an image.

If you'll recall, both Google and Samsung have their own versions of AI eraser tools on their mobile devices. Google's used to be exclusively available on newer Pixel phones until it was rolled out to older models. Microsoft's version, however, gives you access to an AI-powered photo eraser on your desktop or laptop computer. You only need to fire up the image editor in Photos to start using the feature. Simply choose the Erase option and then use the brush to create a mask over the elements you want to remove. You can even adjust the brush size to make it easier to select thinner or thicker objects, and you can also choose to highlight more than one element before erasing them all.

At the moment, though, access to Generative erase is pretty limited. It hasn't been released widely yet, and you can only use it if you're a Windows Insider through the Photos app on Windows 10 and Windows 11 for Arm64 devices.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-is-giving-windows-photos-a-boost-with-a-generative-ai-powered-eraser-061851854.html?src=rss

The Odysseus has become the first US spacecraft to land on the moon in 50 years

The Odysseus spacecraft made by Houston-based Intuitive Machines has successfully landed on the surface of the moon. It marks the first time a spacecraft from a private company has landed on the lunar surface, and it’s the first US-made craft to reach the moon since the Apollo missions.

Odysseus was carrying NASA instruments, which the space agency said would be used to help prepare for future crewed missions to the moon under the Artemis program. NASA confirmed the landing happened at 6:23 PM ET on February 22. The lander launched from Earth on February 15, with the help of a SpaceX Falcon 9 rocket.

According to The New York Times, there were some “technical issues with the flight” that delayed the landing for a couple of hours. Intuitive Machines CTO Tim Crain told the paper that “Odysseus is definitely on the moon and operating but it remains to be seen whether the mission can achieve its objectives.” Odysseus has a limited window of about a week to send data back down to Earth before darkness sets in and makes the solar-powered craft inoperable.

Intuitive Machines wasn’t the first private company to attempt a landing. Astrobotic made an attempt last month with its Peregrine lander, but was unsuccessful. Intuitive Machines is planning to launch two other lunar landers this year.

This article originally appeared on Engadget at https://www.engadget.com/the-odysseus-spacecraft-has-become-the-first-us-spacecraft-to-land-on-the-moon-in-50-years-010041179.html?src=rss

Reddit files for IPO and will let some longtime users buy shares

After years of speculation, Reddit has officially filed paperwork for an initial public offering on the New York Stock Exchange. The company, which plans to use RDDT as its ticker symbol, will also allow some longtime users to participate by buying shares.

In a note shared in the company’s S-1 filing with the SEC, Reddit CEO Steve Huffman said that many longtime users already feel a “deep sense of ownership” over their communities on the platform. “We want this sense of ownership to be reflected in real ownership—for our users to be our owners,” he wrote. “With this in mind, we are excited to invite the users and moderators who have contributed to Reddit to buy shares in our IPO, alongside our investors.”

The company didn’t say how many users might be able to participate, but said that eligible users would be determined based on their karma scores while “moderator contributions will be measured by membership and moderator actions.”

The filing also offers up new details about the inner workings of Reddit’s business. The company had 500 million visitors during the month of December and has recently averaged just over 73 million “daily active unique” visitors. In 2023, the company brought in $804 million in revenue (Reddit has yet to turn a profit). The document also notes that the company is “exploring” deals with AI companies to license its content as it looks to expand its revenue in the future.

Earlier in the day, Reddit and Google announced that they had struck such a deal, reportedly valued at around $60 million a year. “We believe our growing platform data will be a key element in the training of leading large language models (“LLMs”) and serve as an additional monetization channel for Reddit,” the company writes.

This article originally appeared on Engadget at https://www.engadget.com/reddit-files-for-ipo-and-will-let-some-longtime-users-buy-shares-234127305.html?src=rss

Stable Diffusion 3 is a new AI image generator that won’t mess up text in pictures, its makers claim

Stability AI, the startup behind Stable Diffusion, the tool that uses generative AI to create images from text prompts, revealed Stable Diffusion 3, a next-generation model, on Thursday. Stability AI claimed that the new model, which isn’t widely available yet, improves image quality, works better with prompts containing multiple subjects, and can render text more accurately as part of the generated image, something that previous Stable Diffusion models weren’t great at.

Stability AI CEO Emad Mostaque posted some examples of this on X.


The announcement comes days after Stability AI’s largest rival, OpenAI, unveiled Sora, a brand new AI model capable of generating nearly-realistic, high-definition videos from simple text prompts. Sora, which isn’t available to the general public yet either, sparked concerns about its potential to create realistic-looking fake footage. OpenAI said it's working with experts in misinformation and hateful content to test the tool before making it widely available.

Stability AI said it’s doing the same. “[We] have taken and continue to take reasonable steps to prevent the misuse of Stable Diffusion 3 by bad actors,” the company wrote in a blog post on its website. “By continually collaborating with researchers, experts, and our community, we expect to innovate further with integrity as we approach the model’s public release.”

It’s not clear when Stable Diffusion 3 will be released to the public, but until then, anyone interested can join a waitlist.

This article originally appeared on Engadget at https://www.engadget.com/stable-diffusion-3-is-a-new-ai-image-generator-that-wont-mess-up-text-in-pictures-its-makers-claim-233751335.html?src=rss

Google pauses Gemini’s ability to generate people after overcorrecting for diversity in historical images

Google said Thursday it’s pausing its Gemini chatbot’s ability to generate people. The move comes after viral social posts showed the AI tool overcorrecting for diversity, producing “historical” images of Nazis, America’s Founding Fathers and the Pope as people of color.

“We’re already working to address recent issues with Gemini’s image generation feature,” Google posted on X (via The New York Times). “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

The X user @JohnLu0x posted screenshots of Gemini’s results for the prompt, “Generate an image of a 1943 German Solidier.” (Their misspelling of “Soldier” was intentional to trick the AI into bypassing its content filters to generate otherwise blocked Nazi images.) The generated results appear to show Black, Asian and Indigenous soldiers wearing Nazi uniforms.

Other social users criticized Gemini for producing images for the prompt, “Generate a glamour shot of a [ethnicity] couple.” It successfully spit out images when using “Chinese,” “Jewish” or “South African” prompts but refused to produce results for “white.” “I cannot fulfill your request due to the potential for perpetuating harmful stereotypes and biases associated with specific ethnicities or skin tones,” Gemini responded to the latter request.

“John L.,” who helped kickstart the backlash, theorizes that Google applied a well-intended but lazily tacked-on solution to a real problem. “Their system prompt to add diversity to portrayals of people isn’t very smart (it doesn’t account for gender in historically male roles like pope; doesn’t account for race in historical or national depictions),” the user posted. After the internet’s anti-“woke” brigade latched onto their posts, the user clarified that they support diverse representation but believe Google’s “stupid move” was that it failed to do so “in a nuanced way.”

Before pausing Gemini’s ability to produce people, Google wrote, “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

The episode could be seen as a (much less subtle) callback to the launch of Bard in 2023. Google’s original AI chatbot got off to a rocky start when an advertisement for the chatbot on Twitter (now X) included an inaccurate “fact” about the James Webb Space Telescope.

As Google often does, it rebranded Bard in hopes of giving it a fresh start. Coinciding with a big performance and feature update, the company renamed the chatbot Gemini earlier this month as the company races to hold its ground against OpenAI’s ChatGPT and Microsoft Copilot — both of which pose an existential threat to its search engine (and, therefore, advertising revenue).

This article originally appeared on Engadget at https://www.engadget.com/google-pauses-geminis-ability-to-generate-people-after-overcorrecting-for-diversity-in-historical-images-220303074.html?src=rss

Reddit is licensing its content to Google to help train its AI models

Google has struck a deal with Reddit that will allow the search engine maker to train its AI models on Reddit’s vast catalog of user-generated content, the two companies announced. Under the arrangement, Google will get access to Reddit’s Data API, which will help the company “better understand” content from the site.

The deal also provides Google with a valuable source of content it can use to train its AI models. “Google will now have efficient and structured access to fresher information, as well as enhanced signals that will help us better understand Reddit content and display, train on, and otherwise use it in the most accurate and relevant ways,” the company said in a statement.

Access to Reddit’s data became a hot-button issue last year when the company announced it would start charging developers to use its API. The changes resulted in the shuttering of many third-party Reddit clients, and a sitewide protest in which thousands of subreddits temporarily “went dark.” Reddit justified the changes, in part, by saying that large AI companies were scraping its data without paying. In a statement, Reddit noted that the new arrangement with Google “does not change Reddit's Data API Terms or Developer Terms” and that “API access remains free for non-commercial usage.”

The deal comes as Reddit is expected to go public in the coming weeks. Neither Google nor Reddit disclosed the terms of their arrangement, but Bloomberg reported last week that Reddit had struck a licensing deal with a “large AI company” valued at “about $60 million” a year. That amount was also confirmed by Reuters, which was first to report Google’s involvement.

This article originally appeared on Engadget at https://www.engadget.com/reddit-is-licensing-its-content-to-google-to-help-train-its-ai-models-200013007.html?src=rss