Google has announced a slew of Android updates to kick off MWC this year, including Gemini integration with Messages and AI-powered text summaries for when you’re driving. As of this week, Messages users will be able to access Google’s chatbot without leaving the texting app. Gemini in Messages can handle basic tasks like drafting messages and helping to plan events, or you can just chat with it if you’re bored. The feature is still in beta, and it’s only available to English-language Messages users for now, Google says.
Android Auto is also getting a boost from AI that could help minimize distractions from people texting you while you’re on the road. If the group chat is blowing up your phone with nonstop messages, or if someone is sending you novels of text, Android Auto will automatically summarize the messages and read you a more succinct version. It’ll also suggest replies and actions based on the messages, like sharing your ETA, so you can respond with a single tap and keep your focus on driving.
Google also announced some new accessibility features for Android at MWC, including AI-generated image captions in the Lookout app. It’ll be able to generate descriptions for images found online or received in messages and read them aloud to the user. The feature is only available in English to start, but is rolling out globally. Google’s Lens feature in Maps is getting an enhanced screen reader option as well, which will allow users to point their phone’s camera at something in front of them, like a restaurant or transit station, and hear information about it.
The Android updates also include new casting controls for Spotify via Spotify Connect, so users can switch seamlessly between devices, like from headphones to a speaker. The feature was already available for YouTube Music.
This article originally appeared on Engadget at https://www.engadget.com/google-brings-gemini-to-messages-and-adds-ai-text-summaries-for-android-auto-080051647.html?src=rss
Meta is currently testing a couple of capabilities for Threads that Instagram head Adam Mosseri describes as some of the "most requested" features for the social network. One of these experimental features is the ability to save drafts: users can save a post they've typed as a draft, to edit and publish later, by swiping down on their device's display. When a draft is saved, the post icon in the app's bottom menu is highlighted. At the moment, though, users can only save one draft, and it's unclear whether Meta plans to let them save more.
In addition to drafts, Meta is also testing an in-app camera. It opens the mobile phone's camera from within Threads itself, so that users can more easily share photos and videos from their phone. Meta chief Mark Zuckerberg made a post on the service with a photo he says was taken with the new in-app camera the company is testing.
Meta told us that these are initial tests of the experimental features, available only to a small number of people, and that they could undergo a lot of changes before any wide release. Over the past month, Meta also started testing a bookmarking feature for Threads that lets users save posts to refer to later. The company is experimenting with its own version of trending topics on Threads as well, along with the ability to cross-post between Threads and Facebook.
This article originally appeared on Engadget at https://www.engadget.com/the-latest-experimental-threads-features-let-you-save-drafts-and-take-photos-in-app-094535111.html?src=rss
Microsoft has announced a generative-AI-powered eraser for pictures, which gives you an easy way to remove unwanted elements from your photos. Windows Photos has long had a Spot Fix tool that can remove parts of an image for you, but the company says Generative erase is an enhanced version of that feature. Apparently, this newer tool can produce "more seamless and realistic" results even when large objects, such as bystanders or background clutter, are removed from an image.
If you'll recall, both Google and Samsung have their own versions of AI eraser tools on their mobile devices. Google's used to be exclusively available on newer Pixel phones until it was rolled out to older models. Microsoft's version, however, gives you access to an AI-powered photo eraser on your desktop or laptop computer. You only need to fire up the image editor in Photos to start using the feature. Simply choose the Erase option and then use the brush to create a mask over the elements you want to remove. You can even adjust the brush size to make it easier to select thinner or thicker objects, and you can also choose to highlight more than one element before erasing them all.
At the moment, though, access to Generative erase is pretty limited. It hasn't been released widely yet, and you can only use it if you're a Windows Insider through the Photos app on Windows 10 and Windows 11 for Arm64 devices.
This article originally appeared on Engadget at https://www.engadget.com/microsoft-is-giving-windows-photos-a-boost-with-a-generative-ai-powered-eraser-061851854.html?src=rss
NVIDIA is testing a new unified app that lets users adjust GPU settings, install software and fine-tune gameplay, all from the same place. Currently, you have to open the dated Control Panel app and do some heavy menu diving for tasks like configuring G-Sync, while an entirely separate "user-friendly" app, GeForce Experience, handles basic GPU adjustments, driver updates and quick settings. The new app collapses those two into one.
The appropriately named NVIDIA app is just a beta for now, but it seems to do a whole lot. You can use it to update drivers, discover and install standalone applications like GeForce Now and make all kinds of GPU adjustments. To further simplify things for PC gamers, you can also use it to fine-tune both game settings and driver settings. It’s pretty much a one-stop shop.
There’s a redesigned in-game overlay for easier access to recording tools and performance monitoring. The overlay also lets you apply various gameplay filters, including AI-powered filters available to GeForce RTX users. The app looks to be squarely aimed at those who balk at the perceived complexity of PC gaming. You can even use it to redeem bundles and rewards and opt into experimental features and new RTX capabilities.
Speaking of new RTX capabilities, the app lets users easily experiment with RTX Remix, the tool that adds AI-enhanced, upscaled textures to older games; the celebrated Half-Life 2 is getting an unofficial RTX remaster thanks to this technology. The app will also have access to a new feature called RTX Dynamic Vibrance, which beefs up visual clarity and improves on the Digital Vibrance feature found in the current Control Panel app.
This article originally appeared on Engadget at https://www.engadget.com/nvidia-is-testing-an-app-that-unifies-geforce-experience-and-control-panel-140037038.html?src=rss
Last year, a global survey crowned KeyShot the “Best Rendering Software,” with 88% of the designers surveyed picking it for its incredibly photorealistic rendering capabilities. Now, with KeyShot’s newly unveiled Physics Simulation and Camera Keyframe features, the software is growing even more powerful, bringing real-world physics and camera effects that make your renders pop even more.
I put KeyShot’s Physics Simulation feature to the ultimate test by rendering a dramatic domino chain reaction scene. Setting up the simulation took hardly any time, with easy controls that took mere minutes to get the hang of, and the results were jaw-dropping, if I do say so myself. In this article, I’ll show you how I pulled off one of my most exciting KeyShot rendering experiences ever: how I set up the domino scene, what parameters I entered into the Physics Simulation window, and how you can recreate the scene yourself. I’ll also share tips and tricks for creating incredibly real simulations of objects falling, bouncing, and colliding with each other, taking your KeyShot rendering experience to a level like never before.
The entire scene was modeled in Rhino 7, starting with a single domino, then creating a spiral curve and arraying multiple dominoes along it. The dominoes were spaced roughly 2 centimeters apart, ensuring the chain reaction would run smoothly from start to finish. The scene has a whopping 1,182 dominoes in total; a little ambitious, considering I was going to render the simulation on a 2022 gaming laptop.
Tilt the first domino to help kickstart the physics cycle
To use the simulation feature, import your scene into the latest version of KeyShot (2023-24), set the scale, add the materials, and pick the right environment. Before you use the physics feature, however, you need to prime your scene; in this case, that meant tilting the first domino forward so gravity would kick in during the simulation. The Physics Simulation feature can be found in the ‘Tools’ menu at the top. Clicking it opens a separate window with a preview viewport, a handful of settings, and an animation timeline at the bottom.
The Physics Simulation feature can be found in the Tools window
To begin with, pick the parts you want to apply physics to (these are the parts that will be influenced by gravity, so don’t pick stuff that remains stationary, like ground objects). The parts you don’t select will still influence your physics because moving objects will still collide with them. Once you’ve chosen what parts you want to move (aka the dominoes), select the ‘Shaded’ option so you can see them clearly in the viewport.
The settings on the left are rather basic but extremely powerful. Start by setting the maximum simulation time (short animations require only short simulations; since mine was a long chain reaction, I chose 200 seconds), followed by Keyframes Per Second, which controls how detailed or choppy your animation is (think FPS, but for the simulation). I prefer 25 keyframes per second since I render my animations at 25fps (just to keep the simulation light), but you can bump it up to 60 keyframes per second for smoother detail. You can then raise your animation FPS to render high-frame-rate videos that can later be slowed down for dramatic slow motion. Simulation quality dictates how accurately KeyShot resolves the physics; it defaults to 0.1, and if your simulation looks off, bump it up to a higher value.
The Physics Simulation Window
The remaining settings pertain to gravity and material properties. Gravity is set at Earth’s default of 9.81 m/s²; increasing it makes objects fall faster, and decreasing it makes them float around longer before descending. I set mine at 11 m/s² just to make sure the dominoes fell confidently. Friction determines the amount of drag between two colliding objects: higher friction causes more surface interference, like dropping a cube onto a ramp made of rubber, while lower friction enables smooth sliding, like the same cube on a polished metal ramp. To ensure the dominoes didn’t stick to each other as if they were made of rubber, I reduced my friction setting to 0.4. Finally, a Bounciness setting determines how two objects rebound when they collide: the lower the value, the less bounce-back; the higher, the more rebound. Since I didn’t want my dominoes bouncing off each other, I set it at a low 0.01. Once you’re done, hit the Begin Simulation button to watch the magic unfold.
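If you want to build intuition for what these parameters do before opening the panel, here’s a tiny Python sketch of a single object dropped onto the ground. The gravity, keyframes-per-second, and bounciness values mirror the panel’s settings in spirit, but this is purely a conceptual illustration, not KeyShot’s actual engine or scripting API.

```python
# Toy 1-D physics loop illustrating the KeyShot panel's parameters:
# gravity, keyframes per second, and bounciness (restitution).

def simulate_drop(height=1.0, gravity=9.81, kps=25, bounciness=0.01, seconds=2.0):
    """Drop an object from `height` metres and return its sampled positions.

    kps: keyframes per second -- more keyframes sample the motion more
         finely, giving a smoother, more detailed animation.
    bounciness: fraction of impact velocity retained after hitting the
         ground; near 0 means almost no rebound, like the dominoes.
    """
    dt = 1.0 / kps
    y, v = height, 0.0
    frames = []
    for _ in range(int(seconds * kps)):
        v -= gravity * dt          # gravity accelerates the object downward
        y += v * dt
        if y <= 0.0:               # collision with the ground plane
            y = 0.0
            v = -v * bounciness    # low bounciness: almost no bounce-back
        frames.append(y)
    return frames

frames = simulate_drop()
```

Raising `kps` here is exactly why a higher keyframe rate yields smoother motion: the same fall is sampled at more points, which is also what makes high-frame-rate output slow down gracefully.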
If you aren’t happy with your simulation, you can stop it midway and troubleshoot. Usually, tinkering with the settings helps achieve the right simulation, but here’s something I learned, too: bigger objects fall slower than smaller objects, so playing around with the size and scale of your model can really affect the simulation. If, however, you’re happy with your simulation (you can scrub through it in the timeline below the viewport), just hit the blue ‘OK’ button, and you’ve successfully created your first physics simulation!
The simulation then becomes a part of KeyShot’s Animation timeline, and you can then play around with camera angles and movements to capture your entire scene just the way you visualized it. I created multiple clips of my incredibly long domino chain reaction (in small manageable chunks because my laptop crashed at least 8 times during this) and stitched them together in a video editing app.
Comparing KeyShot and Blender’s Physics Control Panels
The Physics Simulation feature in KeyShot 2023-24 is incredibly impressive. For starters, it’s a LOT easier than alternatives like Blender, which can feel a little daunting with the hundreds of settings it asks you to choose from. Figuring out physics simulation in KeyShot takes just a few minutes (although the actual simulation can take a while if you’re running something complex), making an already powerful rendering tool feel even more limitless!
That being said, there’s some room for growth. Previous experiments with the simulation tool produced some strange results: falling objects sometimes chose their own direction, making the simulation feel odd (I made a watch fall, and the entire thing disassembled and scattered in mid-air instead of falling together and breaking apart on impact). Secondly, objects can sometimes pass through each other instead of colliding, so tinker with the quality settings to get the perfect result. Thirdly, you can’t yet choose different bounciness values for different objects in the same simulation, although I’m sure KeyShot is working on it. Finally, it would be absolutely amazing if there were a ‘slow-motion’ feature. The current workaround is to bump up the keyframe rate and bring down the gravity, but that can cause objects to drift away after colliding instead of falling downward in slow motion.
So there you have it! You can use this tutorial to animate your own domino sequence, too, or better still, create a new simulation based on your own ideas! If you do, make sure to participate in the 2024 KeyShot Animation Challenge to stand a chance to win some exciting prizes. Hurry! The competition ends on March 10th, 2024!
Apple today announced Sports, a new iPhone app offering real-time stats for a number of major leagues. Once it's installed, users can set their favorite team and get a trove of data on their lock screen in the Live Activities box when the team is playing. Available for free starting today in the US, Canada and the UK, the app currently supports basketball, hockey and soccer. The company added that other sports, including baseball and American football, will debut ahead of their upcoming seasons.
There are plenty of reasons you might not be able to watch your team of choice play live. You may have a prior engagement, the game may not be televised, or Todd Boehly has done so much damage to the club you can’t bear to look at it any more. In those situations, push alerts from major sports apps have been a lifeline, but they're not always entirely reliable.
Now, it has been possible to get this working since iOS 16, if you fancied messing around in the depths of the Apple TV app. And some third-party platforms, like MLB’s homegrown app, would put this data in your lock screen or Dynamic Island. But Apple says that its own setup offers a “simple and fast way to stay up to speed on the teams and leagues they love.” The setup will also sync up with any sports preferences already stored in the Apple TV or Apple News apps.
Of more concern is that Sports will also offer up live betting odds for the games as they’re in play. It’s worth noting it will be possible to deactivate the live odds feature in settings, but it seems like it would have been smarter and less potentially harmful to make that opt-in, rather than opt-out.
Apple Sports is available to download now in English. French and Spanish are supported where available.
This article originally appeared on Engadget at https://www.engadget.com/apple-sports-puts-real-time-scores-on-your-iphone-lock-screen-140050382.html?src=rss
Google has released an open AI model called Gemma, which it says is created using the same research and technology that was used to build its Gemini AI models. The company says Gemma is its contribution to the open community and is meant to help developers "in building AI responsibly." As such, it also introduced the Responsible Generative AI Toolkit alongside Gemma. It contains a debugging tool, as well as a guide with best practices for AI development based on Google's experience.
The company has made Gemma available in two different sizes, Gemma 2B and Gemma 7B, both of which come in pre-trained and instruction-tuned variants and are lightweight enough to run directly on a developer's laptop or desktop computer. Google says Gemma surpasses much larger models on key benchmarks and that both model sizes outperform other open models out there.
In addition to being powerful, the Gemma models were trained to be safe. Google used automated techniques to strip personal information from the data it used to train the models, and it used reinforcement learning based on human feedback to ensure Gemma's instruction-tuned variants show responsible behaviors. Companies and independent developers could use Gemma to create AI-powered applications, especially if none of the currently available open models are powerful enough for what they want to build.
Google has plans to introduce even more Gemma variants in the future for an even more diverse range of applications. That said, those who want to start working with the models right now can access them through data science platform Kaggle, the company's Colab notebooks or through Google Cloud.
This article originally appeared on Engadget at https://www.engadget.com/google-introduces-a-lightweight-open-ai-model-called-gemma-130053289.html?src=rss
Apple has been pushing iPads, particularly the iPad Pros, as the next wave of computing, practically replacing laptops for some common computing tasks, including content creation. Despite the rich variety of apps for these slates, however, there is still software and work that can only be handled by more powerful computers like Macs and MacBooks. And although Apple’s computers have long been loved by designers and artists, the company itself has made no tools to support those use cases, such as drawing tablets or even specialized controllers. That leaves the market wide open for manufacturers like Wacom and its drawing tablets, but it also forces people to buy those products when they already have a perfectly capable iPad with an Apple Pencil. That’s where Astropad’s latest product comes in, bridging the divide between Macs and iPads once again, but with a curious twist.
In a nutshell, Astropad Slate is an app that lets you remotely control a Mac using an iPad, Pro or otherwise. You can connect over Wi-Fi, a USB cable, or even peer-to-peer networking. An Apple Pencil is nice to have, but it isn’t a requirement: with just your fingers, you can control the Mac as if it were a gigantic touchpad, complete with gestures like pinching and two-finger scrolling.
The Slate app really shines, however, when you involve an Apple Pencil, which is supported by most iPads nowadays. With this precise tool, you can not only hover over the Mac's user interface but also turn handwritten scribbles into text, practically replacing the keyboard. Of course, creators, designers, and artists are more likely to use the app’s ability to turn the iPad into a drawing tablet, albeit one without a screen.
This would be similar to the older and cheaper drawing slates that some artists prefer for their distraction-free experience. It does, however, take a bit of getting used to because you won’t be looking at where your hand is going, unlike the analog pen and paper experience. That does help you focus more on what’s happening on screen and, at least for some, offers a more ergonomic position since you won’t be craning your neck downward.
For those who prefer a more “conventional” display-tablet experience, Astropad also offers Astropad Studio, which turns the iPad into something like a Wacom Cintiq and is even compatible with Windows PCs. All that power requires a $79.99 annual subscription, however, while the simpler Astropad Slate is a one-time $19.99 purchase.
Apple has explained why it's disabling progressive web apps (PWAs) in the EU, it wrote in updated developer notes seen by TechCrunch. The news follows users noticing that web apps were no longer functional in Europe with recent iOS 17.4 beta releases. Apple said it's blocking the feature in the region due to new rules around browsers in Europe's Digital Markets Act (DMA).
Web apps behave much like native apps, allowing dedicated windowing, notifications, long-term local storage and more. European users tapping web app icons will see a message asking if they wish to open them in Safari instead or cancel. That means they act more like web shortcuts, creating issues like data loss and broken notifications, according to comments from users seen by MacRumors.
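For context, what separates an installable web app from a plain web shortcut is, in outline, a manifest file the site ships describing how it should behave when installed. This generic example (the names and icon path are placeholders, not tied to any specific app) shows the kind of metadata that lets a web app get its own window and icon:

```json
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#0055ff",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

The `"display": "standalone"` member is what requests the dedicated, browser-chrome-free window; when iOS ignores it and opens the URL in Safari instead, the app effectively degrades to the web shortcut behavior users are describing.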
The problem, according to Apple, is a new DMA requirement that it allow browsers that don't use its WebKit architecture. "Addressing the complex security and privacy concerns associated with web apps using alternative browser engines would require building an entirely new integration architecture that does not currently exist in iOS and was not practical to undertake given the other demands of the DMA and the very low user adoption of Home Screen web apps," the company wrote.
However, the Open Web Advocacy organization disagrees, as it writes in its latest blog:
Some defend Apple's decision to remove Web Apps as a necessary response to the DMA, but this is misguided.
Apple has had 15 years to facilitate true browser competition worldwide, and nearly two years since the DMA’s final text. It could have used that time to share functionality it historically self-preferenced to Safari with other browsers. Inaction and silence speaks volumes.
The complete absence of Web Apps in Apple's DMA compliance proposal, combined with the omission of this major change from Safari beta release notes, indicates to us a strategy of deliberate obfuscation. Even if Apple were just starting to internalize its responsibilities under the DMA, this behaviour is unacceptable. A concrete proposal with clear timelines, outlining how third party browsers could install and power Web Apps using their own engines, could prevent formal proceedings, but this looks increasingly unlikely. Nothing in the DMA compels Apple to break developers' Web Apps, and doing so through ineptitude is no excuse.
The change, spotted earlier by researcher Tommy Mysk, arrived with the second iOS 17.4 beta, but many observers first thought it was a bug. "The EU asked for alternative app stores and Apple took down web apps. Looks like the EU is going to rue the day they have asked Apple to comply with the #DMA rules," he posted on X.
According to Apple's App Store Guidelines, web apps are supposed to be an alternative to the App Store model. Considering that the EU's DMA is designed to break the App Store monopoly, the move to disable them altogether is bound to cause friction. The EU, Japan, Australia and the UK have previously criticized the requirement for WebKit to run PWAs, according to the Open Web Advocacy (OWA).
Apple said it regrets any impact of the change, but said it was required "as part of the work to comply with the DMA." The company has already been accused by developers of malicious compliance with the DMA over the fees it charges developers to bypass the App Store, with Spotify CEO Daniel Ek describing the scheme as "extortion."
This article originally appeared on Engadget at https://www.engadget.com/apple-confirms-home-screen-web-apps-will-no-longer-work-on-european-ios-devices-112527560.html?src=rss