If you're a dedicated Google Maps user like me, then you know it's not perfect. But Google is now announcing some improvements, with a range of new features for Maps and Waze. The one I find most exciting is the additional guidance on entering buildings and where to park. In the coming weeks, Maps will start highlighting your destination and its entrance as you approach, so you (hopefully!) don't have to circle the block three times in the dark.
Google is also making it easier to report incidents while using Maps, increasing the size of the report icons so you can share quickly — and safely — while on the go. You can also tap to confirm a previously reported incident as you approach it.
Waze is getting three updates, including new camera alerts. Waze will now be able to warn you when you're approaching a camera and tell you what it's monitoring, whether that's speed, seat belt use or carpool lane compliance. Waze will also notify you if there's a traffic event nearby or close to one of your starred locations, and you can then send an alert to a friend or family member. Both of these updates are rolling out now on iOS and Android. Rounding out Waze's updates is the ability to get navigation guidance even when your phone is locked. This feature will launch globally on Android soon and arrive on iOS in the fall.
This article originally appeared on Engadget at https://www.engadget.com/google-maps-will-show-you-where-to-enter-your-destination-130021496.html?src=rss
In a fireside chat on Monday between NVIDIA CEO Jensen Huang and Meta CEO Mark Zuckerberg at SIGGRAPH 2024, the latter dropped the f-bomb. After exchanging leather jackets (apparently two billionaires can’t get custom-made jackets that fit), the two talked about the future of AI, chatbots and open large language models.
Zuckerberg launched into a lengthy rant about his frustrations with “closed” ecosystems, like Apple’s App Store. None of that is new — the Meta founder has been feuding with Apple for years.
Zuckerberg, decked out in the aforementioned leather jacket and chain, said: “There have been too many things that I’ve tried to build and have been told ‘nah, you can’t really build that’ by the platform provider that, at some level, I’m just like, ‘nah, fuck that.’”
It’s the latest public step in his rebrand/midlife crisis/bit of both. MMA training, “Carthage must die” tees and rebellious banter all have more than a whiff of Succession’s Kendall Roy.
It’s always fun to do a 180 on a newsletter from the day before. Apple’s long-awaited take on artificial intelligence is, well, rolling out. Whoops.
The developer betas for iOS 18.1, iPadOS 18.1 and macOS Sequoia 15.1 just dropped, and they include some of the first Apple Intelligence features. If you have an Apple developer account, you can update your software and go into your settings to find a new option for Apple Intelligence. There, you’ll have to join a waitlist, but it shouldn’t take longer than a few hours.
The update includes writing tools for proofreading, rewriting or summarizing text. You’ll also gain the ability to create Memories in the redesigned Photos app, as well as transcribe live calls in the Phone app. Features not yet available are Genmoji, Image Playground, ChatGPT integration, Cleanup in Photos and upgraded Siri.
Now that there’s more foldable competition than ever, how does Samsung’s latest flip phone fare? While the Z Flip 6’s design has remained largely the same, Samsung made several under-the-hood upgrades this year, with improved battery life and cameras. It makes the best case yet for mainstream foldables, but the company could do more, especially with the secondary front screen. That said, the new AI features are a lot of fun.
Sony announced a themed PS5 DualSense controller to coincide with its upcoming Astro Bot game. The game, like the VR title before it, taps into all the tricks and features of the DualSense controller, so the collab is a no-brainer in a lot of ways. It costs $80 and ships September 6, the same day as the game.
This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-mark-zuckerberg-is-surprisingly-angry-about-closed-platforms-115711926.html?src=rss
Don't call it AI, but Apple's long-awaited take on artificial intelligence is finally rolling out today. Well, in limited form, anyway. The developer betas for iOS 18.1, iPadOS 18.1 and macOS Sequoia 15.1 just dropped, and they include some of the first Apple Intelligence features available to a broader, public group of testers. To be clear, this isn't the full release that was rumored to be delayed till October. These updates are part of an early preview for developers to test.
How to get the new Apple Intelligence features
Starting today, those with Apple developer accounts will be able to update their software and go into their settings to see a new option for Apple Intelligence. There, you'll have to join a waitlist, though it shouldn't take longer than a few hours for you to gain access to the new features.
It's important to note that you have to have either an iPhone 15 Pro or Pro Max to use the new Apple Intelligence features in the iOS 18.1 developer beta, or an iPad or Mac with an M1 chip or newer for the iPadOS 18.1 preview. If your device and Siri's languages are not set to US English, you'll have to change them both to that before you can access the Apple Intelligence setting.
You'll also be running software that might be unstable or buggy, so be sure to back up your device before installing the developer beta.
What Apple Intelligence features are available now?
Once you've been granted access, Apple will deliver a notification to your device. The new stuff you'll be able to play with in this version of the beta includes writing tools for proofreading, rewriting or summarizing text. You'll also gain the ability to create Memories in the redesigned Photos app, as well as transcribe live calls in the Phone app (or audio in Notes). Apple Intelligence can produce summaries of those clips.
Part of the redesigned Siri is also available, including the new glowing borders, the ability to type to the assistant and its ability to understand requests even if you stutter or interrupt yourself mid-speech.
Features that aren't yet available are Genmoji, Image Playground, ChatGPT integration, Cleanup in Photos and the personal context and in-app actions for Siri. More should arrive in future betas, and as a reminder the full, general release of iOS 18, iPadOS 18 and macOS Sequoia is expected later this year. Apple Intelligence features are slated to roll out in 2024 and over the course of the next year.
Update, July 29 2024, 5:15PM ET: This story has been updated to include more details around what Apple Intelligence features are available in the developer beta, as well as required language settings.
This article originally appeared on Engadget at https://www.engadget.com/apple-intelligence-is-here-as-part-of-the-ios-181-developer-beta-170836131.html?src=rss
There's word going around that X just enabled a setting that lets it train Grok on public tweets, as well as any interactions users have with the chatbot. That's not entirely true: a help page instructing users how to opt out of X using their data to train Grok has been live since at least May. X just never made it crystal clear that it was opting everyone into this, which is a sketchy move. If you don't want a bad chatbot to use your bad tweets for training, it's thankfully easy to switch that off.
You just need to uncheck a box from the Grok data sharing tab in the X settings. If that link doesn't work, you can go to Settings > Privacy and Safety > Grok. For the time being, the setting isn't accessible through X's mobile apps (the company says it will be soon), so you'll have to uncheck the box on the web for now. It's also worth noting that Grok isn't trained on any tweets from private X accounts.
All X users have the ability to control whether their public posts can be used to train Grok, the AI search assistant. This option is in addition to your existing controls over whether your interactions, inputs, and results related to Grok can be utilized. This setting is…
One of X's selling points for Grok when it rolled out the chatbot was that it has the advantage of using real-time information published on the platform — in other words, users' tweets. That only works if users opt in or are automatically enrolled into sharing their data with the chatbot. But X isn't exactly the pinnacle of truth and accuracy. It's full of pranksters, and lifting their jokes might be one of the reasons why Grok keeps on getting stuff wrong. In any case, it's not exactly uncommon for AI models to be trained on material without explicit permission from the original creators.
This article originally appeared on Engadget at https://www.engadget.com/heres-how-to-stop-groks-ai-models-using-your-tweets-for-training-161041266.html?src=rss
Sonos seriously stepped in it a couple of months back when it released an overhauled first-party mobile app that shipped with a number of missing features, including core functions like sleep timers and alarms. Many of the company’s speakers would not appear as a pairing option, and it became extremely difficult to precisely adjust the volume of a paired speaker. Additionally, music search and playback were both negatively impacted by the change, leading to numerous customer complaints.
Now, the company has apologized for releasing the half-baked app. CEO Patrick Spence whipped up a blog post to address the “significant problems” with the new software.
“There isn’t an employee at Sonos who isn’t pained by having let you down, and I assure you that fixing the app for all of our customers and partners has been and continues to be our number one priority,” he wrote.
Spence also wrote that the company had planned to quickly incorporate the missing features and patch up any errors, but these fixes were delayed by a “number of issues” that were unique to the update. He did confirm that Sonos has been actively pushing out patches approximately every two weeks to address a wide variety of concerns.
Additionally, he outlined the company’s future roadmap for getting the app into proper working order. Upcoming fixes include increased stability when pairing new products and enhancing configuration options with regard to the music library. Volume responsiveness is also getting a refresh, as is the alarm clock. As a matter of fact, the entire user interface is getting a complete overhaul that is “based on customer feedback.”
All of these changes will be released via a number of app updates from now until October. Spence says he knows the company has work to do to “earn back” the trust of loyal Sonos customers. In better news, it did just release some nifty headphones.
This article originally appeared on Engadget at https://www.engadget.com/sonos-apologized-for-messing-up-its-app-and-has-offered-a-roadmap-for-fixing-everything-191528422.html?src=rss
Back when Max was still known as HBO Max, it released a redesigned app that added SharePlay for Apple devices, but only in the US. Now, the streaming service is rolling out the feature worldwide. SharePlay is available to all Max users on Ad-Free and Ultimate Ad-Free plans, allowing them to hold and join watch parties over FaceTime and iMessage, no matter where they are.
Users can start watching with friends by hitting the "share" button either on the details section of each title or within the FaceTime app. Each session can have as many as 32 participants, but they all have to be Max subscribers. That means people from regions where Max isn't available, such as many Asian countries, won't be able to hop on and watch with their pals in the US or Europe. Warner Bros. is planning to expand Max's reach to Southeast Asia later this year, but it warns on its website that the timeline could still change.
SharePlay for Max works on iPhones, iPads, Apple TVs and Vision Pro headsets. To initiate a watch party on iPhones, iPads and Vision Pros, users have to find the Share icon on the details page of a show or a movie, enter the contacts they want to share with and initiate a FaceTime call. If they choose Messages on their mobile devices, their friends will get a message asking them to join SharePlay. On Apple TV, users will have to open FaceTime first before clicking the SharePlay button and choosing Max from the app list.
This article originally appeared on Engadget at https://www.engadget.com/maxs-shareplay-feature-for-ios-is-now-available-to-all-ad-free-subscribers-040624031.html?src=rss
There are some limitations for now. Availability will vary by region and Maps is only available in English on the web at the outset. As things stand, you can access Apple Maps from Safari and Chrome on Mac and iPad. Windows PC users can access the service via Chrome and Edge. Apple says it will expand the web experience to other languages, devices and browsers over time, but for now at least, iPhone users will need to keep using the Maps app.
The web version of Apple Maps includes directions; guides; opening hours, reviews and other helpful information for businesses; and actions such as ordering food. Apple will add other features, including Look Around (i.e., the company's version of Street View), in the coming months.
After many years of restricting Maps to an app, Apple might be trying to take on Google at its own game. Google Maps has, for instance, long allowed developers to embed a section of a map on websites. Apple says devs will be able to link to its maps on the web to offer their users driving directions, information about places and more.
Expanding beyond the app is a smart idea and it could help Apple Maps reach more eyeballs. The company also started offering a web version of Apple Music several years ago.
This article originally appeared on Engadget at https://www.engadget.com/apple-maps-is-now-available-on-the-web-in-beta-193648138.html?src=rss
Google just announced a suite of updates to the Play Store in an attempt to make it more fun to use. This is part of a larger move by the company to turn its online marketplace into “an end-to-end experience that’s more than a store.” You read that right. They want us to hang out on Google Play.
Here’s what the company has planned. The update brings AI-generated review summaries that pull from user reviews to develop a consensus. You’ve likely already encountered this type of thing on Facebook and while using Google search. The company first announced this feature at this year’s I/O event.
This AI-adjacent approach will also apply to auto-generated FAQs about each app that are powered by Gemini models. Additionally, there will be AI-generated highlights that offer a quick summary of a particular app. Google showed off a still image of this for a photo editing app in which the highlights included the number of filters and layouts available, in addition to tools and sharing options. This AI approach will also let users quickly compare apps in similar categories.
Google’s also rolling out shared spaces on the Play Store. These aren’t communities or mini social networks, like Reddit or something, but rather splash pages for various topics of interest. The company started this project with a pilot involving cricket. The shared space gave users in India the ability to “explore all their cricket content from across various channels in one, convenient spot.” This included relevant videos, around 100 curated cricket-related apps and some simple user polls. The next curated space will be about Japanese manga. There has been no word as to when this feature will expand into multiple categories available to global users.
The entire “shopping for a new game to play” experience is also getting an upgrade, focused primarily on discovery. Google promises “enriched game details” pages, complete with YouTube videos from developers and clearly marked promotions, which reminds me of Steam. This even extends to the post-purchase experience, as returning users will see updated developer notes and a section for tips and tricks. The program is in early access and currently only available to English-language users. There are also some new games coming to Google’s oft-overlooked Play Pass, like Asphalt Legends Unite and Candy Crush Saga, and a feature that lets users play multiple games at once on PC.
Finally, there’s some personalization stuff in this update. The new Collections feature provides custom categories based on previously-purchased apps. This means that each Google Play homescreen will be different for each user, offering an easy way to continue binging a show or finishing a video game.
Many of these upgrades begin rolling out today, though some are still in the early access stage. Others, like the shared spaces feature, still have some kinks to work out.
This article originally appeared on Engadget at https://www.engadget.com/google-is-updating-the-play-store-with-ai-powered-app-reviews-and-curated-spaces-130036843.html?src=rss
CrowdStrike has blamed faulty testing software for a buggy update that crashed 8.5 million Windows machines around the world, it wrote in a post-incident review (PIR). "Due to a bug in the Content Validator, one of the two [updates] passed validation despite containing problematic data," the company said. It promised a series of new measures to avoid a repeat of the problem.
The massive BSOD (blue screen of death) outage impacted multiple companies worldwide, including airlines, broadcasters, the London Stock Exchange and many others. The problem forced Windows machines into a boot loop, with technicians requiring local access to machines to recover them (Apple and Linux machines weren't affected). Many companies, like Delta Air Lines, are still recovering.
To detect breaches and other types of attacks, CrowdStrike has a tool called the Falcon Sensor. It ships with content that functions at the kernel level (called Sensor Content) and uses a "Template Type" to define how it defends against threats. If something new comes along, CrowdStrike ships "Rapid Response Content" in the form of "Template Instances."
A Template Type for a new sensor was released on March 5, 2024 and performed as expected. However, on July 19, two new Template Instances were released and one (just 40KB in size) passed validation despite having "problematic data," CrowdStrike said. "When received by the sensor and loaded into the Content Interpreter, [this] resulted in an out-of-bounds memory read triggering an exception. This unexpected exception could not be gracefully handled, resulting in a Windows operating system crash (BSOD)."
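For a concrete picture of that failure mode, here's a minimal, hypothetical sketch in user-space Python. CrowdStrike's actual sensor is kernel-mode code and its content format isn't public, so the blob/offset/length fields below are invented for illustration. The point is that a parser which validates bounds before reading rejects malformed data instead of reading past the end of its buffer:

```python
def load_template_instance(blob: bytes, offset: int, length: int) -> bytes:
    """Return a field from a content blob, refusing any out-of-bounds read.

    Hypothetical illustration only: blob, offset and length stand in for
    parsed Rapid Response Content; this is not CrowdStrike's real format.
    """
    if offset < 0 or length < 0 or offset + length > len(blob):
        # In user space this raises a catchable exception; in kernel code,
        # an unhandled read past the buffer can crash the whole OS (BSOD).
        raise ValueError("malformed template instance: out-of-bounds read")
    return blob[offset:offset + length]
```

Per the PIR, the sensor's Content Interpreter hit exactly this kind of out-of-bounds read, and because the resulting exception couldn't be handled gracefully in kernel mode, Windows crashed.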
To prevent a repeat of the incident, CrowdStrike promised to take several measures. First is more thorough testing of Rapid Response Content, including local developer testing, content update and rollback testing, stress testing, stability testing and more. It's also adding validation checks and enhancing error handling.
Furthermore, the company will start using a staggered deployment strategy for Rapid Response Content to avoid a repeat of the global outage. It'll also provide customers greater control over the delivery of such content and provide release notes for updates.
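As a rough illustration of what a staggered deployment can look like (the ring fractions and hashing scheme below are invented for this sketch, not CrowdStrike's actual rollout logic), each stage exposes a deterministic fraction of hosts, so a bad update reaches a small canary group before the rest of the fleet:

```python
import hashlib

# Hypothetical rollout rings: each stage covers a larger share of hosts,
# and a new stage only opens if earlier rings stay healthy.
ROLLOUT_RINGS = [0.01, 0.10, 0.50, 1.0]  # fraction of fleet per stage

def in_rollout(host_id: str, stage: int) -> bool:
    """Deterministically bucket a host into [0, 1) and check whether the
    current rollout stage includes it."""
    digest = hashlib.sha256(host_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < ROLLOUT_RINGS[stage]
```

Because the bucketing is a hash of the host ID rather than a random draw, a host that was in an early ring stays in every later ring, which keeps the exposed population stable as the rollout widens.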
However, some analysts and engineers think the company should have put such measures in place from the get-go. "CrowdStrike must have been aware that these updates are interpreted by the drivers and could lead to problems," engineer Florian Roth posted on X. "They should have implemented a staggered deployment strategy for Rapid Response Content from the start."
This article originally appeared on Engadget at https://www.engadget.com/crowdstrike-blames-bug-that-caused-worldwide-outage-on-faulty-testing-software-120057494.html?src=rss