Apple confirms it has blocked an iMessage exploit

It was never going to last. Ever since it launched this week, the Beeper Mini app, which let Android users get iMessage text support, was expected to be in trouble as soon as it caught Apple's attention. And catch Apple's attention it has. Yesterday, the entire Beeper platform appeared to be on the fritz, prompting speculation that the iPhone maker was shutting down the iMessage workaround. As of this morning, Beeper Mini was still posting on X (formerly Twitter) that it was working on a fix for the outage, but with an announcement from Apple today, all that may be for naught.

"We took steps to protect our users by blocking techniques that exploit fake credentials in order to gain access to iMessage," Apple said. "These techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks. We will continue to make updates in the future to protect our users."

Though Apple does not mention any apps by name, it stands to reason, given the timing of Beeper Mini's launch and its recent troubles, that this refers to the loophole the platform was using.

Beeper's method, devised by a high-school student, sent users' texts to Apple's servers before they moved on to their intended recipients. Would-be messengers didn't even need an Apple ID to access iMessage via Beeper Mini, though the Android app did offer end-to-end encryption for conversations between users on the two operating systems.

Apple also said today that it cannot verify end-to-end encryption for messages sent through unauthorized means that pose as having valid credentials. Beeper had anticipated that this workaround might one day be shut down, and it looks like the Android-iOS messaging divide remains intact. For now.


How to use Personal Voice on iPhone with iOS 17

Ahead of the International Day of Persons with Disabilities last Sunday, Apple released a short film that showcased its Personal Voice accessibility feature, which debuted earlier this year in iOS 17. Personal Voice allows users to create digital versions of their voice to use on calls, supported apps and Apple’s own Live Speech tool.

For those at risk of permanently losing their voice due to conditions like Parkinson’s disease, multiple sclerosis, ALS and vocal cord paralysis, no longer sounding like themselves can be yet another form of identity loss. Being able to create a copy of your voice while you’re still able might help alleviate the fear that you’ll never feel like yourself again, or that your loved ones won’t know what you sound like.

All iOS 17, iPadOS 17 and macOS Sonoma users can create a personal voice in case they need it in the future — whether temporarily or for long-term use. I found the process (on my iPhone 14 Pro) pretty straightforward and was surprisingly satisfied with my voice. Here’s how you can set up your own Personal Voice, as long as you’ve upgraded to iOS 17, iPadOS 17 or macOS Sonoma (on Macs with Apple Silicon).

Before you start the process, make sure you have a window of about 30 minutes. You’ll be asked to record 150 sentences, and depending on how quickly you speak, it could take some time. You should also find a quiet place with minimal background noise and get comfortable. It’s also worth having a cup of water nearby and making sure your phone has at least 30 percent battery.

How to set up Personal Voice on iPhone

When you’re ready, go to the Personal Voice menu by opening Settings and finding Accessibility > Personal Voice (under Speech). Select Create A Personal Voice, and Apple will give you a summary of what to expect. Hit Continue, and you’ll see instructions like “Find a quiet place” and “Take your time.”

Importantly, one of the tips is to “Speak naturally.” Apple encourages users to “read aloud at a consistent volume, as if you’re having a conversation.” After you tap Continue on this page, there is one final step where your phone uses its microphone to analyze the level of background noise, before you can finally start reading prompts.

The layout for the recording process is fairly intuitive. Hit the big red record button at the bottom, and read out the words in the middle of the page. Below the record button, you can choose from “Continuous Recording” or “Stop at each phrase.”

Screenshot: Setting up Personal Voice in iOS 17.

In the latter mode, you’ll have to tap a button each time you’ve recorded a phrase, while Continuous is a more hands-free experience that relies on the phone to know when you’re done talking. For those with speech impairments or who read slowly, the continuous mode could feel stressful. It happened just once for me, but the iPhone trying to skip ahead to the next phrase before I was ready was enough to make me feel like I needed to be quick with my reactions.

Personal Voice on iOS 17: First impressions

Still, for the most part the system was accurate at recognizing when I was done talking, and offered enough of a pause that I could tap the redo button before moving to the next sentence. The prompts mostly consisted of historical and geographical information, with the occasional expressive exclamation thrown in. There’s a fairly diverse selection of phrases, ranging from simple questions like “Can you ask them if they’re using that chair?” to forceful statements like “Come back inside right now!” or “Ouch! That is really hot!”

I found myself trying to be more exaggerated when reading those particular sentences, since I didn’t want my resulting personal voice to be too robotic. But it was exactly then that I realized the problem inherent in the process. No matter how well I performed or acted, there would always be an element of artifice in the recordings. Even when I did my best to pretend that something was really hot and hurt me, it still wasn’t a genuine reaction. And there’s definitely a difference between how I sound when narrating sentences and when chatting with my friends.

That’s not a ding on Apple or Personal Voice, but simply an observation that there is a limit to how well my verbal self can be replicated. When you’re done with all 150 sentences, Apple explains that the process “may need to complete overnight.” It recommends that you charge and lock your iPhone, noting that your Personal Voice “will be generated only while iPhone is charging and locked” and that you’ll be alerted when it’s ready to use. It’s worth noting that during this time, Apple is training neural networks entirely on the device, not in the cloud, to generate the text-to-speech model.

Screenshot: The Personal Voice recording process, with a sample sentence to read.

In my testing, after 20 minutes of leaving my iPhone alone, the process was only 4 percent complete. Twenty minutes after that, the Personal Voice was only 6 percent done. So this is definitely something you’ll need to allocate hours, if not a whole night, for. If you’re not ready to abandon your device for that long, you can still use your phone — just know that it will delay the process.

When your Personal Voice is ready, you’ll get a notification and can then head to Settings to try it out. On the same page where you started the creation process, you’ll see options to share your voice across devices, as well as to allow apps to request to use it. The former stores a copy of your voice in iCloud for use on your other devices. Your data will be end-to-end encrypted in transit, and the recordings you made will be stored only on the phone you used to create your voice, but you can export your clips in case you want to keep a copy elsewhere.
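For developers, that “allow apps to request to use it” toggle maps onto a Personal Voice authorization flow in AVFoundation on iOS 17. The Swift sketch below is a minimal illustration of how a third-party app might request access and speak with a shared personal voice; the PersonalVoiceSpeaker class is a hypothetical name, and the exact AVFoundation calls and behavior should be checked against Apple’s current documentation.

```swift
import AVFoundation

// Minimal sketch (hypothetical helper) of an app asking to use a shared
// Personal Voice on iOS 17 and speaking with it through AVSpeechSynthesizer.
final class PersonalVoiceSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        // Prompt the user for permission to use voices they've shared with apps.
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { [weak self] status in
            guard let self, status == .authorized else {
                print("Personal Voice unavailable, status: \(status)")
                return
            }
            // Look for a voice flagged as a personal voice among the installed voices.
            let personalVoice = AVSpeechSynthesisVoice.speechVoices()
                .first { $0.voiceTraits.contains(.isPersonalVoice) }

            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = personalVoice // nil falls back to the system default voice
            self.synthesizer.speak(utterance)
        }
    }
}

// Usage (keep the speaker alive so speech isn't cut off when it's deallocated):
// let speaker = PersonalVoiceSpeaker()
// speaker.speak("How is the weather today?")
```

If the user hasn’t created a voice, or hasn’t shared one with apps, the lookup comes back empty and the sketch simply falls back to the default system voice.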

How to listen to and use Personal Voice

You can name your Personal Voice and create another if you prefer (you can generate up to three). To listen to the voice you’ve created, go back to the Speech section of the accessibility settings and select Live Speech. Turn it on, choose your new creation under Voices and triple-click your power button. Type something into the box and hit Send. You can decide if you like what you hear and whether you need to make a new Personal Voice.

At first, I didn’t think mine sounded expressive enough when I tried things like “How is the weather today?” But after a few days, I started entering phrases like “Terrence is a monster,” and it definitely felt a little more like me. Still robotic, but it felt like there was just enough Cherlynn in the voice that my manager would know it was me calling him names.

With concerns around deepfakes and AI-generated content at an all-time high this year, perhaps a bit of artifice in a computer-generated voice isn’t such a bad thing. I certainly wouldn’t want someone to grab my phone and record my digital voice saying things I would never utter in real life. Finding a way to give people a sense of self and improve accessibility while working with all the limits and caveats that currently exist around identity and technology is a delicate balance, and one that I’m heartened to see Apple at least attempt with Personal Voice.


Apple will offer RCS support starting in 2024

The green bubble-blue bubble divide may be getting smaller soon. Apple has confirmed it will support the RCS messaging standard that it's long eschewed. That's not to say that messages from Android devices will no longer appear green in Apple's Messages app. It does mean that texts from iPhones to non-iOS devices will be able to use the newer Rich Communication Services protocol, so they won't have to go through the aging SMS (and MMS) system. In a statement, Apple said, "We believe RCS Universal Profile will offer a better interoperability experience when compared to SMS or MMS."

That support will "work alongside iMessage, which will continue to be the best and most secure messaging experience for Apple users." With new features like voice memo transcriptions and Check In that aren't available on RCS, iMessage could still outshine default text messaging apps on Android. It also means there might not be any change to the colors of the conversation bubbles.

Google has long taken potshots at Apple for not supporting RCS, saying the texting experience between iPhones and non-iPhones is so outdated it might as well be happening on a pager. With RCS support, messages between Android and iOS devices will be more secure (than over SMS), and media can be shared at higher quality.

In a statement issued on Thursday, Google said that it was happy to see Apple support RCS. “We welcome Apple’s participation in our ongoing work with GSMA to evolve RCS and make messaging more equitable and secure, and look forward to working with them to implement this on iOS in a way that works well for everyone,” Google posted on X.

A GSMA spokesperson told Engadget earlier this year that the RCS Universal Profile (UP) "provides the industry with an open, consistent and global messaging service across networks and devices. It simplifies interoperability and enables OEMs and OS providers to achieve scale and give consumers a richer and more consistent messaging experience regardless of device or network."

It has not been entirely clear why Apple resisted adopting RCS until now, though security and the potential for spam are both possible factors. It took until August of this year for Google to enable end-to-end encryption (E2EE) in all RCS group chats in its Messages app for Android and Wear, while E2EE has been available for individual conversations since 2021. Compare that to chats in WhatsApp and iMessage, which have been encrypted since the 2010s, and it's clear that RCS is fairly late to offer this security feature.

It's not yet evident exactly when Apple plans to enable support for RCS UP, though the statement said "later next year." Today's announcement just happens to fall on the deadline for companies to file challenges with the European Union's General Court. Apple is reportedly looking to challenge the EU's decision to put the entire App Store on a digital antitrust list under its Digital Markets Act.

In September, Apple launched the iPhone 15 and 15 Pro, the company's first phones to come with USB-C charging ports in place of Lightning. This week also brought news of the Qi 2 wireless charging standard's release, with the new iPhones among the first devices compatible with the updated protocol.

Whether it was brought on by EU regulations or other motivations, it's clear Apple is opening up parts of its walled garden to play nice with other devices. And maybe, just maybe, you won't have to "buy your mom an iPhone."

Update, November 16, 2023, 7:00PM ET: This story was updated with a statement from Google. 

Update, November 17, 2023, 1:25PM ET: This story was updated with additional context around the availability of end-to-end encryption on RCS and other messaging platforms.


Apple’s revenue declines again despite record iPhone and services sales

Apple's latest quarterly earnings report paints a picture of software wins amid something of a hardware slump. In a statement announcing the financial results for its fiscal fourth quarter, the company called out a new all-time high for revenue from its services division. It also highlighted iPhone revenue as having set a September quarter record. However, this marks the fourth consecutive quarter of overall revenue decline, with revenue of $89.5 billion representing a 1 percent drop year over year. This also means the record-breaking performances of the iPhone and Services divisions did little to offset weakness elsewhere.

The lackluster performance is somewhat understandable, though. The company just had a launch event for its new M3 chips, MacBooks and an iMac this week, none of which can be bought yet. And though the new iPhone 15 lineup and Apple Watches were introduced in September, sales of those devices likely did not account for much of this fiscal quarter’s results. We're also anticipating a November release for new iPads this year, which could further fuel hardware revenue. 

Correspondingly, the Mac, iPad and wearables divisions were down this quarter, with the first two taking noticeable hits. Though Apple drummed up significant interest with the Vision Pro headset earlier this year, that device is far from ready to be sold to the public and is unlikely to hit the market until 2024 at the earliest. With holiday shopping about to ramp up and more product releases on the horizon, the company’s hardware products will likely have a much greater impact on its bottom line next quarter.
