Yamaha’s new audio mixer for gamers has a simpler interface and cheaper price

Yamaha has a new audio mixer for gamers and streamers. Nearly three years after the launch of the company’s first gaming-specific soundboard, the new ZG02 adds a dedicated USB-C port for gaming consoles, a more streamlined profile and a lower price ($200 compared to the ZG01’s $300).

Yamaha wants to establish its ZG line as more straightforward and accessible than offerings from rival audio companies with gazes similarly fixed on the Twitch and Discord sets (Rode and Roland also court gamers and streamers). “While other game mixing solutions can be cumbersome and complex, the ZG02 offers a more tactile, compact and intuitive mixing experience with customizable effects controls,” Yamaha consumer audio director Alex Sadeghian wrote in a press release.

In addition to its on-the-fly sound, chat and voice adjustments, the Yamaha ZG02 includes competitive gaming “focus” modes and 3D surround sound. It has software-based mic settings and voice effects, including compression, limiter, reverb, pitch and a radio voice filter. In addition, you can assign your favorite shortcuts to its physical buttons in the (free) companion app for Windows and macOS.

The mixer supports gaming headsets with built-in or dedicated mics through an XLR / TRS combo jack on its rear (and has 48V of phantom power for condenser mics). It includes a “versatile USB interface” for Windows and macOS computers, and its console-specific USB-C port works with PS5 / PS4 and Nintendo Switch.

The ZG02 includes a USB driver that lets you route audio to two different apps (for example, Discord and OBS / Streamlabs). Yamaha also touts compatibility with the Elgato Stream Deck lineup. The $200 ZG02 is available starting today in the US at Yamaha’s website.

This article originally appeared on Engadget at https://www.engadget.com/yamahas-new-audio-mixer-for-gamers-has-a-simpler-interface-and-cheaper-price-140024235.html?src=rss

Microsoft’s legal department allegedly silenced an engineer who raised concerns about DALL-E 3

A Microsoft manager claims OpenAI’s DALL-E 3 has security vulnerabilities that could allow users to generate violent or explicit images (similar to those that recently targeted Taylor Swift). GeekWire reported Tuesday that Microsoft’s legal team blocked engineering leader Shane Jones’ attempts to alert the public about the exploit. The self-described whistleblower is now taking his message to Capitol Hill.

“I reached the conclusion that DALL·E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model,” Jones wrote to US Senators Patty Murray (D-WA) and Maria Cantwell (D-WA), Rep. Adam Smith (D-WA 9th District), and Washington state Attorney General Bob Ferguson (D). GeekWire published Jones’ full letter.

Jones claims he discovered an exploit allowing him to bypass DALL-E 3’s security guardrails in early December. He says he reported the issue to his superiors at Microsoft, who instructed him to “personally report the issue directly to OpenAI.” After doing so, he claims he learned that the flaw could allow the generation of “violent and disturbing harmful images.”

Jones then attempted to take his cause public in a LinkedIn post. “On the morning of December 14, 2023 I publicly published a letter on LinkedIn to OpenAI’s non-profit board of directors urging them to suspend the availability of DALL·E 3,” Jones wrote. “Because Microsoft is a board observer at OpenAI and I had previously shared my concerns with my leadership team, I promptly made Microsoft aware of the letter I had posted.”

[Image: A sample image (a storm in a teacup) generated by DALL-E 3. Credit: OpenAI]

Microsoft’s response was allegedly to demand he remove his post. “Shortly after disclosing the letter to my leadership team, my manager contacted me and told me that Microsoft’s legal department had demanded that I delete the post,” he wrote in his letter. “He told me that Microsoft’s legal department would follow up with their specific justification for the takedown request via email very soon, and that I needed to delete it immediately without waiting for the email from legal.”

Jones complied, but he says the more fine-grained response from Microsoft’s legal team never arrived. “I never received an explanation or justification from them,” he wrote. He says further attempts to learn more from the company’s legal department were ignored. “Microsoft’s legal department has still not responded or communicated directly with me,” he wrote.

An OpenAI spokesperson wrote to Engadget in an email, “We immediately investigated the Microsoft employee’s report when we received it on December 1 and confirmed that the technique he shared does not bypass our safety systems. Safety is our priority and we take a multi-pronged approach. In the underlying DALL-E 3 model, we’ve worked to filter the most explicit content from its training data including graphic sexual and violent content, and have developed robust image classifiers that steer the model away from generating harmful images.

“We’ve also implemented additional safeguards for our products, ChatGPT and the DALL-E API – including declining requests that ask for a public figure by name,” the OpenAI spokesperson continued. “We identify and refuse messages that violate our policies and filter all generated images before they are shown to the user. We use external expert red teaming to test for misuse and strengthen our safeguards.”

Meanwhile, a Microsoft spokesperson wrote to Engadget, “We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety. When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we recommended that the employee utilize so we could appropriately validate and test his concerns before escalating it publicly.”

“Since his report concerned an OpenAI product, we encouraged him to report through OpenAI’s standard reporting channels and one of our senior product leaders shared the employee’s feedback with OpenAI, who investigated the matter right away,” wrote the Microsoft spokesperson. “At the same time, our teams investigated and confirmed that the techniques reported did not bypass our safety filters in any of our AI-powered image generation solutions. Employee feedback is a critical part of our culture, and we are connecting with this colleague to address any remaining concerns he may have.”

Microsoft added that its Office of Responsible AI has established an internal reporting tool for employees to report and escalate concerns about AI models.

The whistleblower says the pornographic deepfakes of Taylor Swift that circulated on X last week are one illustration of what similar vulnerabilities could produce if left unchecked. 404 Media reported Monday that Microsoft Designer, which uses DALL-E 3 as a backend, was part of the toolset the deepfakers used to make the images. The publication claims Microsoft, after being notified, patched that particular loophole.

“Microsoft was aware of these vulnerabilities and the potential for abuse,” Jones concluded. It isn’t clear if the exploits used to make the Swift deepfake were directly related to those Jones reported in December.

Jones urges his representatives in Washington, DC, to take action. He suggests the US government create a system for reporting and tracking specific AI vulnerabilities — while protecting employees like him who speak out. “We need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public,” he wrote. “Concerned employees, like myself, should not be intimidated into staying silent.”

Update, January 30, 2024, 8:41 PM ET: This story has been updated to add statements to Engadget from OpenAI and Microsoft.

This article originally appeared on Engadget at https://www.engadget.com/microsofts-legal-department-allegedly-silenced-an-engineer-who-raised-concerns-about-dall-e-3-215953212.html?src=rss

Proposed California bill would let parents block algorithmic social feeds for children

California will float a pair of bills designed to protect children from social media addiction and preserve their private data. The Protecting Youth from Social Media Addiction Act (SB 976) and California Children’s Data Privacy Act (AB 1949) were introduced Monday by the state’s Attorney General Rob Bonta, State Senator Nancy Skinner and Assemblymember Buffy Wicks. The proposed legislation follows a CA child safety bill that was set to go into effect this year but is now on hold.

SB 976 could give parents the power to remove addictive algorithmic feeds from their children’s social channels. If passed, it would allow parents of children under 18 to choose between the default algorithmic feed — typically designed to create profitable addictions — and a less habit-forming chronological one. It would also let parents block all social media notifications and prevent their kids from accessing social platforms during nighttime and school hours.

“Social media companies have designed their platforms to addict users, especially our kids. Countless studies show that once a young person has a social media addiction, they experience higher rates of depression, anxiety, and low self-esteem,” California Senator Nancy Skinner (D-Berkeley) wrote in a press release. “We’ve waited long enough for social media companies to act. SB 976 is needed now to establish sensible guardrails so parents can protect their kids from these preventable harms.”

[Image: L to R: California AG Rob Bonta, State Senator Nancy Skinner and Assemblymember Buffy Wicks at a podium in a classroom. Credit: The Office of Nancy Skinner]

Meanwhile, AB 1949 would attempt to strengthen data privacy for CA children under 18. The bill’s language gives the state’s consumers the right to know what personal information social media companies collect and sell, and allows them to prevent the sale of their children’s data to third parties. Any exceptions would require “informed consent,” which must come from a parent for children under 13.

In addition, AB 1949 would close loopholes in the California Consumer Privacy Act (CCPA) that fail to protect the data of 17-year-olds effectively. The CCPA reserves its most robust protections for those under 16.

“This bill is a crucial step in our work to close the gaps in our privacy laws that have allowed tech giants to exploit and monetize our kids’ sensitive data with impunity,” wrote Wicks (D-Oakland).

The bills may be timed to coincide with a US Senate hearing (with five Big Tech CEOs in tow) on Wednesday covering children’s online safety. In addition, California is part of a 41-state coalition that sued Meta in October for harming children’s mental health. The Wall Street Journal reported in 2021 that internal Meta (Facebook at the time) documents described “tweens” as “a valuable but untapped audience.”

This article originally appeared on Engadget at https://www.engadget.com/proposed-california-bill-would-let-parents-block-algorithmic-social-feeds-for-children-220132956.html?src=rss

A new Deus Ex game was reportedly canceled amid Embracer’s crisis

Embracer Group, the Swedish holding company undergoing restructuring, has reportedly canceled a Deus Ex game. Bloomberg says developers had been working on the unannounced title for two years. Neither Embracer nor Eidos Montreal addressed the reported cancellation specifically, but both confirmed that the Deus Ex studio is laying off 97 employees.

Eidos will reportedly focus instead on “an original franchise.” Bloomberg’s sources say the Deus Ex game was scheduled to start production later this year. The franchise’s most recent mainline installment was 2016’s Deus Ex: Mankind Divided.

After aggressively growing through acquisitions during the pandemic, Embracer Group entered a turbulent period last year. The company announced a restructuring plan in June 2023 after an unnamed partner pulled out of a planned deal that would have brought in $2 billion over six years. Axios later reported the mysterious investor was Savvy Games Group, which the Saudi government funds.

In August, Embracer announced the closure of Volition, the studio behind the Saints Row series. The parent company laid off about 900 employees in September and another 50 workers at Chorus developer Fishlabs. Earlier this month, Embracer shuttered Lost Boys Interactive, makers of Tiny Tina’s Wonderlands — pinning the blame on “headwinds facing the industry right now.”

Embracer says the restructuring phase will run until the end of March. The company claims it will provide regular updates on the process, including when it publishes its next quarterly report on February 15.

Alongside the alleged Deus Ex cancellation, Eidos confirmed it let go of 97 employees from development teams, administration and support services. “The global economic context, the challenges of our industry and the comprehensive restructuring announced by Embracer have finally impacted our studio,” Eidos wrote in a statement.

This article originally appeared on Engadget at https://www.engadget.com/a-new-deus-ex-game-was-reportedly-canceled-amid-embracers-crisis-194919207.html?src=rss

Arzette, a love letter to the CD-i Zelda games, will also revive an awful controller

Arzette: The Jewel of Faramore is getting a controller worthy of its inspiration — for better or worse. The upcoming game, a spiritual successor to the infamous 1993 Zelda titles for the Philips CD-i, will launch with a limited edition controller that resembles one of the largely forgotten system’s original remotes.

Developer Seedy Eye Software (a homophone for “CD-i software”) says using the controller on “Classic Controls” mode will let you “play Arzette, as it might have played back in 1993.” That’s when the title’s inspirations — Link: The Faces of Evil and Zelda: The Wand of Gamelon — arrived for Philips’ (brief) stab at a game-changing home entertainment system. (A third title in the series, Zelda’s Adventure, launched in 1994 with a top-down view, and Philips discontinued the system four years later.)

One may wonder why a developer would want to pay homage to a pair of historical duds better known for their memed cutscenes and masochistic gameplay than, oh, fun. Earlier this month, creator Seth “Dopply” Fulkerson told Game Developer he saw “untapped potential” in the notorious titles.

“The limitations the games suffered thanks to the hardware, budget and time constraints became painfully obvious,” he said. “I found it very inspiring to see how much [director Dale DeSharone] and his team accomplished with so little. There is a handcrafted charm to the games. They are hand-animated, hand-drawn, with brilliant music, designed in a surprisingly non-linear way that encourages you to explore them.” He continued, “Though they have many, many flaws, I truly believe there is innate potential in the games. Making a new game in the same style, with improvements to the core gameplay, was an irresistible idea.”

The game brings back several contributors from the Zelda CD-i games, including artist Rob Dunlavey and voice actors Jeffrey Rath (Link) and Bonniejean Wilbur (Zelda).

[Image: Promotional gear for the upcoming game Arzette: The Jewel of Faramore. Credit: Seedy Eye Software / Limited Run Games]

The “retro-inspired” controller will only work with Switch and PC. (The game will also support PS5 / PS4 and Xbox.) The remote looks nearly identical to Philips’ paddle controller, except for a couple of extra buttons. Fair warning: given how the original remote played, the new version likely won’t make Arzette (or any other game) more playable or enjoyable — just more nostalgic.

Seedy Eye has partnered with Limited Run Games to distribute the controllers (and physical game copies). The two companies say they “worked hand-in-hand to craft the perfect physical goodies that pay tribute to Arzette and this oft-overlooked era in gaming.”

The controller will be available to pre-order starting February 2 at 10AM ET. (Pre-orders close on March 17.) The remote costs $35, will ship in a gray color and currently has an estimated November ship date. A pink variant will be exclusive to Arzette’s Collector’s Edition bundle. Arzette: The Jewel of Faramore arrives on February 14. You can watch its trailer below.

This article originally appeared on Engadget at https://www.engadget.com/arzette-a-love-letter-to-the-cd-i-zelda-games-will-also-revive-an-awful-controller-180450138.html?src=rss

Samsung’s AI features on the Galaxy S24 in China reportedly ditch Google for Baidu

The Samsung Galaxy S24 isn’t taking Google’s Gemini AI with it to China. CNBC reported Friday that the Chinese version of the flagship phone uses Baidu’s Ernie chatbot to power the phone’s AI-powered features. Ernie arrived last August after reportedly receiving Chinese government approval. 

“Now featuring Ernie’s understanding and generation capabilities, the upgraded Samsung Note Assistant can translate content and also summarize lengthy content into clear, intelligently organized formats at the click of a button, streamlining the organization of extensive text,” Baidu and Samsung told CNBC in a joint statement.

Samsung’s description of the Galaxy S24 series on its Chinese website advertises many of the same Google-powered features it debuted last week in its San Jose, CA, launch event. These include a version of Circle to Search, real-time call translation, a transcription helper and a photo assistant. The Chinese Galaxy S24 product pages don’t have any references to Google, which has limited operations in the country.

[Image: Samsung’s Chinese website for the Galaxy S24 series, highlighting its AI-powered features. Credit: Samsung]

A recent report suggests Apple recently ended Samsung’s 14-year run as the global smartphone shipment leader. In addition, IDC published data this week suggesting the iPhone maker claimed the top spot in the Chinese market (with a 17.3-percent market share) for the first time in 2023. Samsung didn’t make the top five.

Engadget has tried the Galaxy S24 series, including the standard, Plus and Ultra variants. Samsung’s 2024 flagship phone lineup launches in the US on January 31.

This article originally appeared on Engadget at https://www.engadget.com/samsungs-ai-features-on-the-galaxy-s24-in-china-reportedly-ditch-google-for-baidu-174503505.html?src=rss

How to turn on Stolen Device Protection on your iPhone to secure your data

Apple’s Stolen Device Protection is a new feature that protects iPhone data and makes it harder for thieves to wreak havoc. Introduced in iOS 17.3, the feature requires a combination of Face ID (or Touch ID) scans and time delays before using payment features or changing account security when the device is away from familiar locations. Here’s precisely how Stolen Device Protection works.

What is Stolen Device Protection for iPhone

Stolen Device Protection takes a bad situation — someone stealing your iPhone — and reduces the chance of it spiraling into something much worse. When activated, the feature will prompt you to perform a biometric scan (Face ID or Touch ID) when you’re away from familiar locations, like home or work. In those situations, it won’t allow you (or an iPhone snatcher) to use your passcode as a backup method. It also incorporates time delays for some security-related features.

The tool may have been inspired by a Wall Street Journal report from early 2023 about an increasingly common practice of thieves spying on users while entering their passcode — right before snatching the phone and taking off.

If the perp has both the phone and its passcode (without Stolen Device Protection activated), they could reset the Apple ID password, turn off Find My, possibly steal payment info or passwords and factory reset the iPhone. If they’re experts, they could theoretically do all that within minutes (if not seconds) before you can log onto Find My and report your device as lost.

With Stolen Device Protection turned on, a thief in the same situation would be largely stymied. Requiring Face ID or Touch ID and time delays would prevent them from accessing your passwords and payment information, changing security features (to lock you out and further hijack your device) and factory resetting it. This gives you precious time to find another device, report your phone as lost in Find My, change your password and file a police report.

How does it work?

Stolen Device Protection requires a biometric (Face ID / Touch ID) scan — without the passcode as a backup option — for the following situations when your phone is away from your familiar locations:

  • Turning off Lost Mode

  • Performing a factory reset (“Erase all content and settings”)

  • Using or stealing saved passwords or passkeys for online accounts

  • Using payment methods saved for “autofill” in Safari

  • Using your phone to activate a new Apple device (Quick Start)

  • Viewing your Apple Card’s virtual card number

  • Applying for a new Apple Card

  • “Certain Apple Cash and Savings actions in Wallet” (examples include transferring money to or from Apple Cash or Savings)

In addition, the following actions require an extra time delay. With Stolen Device Protection activated, if someone away from your familiar locations tried to do anything on the list below, they would have to perform a Face ID (or Touch ID) scan, wait an hour and authenticate again with a second biometric scan:

  • Turning off Find My

  • Turning off Stolen Device Protection

  • Changing your Apple ID password

  • Signing out of your Apple ID

  • Adding or removing Face ID or Touch ID

  • Changing your phone’s passcode

  • Changing Apple ID account security (examples include creating a Recovery Key / Recovery Contact or adding / removing a trusted device)

  • Resetting all the phone’s settings

One thing missing from the list is Apple Pay. Someone with your stolen iPhone and passcode could still make Apple Pay purchases using only your passcode, which isn’t ideal.

How to turn on Stolen Device Protection

Before activating the feature, make sure your device is updated to iOS 17.3 (or higher). Head to Settings > General > Software Update on your iPhone to check for updates and ensure you’re on the latest software. (If your device is stuck on pre-iOS 17 software and won’t update past that, your model is too old to run the latest software.)

Once you’re running (at least) iOS 17.3, do the following on your iPhone:

  1. Open the Settings app

  2. Scroll down and tap Face ID & Passcode (it will be called Touch ID & Passcode on older models and the iPhone SE)

  3. Enter your passcode

  4. Scroll down until you see Stolen Device Protection

  5. Tap Turn On Protection

If you ever want to deactivate the feature, follow the same steps — except you’d tap Turn Off Protection in step five and confirm the change with a Face ID or Touch ID scan.

For more on the latest iPhone features, you can check out Engadget’s review of the latest models and our in-depth review of iOS 17.

This article originally appeared on Engadget at https://www.engadget.com/how-to-turn-on-stolen-device-protection-on-your-iphone-to-secure-your-data-182721345.html?src=rss
