The Humane AI Pin is the solution to none of technology’s problems

I’ve found myself at a loss for words when trying to explain the Humane AI Pin to my friends. The best description so far is that it’s a combination of a wearable Siri button with a camera and built-in projector that beams onto your palm. But each time I start explaining that, I get so caught up in pointing out its problems that I never really get to fully detail what the AI Pin can do. Or is meant to do, anyway.

Yet, words are crucial to the Humane AI experience. Your primary mode of interacting with the Pin is through voice, accompanied by touch and gestures. Without speaking, your options are severely limited. The company describes the device as your “second brain,” but the combination of holding out my hand to see the projected screen, waving it around to navigate the interface and tapping my chest and waiting for an answer all just made me look really stupid. When I remember that I was actually eager to spend $700 of my own money to get a Humane AI Pin, not to mention shell out the required $24 a month for the AI and the company’s 4G service riding on T-Mobile’s network, I feel even sillier.

What is the Humane AI Pin?

In the company’s own words, the Humane AI Pin is the “first wearable device and software platform built to harness the full power of artificial intelligence.” If that doesn’t clear it up, well, I can’t blame you.

There are basically two parts to the device: the Pin and its magnetic attachment. The Pin is the main piece, which houses a touch-sensitive panel on its face, with a projector, camera, mic and speakers lining its top edge. It’s about the same size as an Apple Watch Ultra 2, both measuring about 44mm (1.73 inches) across. The Humane wearable is slightly squatter, though, with its 47.5mm (1.87 inches) height compared to the Watch Ultra’s 49mm (1.92 inches). It’s also half the weight of Apple’s smartwatch, at 34.2 grams (1.2 ounces).

The top of the AI Pin is slightly thicker than the bottom, since it has to contain extra sensors and indicator lights, but it’s still about the same depth as the Watch Ultra 2. Snap on a magnetic attachment, and you add about 8mm (0.31 inches). There are a few accessories available, with the most useful being the included battery booster. You’ll get two battery boosters in the “complete system” when you buy the Humane AI Pin, as well as a charging cradle and case. The booster helps clip the AI Pin to your clothes while adding some extra hours of life to the device (in theory, anyway). It also brings an extra 20 grams (0.7 ounces) with it, but even including that the AI Pin is still 10 grams (0.35 ounces) lighter than the Watch Ultra 2.

That weight (or lack thereof) is important, since anything too heavy would drag down on your clothes, which would not only be uncomfortable but also block the Pin’s projector from functioning properly. If you're wearing it with a thinner fabric, by the way, you’ll have to use the latch accessory, a $40 plastic tile that provides no additional power, instead of the booster. You can also get the stainless steel clip that Humane sells for $50 to stick it onto heavier materials or belts and backpacks. Whichever accessory you choose, though, you’ll place it on the underside of your garment and stick the Pin on the outside to connect the pieces.

Humane AI Pin
Hayato Huseman for Engadget

How the AI Pin works

But you might not want to place the AI Pin on a bag, as you need to tap on it to ask a question or pull up the projected screen. Every interaction with the device begins with touching it (there is no wake word), so having it out of reach sucks.

Tap and hold on the touchpad, ask a question, then let go and wait a few seconds for the AI to answer. You can hold out your palm to read what it said, bringing your hand closer to and further from your chest to toggle through elements. To jump through individual cards and buttons, you’ll have to tilt your palm up or down, which can get in the way of seeing what’s on display. But more on that in a bit.

There are some built-in gestures offering shortcuts to functions like taking a picture or video or controlling music playback. Double-tapping the Pin with two fingers will snap a shot, while double-tapping and holding at the end will trigger a 15-second video. Swiping up or down adjusts the device or Bluetooth headphone volume while the assistant is talking or when music is playing, too.

Side view of the Humane AI Pin held in mid-air in front of some green foliage and a red brick building.
Cherlynn Low for Engadget

Each person who orders the Humane AI Pin will have to set up an account and go through onboarding on the website before the company will ship out their unit. Part of this process includes signing into your Google or Apple accounts to port over contacts, as well as watching a video that walks you through those gestures I described. Your Pin will arrive already linked to your account with its eSIM and phone number sorted. This likely simplifies things so users won’t have to fiddle with tedious steps like installing a SIM card or signing into their profiles. It felt a bit strange, but it’s a good thing because, as I’ll explain in a bit, trying to enter a password on the AI Pin is a real pain.

Talking to the Humane AI Pin

The easiest way to interact with the AI Pin is by talking to it. It’s supposed to feel natural, like you’re talking to a friend or assistant, and you shouldn’t have to feel forced when asking it for help. Unfortunately, that just wasn’t the case in my testing.

When the AI Pin did understand me and answer correctly, it usually took a few seconds to reply, in which time I could have already gotten the same results on my phone. For a few things, like adding items to my shopping list or converting Canadian dollars to USD, it performed adequately. But “adequate” seems to be the best case scenario.

Sometimes the answers were too long or irrelevant. When I asked “Should I watch Dream Scenario,” it said “Dream Scenario is a 2023 comedy/fantasy film featuring Nicolas Cage, with positive ratings on IMDb, Rotten Tomatoes and Metacritic. It’s available for streaming on platforms like YouTube, Hulu and Amazon Prime Video. If you enjoy comedy and fantasy genres, it may be worth watching.”

Setting aside the fact that the “answer” to my query came after a lot of preamble I found unnecessary, I also just didn’t find the recommendation satisfying. It wasn’t giving me a straight answer, which is understandable, but ultimately none of what it said felt different from scanning the top results of a Google search. I would have gleaned more info had I looked the film up on my phone, since I’d be able to see the actual Rotten Tomatoes and Metacritic scores.

To be fair, the AI Pin was smart enough to understand follow-ups like “How about The Witch” without needing me to repeat my original question. But it’s 2024; we’re way past assistants that need so much hand-holding.

A screenshot showing the data stored on the Humane AI Pin's web portal.

We’re also past the days of needing to word our requests in specific ways for AI to understand us. Though Humane has said you can speak to the pin “naturally,” there are some instances when that just didn’t work. First, it occasionally misheard me, even in my quiet living room. When I asked “Would I like YouTuber Danny Gonzalez,” it thought I said “would I like YouTube do I need Gonzalez” and responded “It’s unclear if you would like Dulce Gonzalez as the content of their videos and channels is not specified.”

When I repeated myself by carefully saying “I meant Danny Gonzalez,” the AI Pin spouted back facts about the YouTuber’s life and work, but did not answer my original question.

That’s not as bad as the fact that when I tried to get the Pin to describe what was in front of me, it simply would not. Humane has a Vision feature in beta that’s meant to let the AI Pin use its camera to see and analyze things in view, but when I tried to get it to look at my messy kitchen island, nothing happened. I’d ask “What’s in front of me” or “What am I holding out in front of you” or “Describe what’s in front of me,” which is how I’d phrase this request naturally. I tried so many variations of this, including “What am I looking at” and “Is there an octopus in front of me,” to no avail. I even took a photo and asked “can you describe what’s in that picture.”

Every time, I was told “Your AI Pin is not sure what you’re referring to” or “This question is not related to AI Pin” or, in the case where I first took a picture, “Your AI Pin is unable to analyze images or describe them.” I was confused why this wasn’t working even after I double checked that I had opted in and enabled the feature, and finally realized after checking the reviewers' guide that I had to use prompts that started with the word “Look.”

Look, maybe everyone else would have instinctively used that phrasing. But if you’re like me and didn’t, you’ll probably give up and never use this feature again. Even after I learned how to properly phrase my Vision requests, they were still clunky as hell. It was never as easy as “Look for my socks” but required two-part sentences like “Look at my room and tell me if there are boots in it” or “Look at this thing and tell me how to use it.”

A screenshot showing recent queries with the Humane AI Pin.

When I worded things just right, results were fairly impressive. It confirmed there was a “Lysol can on the top shelf of the shelving unit” and a “purple octopus on top of the brown cabinet.” I held out a cheek highlighter and asked what to do with it. The AI Pin accurately told me “The Carry On 2 cream by BYBI Beauty can be used to add a natural glow to skin,” among other things, although it never explicitly told me to apply it to my face. I asked it where an object I was holding came from, and it just said “The image is of a hand holding a bag of mini eggs. The bag is yellow with a purple label that says ‘mini eggs.’” Again, it didn't answer my actual question.

Humane’s AI, which is powered by a mix of OpenAI’s recent versions of GPT and other sources including its own models, just doesn’t feel fully baked. It’s like a robot pretending to be sentient — capable of indicating it sort of knows what I’m asking, but incapable of delivering a direct answer.

My issues with the AI Pin’s language model and features don’t end there. Sometimes it just refuses to do what I ask of it, like restart or shut down. Other times it does something entirely unexpected. When I said “Send a text message to Julian Chokkattu,” who’s a friend and fellow AI Pin reviewer over at Wired, I thought I’d be asked what I wanted to tell him. Instead, the device simply said OK and told me it sent the words “Hey Julian, just checking in. How's your day going?” to Chokkattu. I've never said anything like that to him in our years of friendship, but I guess technically the AI Pin did do what I asked.

Humane AI Pin
Hayato Huseman for Engadget

Using the Humane AI Pin’s projector display

If only voice interactions were the worst thing about the Humane AI Pin, but the list of problems only starts there. I was most intrigued by the company’s “pioneering Laser Ink display” that projects green rays onto your palm, as well as the gestures that enabled interaction with “onscreen” elements. But my initial wonder quickly gave way to frustration and a dull ache in my shoulder. It might be tiring to hold up your phone to scroll through Instagram, but at least you can set that down on a table and continue browsing. With the AI Pin, if your arm is not up, you’re not seeing anything.

Then there’s the fact that it’s a pretty small canvas. I would see about seven lines of text each time, with about one to three words on each row depending on the length. This meant I had to hold my hand up even longer so I could wait for notifications to finish scrolling through. I also have a smaller palm than some other reviewers I saw while testing the AI Pin. Julian over at Wired has a larger hand and I was downright jealous when I saw he was able to fit the entire projection onto his palm, whereas the contents of my display would spill over onto my fingers, making things hard to read.

It’s not just those of us afflicted with tiny palms who will find the AI Pin tricky to see. Step outside and you’ll have a hard time reading the faint projection. Even on a cloudy, rainy day in New York City, I could barely make out the words on my hands.

When you can read what’s on the screen, interacting with it might make you want to rip your eyes out. Like I said, you’ll have to move your palm closer to and further from your chest to select the right cards to enter your passcode. It’s a bit like dialing a rotary phone, with cards for individual digits from 0 to 9. Go further away to get to the higher numbers and the backspace button, and come back for the smaller ones.

This gesture is smart in theory but it’s very sensitive. There’s a very small range of usable space since there is only so far your hand can go, so the distance between each digit is fairly small. One wrong move and you’ll accidentally select something you didn’t want and have to go all the way out to delete it. To top it all off, moving my arm around while doing that causes the Pin to flop about, meaning the screen shakes on my palm, too. On average, unlocking my Pin, which involves entering a four-digit passcode, took me about five seconds.

On its own, this doesn’t sound so bad, but bear in mind that you’ll have to re-enter this each time you disconnect the Pin from the booster, latch or clip. It’s currently springtime in New York, which means I’m putting on and taking off my jacket over and over again. Every time I go inside or out, I move the Pin to a different layer and have to look like a confused long-sighted tourist reading my palm at various distances. It’s not fun.

Of course, you can turn off the setting that requires password entry each time you remove the Pin, but that’s simply not great for security.

Though Humane says “privacy and transparency are paramount with AI Pin,” by its very nature the device isn’t suitable for performing confidential tasks unless you’re alone. You don’t want to dictate a sensitive message to your accountant or partner in public, nor might you want to speak your Wi-Fi password out loud.

The latter is one of two input methods for setting up an internet connection, by the way. If you choose not to spell your Wi-Fi key out loud, you can go to the Humane website to type in your network name (you spell it out yourself rather than pick from a list of available networks) and password to generate a QR code for the Pin to scan. Having to verbally relay alphanumeric characters to the Pin is not ideal, and though the QR code technically works, it just involves too much effort. It’s like giving someone a spork when they asked for a knife and fork: good enough to get by, but not a perfect replacement.

The Humane AI Pin held in mid-air in front of some bare trees and a street with red brick buildings on it.
Cherlynn Low for Engadget

The Humane AI Pin’s speaker

Since communicating through speech is the easiest means of using the Pin, you’ll need to be verbal and have hearing. If you choose not to raise your hand to read the AI Pin’s responses, you’ll have to listen for it. The good news is, the onboard speaker is usually loud enough for most environments, and I only struggled to hear it on NYC streets with heavy traffic passing by. I never attempted to talk to it on the subway, however, nor did I obnoxiously play music from the device while I was outside.

In my office and gym, though, I did get the AI Pin to play some songs. The music sounded fine — I didn’t get thumping bass or particularly crisp vocals, but I could hear instruments and crooners easily. Compared to my iPhone 15 Pro Max, it’s a bit tinny, as expected, but not drastically worse.

The problem is there are, once again, some caveats. The most important of these is that at the moment, you can only use Tidal’s paid streaming service with the Pin. You’ll get 90 days free with your purchase, and then have to pay $11 a month (on top of the $24 you already give to Humane) to continue streaming tunes from your Pin. Humane hasn’t said yet if other music services will eventually be supported, either, so unless you’re already on Tidal, listening to music from the Pin might just not be worth the price. Annoyingly, Tidal also doesn’t have the extensive library that competing providers do, so I couldn’t even play songs like Beyonce’s latest album or Taylor Swift’s discography (although remixes of her songs were available).

Though Humane has described its “personic speaker” as being able to create a “bubble of sound,” that “bubble” certainly has a permeable membrane. People around you will definitely hear what you’re playing, so unless you’re trying to start a dance party, it might be too disruptive to use the AI Pin for music without pairing Bluetooth headphones. You’ll also probably get better sound quality from Bose, Beats or AirPods anyway.

The Humane AI Pin camera experience

I’ll admit it — a large part of why I was excited for the AI Pin is its onboard camera. My love for taking photos is well-documented, and with the Pin, snapping a shot is supposed to be as easy as double-tapping its face with two fingers. I was even ready to put up with subpar pictures from its 13-megapixel sensor for the ability to quickly capture a scene without having to first whip out my phone.

Sadly, the Humane AI Pin was simply too slow and feverish to deliver on that premise. I frequently ran into times when, after taking a bunch of photos and holding my palm up to see how each snap turned out, the device would get uncomfortably warm. At least twice in my testing, the Pin just shouted “Your AI Pin is too warm and needs to cool down” before shutting down.

A sample image from the Humane AI Pin's 13-megapixel camera, showing a tree-lined path in a park.
A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

Even when it’s running normally, using the AI Pin’s camera is slow. I’d double tap it and then have to stand still for at least three seconds before it would take the shot. I appreciate that there’s audio and visual feedback through the flashing green lights and the sound of a shutter clicking when the camera is going, so both you and people around know you’re recording. But it’s also a reminder of how long I need to wait — the “shutter” sound will need to go off thrice before the image is saved.

I took photos and videos in various situations under different lighting conditions, from a birthday dinner in a dimly lit restaurant to a beautiful park on a cloudy day. I recorded some workout footage in my building’s gym with large windows, and in general anything taken with adequate light looked good enough to post. The videos might make viewers a little motion sick, since the camera was clipped to my sports bra and moved around with me, but that’s tolerable.

In dark environments, though, forget about it. Even my Nokia E7 from 2012 delivered clearer pictures, most likely because I could hold it steady while framing a shot. The photos of my friends at dinner were so grainy, one person even seemed translucent. To my knowledge, that buddy is not a ghost, either.

A sample image from the Humane AI Pin's 13-megapixel camera, showing a group of people sitting around a table in a dimly lit restaurant. One person is staring at the camera with his chin resting on the back of his hand. The photo is fuzzy and grainy.
A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

To its credit, Humane’s camera has a generous 120-degree field of view, meaning you’ll capture just about anything in front of you. When you’re not sure if you’ve gotten your subject in the picture, you can hold up your palm after taking the shot, and the projector will beam a monochromatic preview so you can verify. It’s not really for you to admire your skilled composition or level of detail, and more just to see that you did indeed manage to get the receipt in view before moving on.

Cosmos OS on the Humane AI Pin

When it comes time to retrieve those pictures off the AI Pin, you’ll just need to navigate to humane.center in any browser and sign in. There, you’ll find your photos and videos under “Captures,” your notes, recently played music and calls, as well as every interaction you’ve had with the assistant. That last one made recalling every weird exchange with the AI Pin for this review very easy.

You’ll have to make sure the AI Pin is connected to Wi-Fi and power, and is at least 50 percent charged, before full-resolution photos and videos will upload to the dashboard. But before that, you can still scroll through previews in a gallery, even though you can’t download or share them.

The web portal is fairly rudimentary, with large square tiles serving as cards for sections like “Captures,” “Notes” and “My Data.” Going through them just shows you things you’ve saved or asked the Pin to remember, like a friend’s favorite color or their birthday. Importantly, there isn’t an area for you to view your text messages, so if you wanted to type out a reply from your laptop instead of dictating to the Pin, sorry, you can’t. The only way to view messages is by putting on the Pin, pulling up the screen and navigating the onboard menus to find them.

Humane AI Pin interface
Hayato Huseman for Engadget

That brings me to what you see on the AI Pin’s visual interface. If you’ve raised your palm right after asking it something, you’ll see your answer in text form. But if you had brought up your hand after unlocking or tapping the device, you’ll see its barebones home screen. This contains three main elements — a clock widget in the middle, the word “Nearby” in a bubble at the top and notifications at the bottom. Tilting your palm scrolls through these, and you can pinch your index finger and thumb together to select things.

Push your hand further back and you’ll bring up a menu with five circles that will lead you to messages, phone, settings, camera and media player. You’ll need to tilt your palm to scroll through these, but because they’re laid out in a ring, it’s not as straightforward as simply aiming up or down. Trying to get the right target here was one of the greatest challenges I encountered while testing the AI Pin. I was rarely able to land on the right option on my first attempt. That, along with the fact that you have to put on the Pin (and unlock it), made it so difficult to see messages that I eventually just gave up looking at texts I received.

The Humane AI Pin overheating, in use and battery life

One reason I sometimes took off the AI Pin is that it would frequently get too warm and need to “cool down.” Once I removed it, I would not feel the urge to put it back on. I did wear it a lot in the first few days I had it, typically from 7:45AM when I headed out to the gym till evening, depending on what I was up to. Usually at about 3PM, after taking a lot of pictures and video, I would be told my AI Pin’s battery was running low, and I’d need to swap out the battery booster. This didn’t seem to work sometimes, with the Pin dying before it could get enough power through the accessory. At first it appeared the device simply wouldn’t detect the booster, but I later learned it’s just slow and can take up to five minutes to recognize a newly attached booster.

When I wore the AI Pin to my friend (and fellow reviewer) Michael Fisher’s birthday party just hours after unboxing it, I had it clipped to my tank top just hovering above my heart. Because it was so close to the edge of my shirt, I would accidentally brush past it a few times when reaching for a drink or resting my chin on my palm a la The Thinker. Normally, I wouldn’t have noticed the Pin, but as it was running so hot, I felt burned every time my skin came into contact with its chrome edges. The touchpad also grew warm with use, and the battery booster resting against my chest also got noticeably toasty (though it never actually left a mark).

Humane AI Pin
Hayato Huseman for Engadget

Part of the reason the AI Pin ran so hot is likely that there’s not a lot of room for the heat generated by its octa-core Snapdragon processor to dissipate. I had also been using it near constantly to show my companions the pictures I had taken, and Humane has said its laser projector is “designed for brief interactions (up to six to nine minutes), not prolonged usage” and that it had “intentionally set conservative thermal limits for this first release that may cause it to need to cool down.” The company added that it not only plans to “improve uninterrupted run time in our next software release,” but also that it’s “working to improve overall thermal performance in the next software release.”

There are other things I need Humane to address via software updates ASAP. The fact that its AI sometimes decides not to do what I ask, like telling me “Your AI Pin is already running smoothly, no need to restart” when I asked it to restart, is not only surprising but limiting. There are no hardware buttons to turn the Pin on or off, and the only other way to trigger a restart is to pull up the dreaded screen, painstakingly go to the menu, hopefully land on settings and find the Power option. By which point, if the Pin hasn’t shut down, my arm will have.

A lot of my interactions with the AI Pin also felt like problems I encountered with earlier versions of Siri, Alexa and the Google Assistant. The overly wordy answers, for example, or the pronounced two- or three-second delay before a response, are all reminiscent of the early 2010s. When I asked the AI Pin to “remember that I parked my car right here,” it just saved a note saying “Your car is parked right here,” with no GPS information and no way to navigate back. So I guess I parked my car on a sticky note.

To be clear, that’s not something that Humane ever said the AI Pin can do, but it feels like such an easy thing to offer, especially since the device does have onboard GPS. Google’s made entire lines of bags and Levi’s jackets that serve the very purpose of dropping pins to revisit places later. If your product is meant to be smart and revolutionary, it should at least be able to do what its competitors already can, not to mention offer features they don’t.

A screenshot of the Humane AI Pin's web portal, showing previous requests.
Screenshot

One singular thing that the AI Pin actually manages to do competently is act as an interpreter. After you ask it to “translate to [x language],” you’ll have to hold down two fingers while you talk, let go and it will read out what you said in the relevant tongue. I tried talking to myself in English and Mandarin, and was frankly impressed with not only the accuracy of the translation and general vocal expressiveness, but also at how fast responses came through. You don’t even need to specify the language the speaker is using. As long as you’ve set the target language, the person talking in Mandarin will be translated to English and the words said in English will be read out in Mandarin.

It’s worth considering the fact that using the AI Pin is a nightmare for anyone who gets self-conscious. I’m pretty thick-skinned, but even I tried to hide the fact that I had a strange gadget with a camera pinned to my person. Luckily, I didn’t get any obvious stares or confrontations, but I heard from my fellow reviewers that they did. And as much as I like the idea of a second brain I can wear and offload little notes and reminders to, nothing that the AI Pin does well is actually executed better than a smartphone.

Wrap-up

Not only is the Humane AI Pin slow, finicky and barely even smart, using it made me look pretty dumb. In a few days of testing, I went from being excited to show it off to my friends to not having any reason to wear it.

Humane’s vision was ambitious, and the laser projector initially felt like a marvel. At first glance, it looked and felt like a refined product. But it just seems like at every turn, the company had to come up with solutions to problems it created. No screen or keyboard to enter your Wi-Fi password? No worries, use your phone or laptop to generate a QR code. Want to play music? Here you go, a 90-day subscription to Tidal, but you can only play music on that service.

The company promises to make software updates that could improve some issues, and the few tweaks my unit received during this review did make some things (like music playback) work better. The problem is that as it stands, the AI Pin doesn’t do enough to justify its $700 and $24-a-month price, and I simply cannot recommend anyone spend this much money for the one or two things it does adequately. 

Maybe in time, the AI Pin will be worth revisiting, but it’s hard to imagine why anyone would need a screenless AI wearable when so many devices exist today that you can use to talk to an assistant. From speakers and phones to smartwatches and cars, the world is full of useful AI access points that allow you to ditch a screen. Humane says it’s committed to a “future where AI seamlessly integrates into every aspect of our lives and enhances our daily experiences.” 

After testing the company’s AI Pin, that future feels pretty far away.

This article originally appeared on Engadget at https://www.engadget.com/the-humane-ai-pin-is-the-solution-to-none-of-technologys-problems-120002469.html?src=rss

The Humane AI Pin is the solution to none of technology’s problems

I’ve found myself at a loss for words when trying to explain the Humane AI Pin to my friends. The best description so far is that it’s a combination of a wearable Siri button with a camera and built-in projector that beams onto your palm. But each time I start explaining that, I get so caught up in pointing out its problems that I never really get to fully detail what the AI Pin can do. Or is meant to do, anyway.

Yet, words are crucial to the Humane AI experience. Your primary mode of interacting with the pin is through voice, accompanied by touch and gestures. Without speaking, your options are severely limited. The company describes the device as your “second brain,” but the combination of holding out my hand to see the projected screen, waving it around to navigate the interface and tapping my chest and waiting for an answer all just made me look really stupid. When I remember that I was actually eager to spend $700 of my own money to get a Humane AI Pin, not to mention shell out the required $24 a month for the AI and the company’s 4G service riding on T-Mobile’s network, I feel even sillier.

What is the Humane AI Pin?

In the company’s own words, the Humane AI Pin is the “first wearable device and software platform built to harness the full power of artificial intelligence.” If that doesn’t clear it up, well, I can’t blame you.

There are basically two parts to the device: the Pin and its magnetic attachment. The Pin is the main piece, which houses a touch-sensitive panel on its face, with a projector, camera, mic and speakers lining its top edge. It’s about the same size as an Apple Watch Ultra 2, both measuring about 44mm (1.73 inches) across. The Humane wearable is slightly squatter, though, with its 47.5mm (1.87 inches) height compared to the Watch Ultra’s 49mm (1.92 inches). It’s also half the weight of Apple’s smartwatch, at 34.2 grams (1.2 ounces).

The top of the AI Pin is slightly thicker than the bottom, since it has to contain extra sensors and indicator lights, but it’s still about the same depth as the Watch Ultra 2. Snap on a magnetic attachment, and you add about 8mm (0.31 inches). There are a few accessories available, with the most useful being the included battery booster. You’ll get two battery boosters in the “complete system” when you buy the Humane AI Pin, as well as a charging cradle and case. The booster helps clip the AI Pin to your clothes while adding some extra hours of life to the device (in theory, anyway). It also brings an extra 20 grams (0.7 ounces) with it, but even including that the AI Pin is still 10 grams (0.35 ounces) lighter than the Watch Ultra 2.

That weight (or lack thereof) is important, since anything too heavy would drag down on your clothes, which would not only be uncomfortable but also block the Pin’s projector from functioning properly. If you're wearing it with a thinner fabric, by the way, you’ll have to use the latch accessory instead of the booster, which is a $40 plastic tile that provides no additional power. You can also get the stainless steel clip that Humane sells for $50 to stick it onto heavier materials or belts and backpacks. Whichever accessory you choose, though, you’ll place it on the underside of your garment and stick the Pin on the outside to connect the pieces.

Humane AI Pin
Hayato Huseman for Engadget

How the AI Pin works

But you might not want to place the AI Pin on a bag, as you need to tap on it to ask a question or pull up the projected screen. Every interaction with the device begins with a touch; there is no wake word, so having it out of reach sucks.

Tap and hold on the touchpad, ask a question, then let go and wait a few seconds for the AI to answer. You can hold out your palm to read what it said, bringing your hand closer to and further from your chest to toggle through elements. To jump through individual cards and buttons, you’ll have to tilt your palm up or down, which can get in the way of seeing what’s on display. But more on that in a bit.

There are some built-in gestures offering shortcuts to functions like taking a picture or video or controlling music playback. Double-tapping the Pin with two fingers snaps a shot, while double-tapping and holding at the end triggers a 15-second video. Swiping up or down adjusts the volume on the device or on connected Bluetooth headphones while the assistant is talking or when music is playing.

Side view of the Humane AI Pin held in mid-air in front of some green foliage and a red brick building.
Cherlynn Low for Engadget

Each person who orders the Humane AI Pin will have to set up an account and go through onboarding on the website before the company will ship out their unit. Part of this process includes signing into your Google or Apple accounts to port over contacts, as well as watching a video that walks you through those gestures I described. Your Pin will arrive already linked to your account with its eSIM and phone number sorted. This likely simplifies things so users won’t have to fiddle with tedious steps like installing a SIM card or signing into their profiles. It felt a bit strange, but it’s a good thing because, as I’ll explain in a bit, trying to enter a password on the AI Pin is a real pain.

Talking to the Humane AI Pin

The easiest way to interact with the AI Pin is by talking to it. It’s supposed to feel natural, like you’re talking to a friend or assistant, and you shouldn’t have to feel forced when asking it for help. Unfortunately, that just wasn’t the case in my testing.

When the AI Pin did understand me and answer correctly, it usually took a few seconds to reply, in which time I could have already gotten the same results on my phone. For a few things, like adding items to my shopping list or converting Canadian dollars to USD, it performed adequately. But “adequate” seems to be the best-case scenario.

Sometimes the answers were too long or irrelevant. When I asked “Should I watch Dream Scenario,” it said “Dream Scenario is a 2023 comedy/fantasy film featuring Nicolas Cage, with positive ratings on IMDb, Rotten Tomatoes and Metacritic. It’s available for streaming on platforms like YouTube, Hulu and Amazon Prime Video. If you enjoy comedy and fantasy genres, it may be worth watching.”

Setting aside the fact that the “answer” to my query came after a lot of preamble I found unnecessary, I also just didn’t find the recommendation satisfying. It wasn’t giving me a straight answer, which is understandable, but ultimately none of what it said felt different from scanning the top results of a Google search. I would have gleaned more info had I looked the film up on my phone, since I’d be able to see the actual Rotten Tomatoes and Metacritic scores.

To be fair, the AI Pin was smart enough to understand follow-ups like “How about The Witch” without needing me to repeat my original question. But it’s 2024; we’re way past assistants that need so much hand-holding.

A screenshot showing the data stored on the Humane AI Pin’s web portal.

We’re also past the days of needing to word our requests in specific ways for AI to understand us. Though Humane has said you can speak to the pin “naturally,” there are some instances when that just didn’t work. First, it occasionally misheard me, even in my quiet living room. When I asked “Would I like YouTuber Danny Gonzalez,” it thought I said “would I like YouTube do I need Gonzalez” and responded “It’s unclear if you would like Dulce Gonzalez as the content of their videos and channels is not specified.”

When I repeated myself by carefully saying “I meant Danny Gonzalez,” the AI Pin spouted back facts about the YouTuber’s life and work, but did not answer my original question.

That’s not as bad as the fact that when I tried to get the Pin to describe what was in front of me, it simply would not. Humane has a Vision feature in beta that’s meant to let the AI Pin use its camera to see and analyze things in view, but when I tried to get it to look at my messy kitchen island, nothing happened. I’d ask “What’s in front of me” or “What am I holding out in front of you” or “Describe what’s in front of me,” which is how I’d phrase this request naturally. I tried so many variations of this, including “What am I looking at” and “Is there an octopus in front of me,” to no avail. I even took a photo and asked “can you describe what’s in that picture.”

Every time, I was told “Your AI Pin is not sure what you’re referring to” or “This question is not related to AI Pin” or, in the case where I first took a picture, “Your AI Pin is unable to analyze images or describe them.” I was confused why this wasn’t working even after I double checked that I had opted in and enabled the feature, and finally realized after checking the reviewers' guide that I had to use prompts that started with the word “Look.”

Look, maybe everyone else would have instinctively used that phrasing. But if you’re like me and didn’t, you’ll probably give up and never use this feature again. Even after I learned how to properly phrase my Vision requests, they were still clunky as hell. It was never as easy as “Look for my socks” but required two-part sentences like “Look at my room and tell me if there are boots in it” or “Look at this thing and tell me how to use it.”

A screenshot showing recent queries with the Humane AI Pin.

When I worded things just right, results were fairly impressive. It confirmed there was a “Lysol can on the top shelf of the shelving unit” and a “purple octopus on top of the brown cabinet.” I held out a cheek highlighter and asked what to do with it. The AI Pin accurately told me “The Carry On 2 cream by BYBI Beauty can be used to add a natural glow to skin,” among other things, although it never explicitly told me to apply it to my face. I asked it where an object I was holding came from, and it just said “The image is of a hand holding a bag of mini eggs. The bag is yellow with a purple label that says ‘mini eggs.’” Again, it didn't answer my actual question.

Humane’s AI, which is powered by a mix of OpenAI’s recent versions of GPT and other sources including its own models, just doesn’t feel fully baked. It’s like a robot pretending to be sentient — capable of indicating it sort of knows what I’m asking, but incapable of delivering a direct answer.

My issues with the AI Pin’s language model and features don’t end there. Sometimes it just refuses to do what I ask of it, like restart or shut down. Other times it does something entirely unexpected. When I said “Send a text message to Julian Chokkattu,” who’s a friend and fellow AI Pin reviewer over at Wired, I thought I’d be asked what I wanted to tell him. Instead, the device simply said OK and told me it sent the words “Hey Julian, just checking in. How's your day going?” to Chokkattu. I've never said anything like that to him in our years of friendship, but I guess technically the AI Pin did do what I asked.

Humane AI Pin
Hayato Huseman for Engadget

Using the Humane AI Pin’s projector display

If only voice interactions were the worst thing about the Humane AI Pin, but the list of problems only starts there. I was most intrigued by the company’s “pioneering Laser Ink display” that projects green rays onto your palm, as well as the gestures that enabled interaction with “onscreen” elements. But my initial wonder quickly gave way to frustration and a dull ache in my shoulder. It might be tiring to hold up your phone to scroll through Instagram, but at least you can set that down on a table and continue browsing. With the AI Pin, if your arm is not up, you’re not seeing anything.

Then there’s the fact that it’s a pretty small canvas. I would see about seven lines of text each time, with about one to three words on each row depending on the length. This meant I had to hold my hand up even longer so I could wait for notifications to finish scrolling through. I also have a smaller palm than some other reviewers I saw while testing the AI Pin. Julian over at Wired has a larger hand and I was downright jealous when I saw he was able to fit the entire projection onto his palm, whereas the contents of my display would spill over onto my fingers, making things hard to read.

It’s not just those of us afflicted with tiny palms who will find the AI Pin tricky to see. Step outside and you’ll have a hard time reading the faint projection. Even on a cloudy, rainy day in New York City, I could barely make out the words on my hand.

When you can read what’s on the screen, interacting with it might make you want to rip your eyes out. Like I said, you’ll have to move your palm closer to and further from your chest to select the right cards to enter your passcode. It’s a bit like dialing a rotary phone, with cards for individual digits from 0 to 9. Move further away to reach the higher numbers and the backspace button, and come back in for the smaller ones.

This gesture is smart in theory but it’s very sensitive. There’s a very small range of usable space since there is only so far your hand can go, so the distance between each digit is fairly small. One wrong move and you’ll accidentally select something you didn’t want and have to go all the way out to delete it. To top it all off, moving my arm around while doing that causes the Pin to flop about, meaning the screen shakes on my palm, too. On average, unlocking my Pin, which involves entering a four-digit passcode, took me about five seconds.

On its own, this doesn’t sound so bad, but bear in mind that you’ll have to re-enter this each time you disconnect the Pin from the booster, latch or clip. It’s currently springtime in New York, which means I’m putting on and taking off my jacket over and over again. Every time I go inside or out, I move the Pin to a different layer and have to look like a confused long-sighted tourist reading my palm at various distances. It’s not fun.

Of course, you can turn off the setting that requires password entry each time you remove the Pin, but that’s simply not great for security.

Though Humane says “privacy and transparency are paramount with AI Pin,” by its very nature the device isn’t suitable for performing confidential tasks unless you’re alone. You don’t want to dictate a sensitive message to your accountant or partner in public, nor might you want to speak your Wi-Fi password out loud.

Speaking it aloud is one of two input methods for setting up an internet connection, by the way. If you’d rather not spell out your Wi-Fi key, you can go to the Humane website and type in your network name (you enter it manually; the Pin won’t show you a list of available networks) and password to generate a QR code for the Pin to scan. Having to verbally relay alphanumeric characters to the Pin is not ideal, and though the QR code technically works, it just involves too much effort. It’s like giving someone a spork when they asked for a knife and fork: good enough to get by, but not a perfect replacement.
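Humane hasn’t documented the format its QR codes use, but Wi-Fi provisioning codes conventionally follow the ZXing “WIFI:” scheme, where the network name and password are embedded (with a few special characters escaped) in a short string that any QR generator can encode. A minimal sketch of building that payload, assuming that convention (the function names are my own):

```python
def _escape(value):
    # Backslash-escape the characters the WIFI: scheme treats as special.
    # The backslash itself must be handled first.
    for ch in ('\\', ';', ',', '"', ':'):
        value = value.replace(ch, '\\' + ch)
    return value

def wifi_qr_payload(ssid, password, auth="WPA"):
    """Build the plain-text payload a QR code generator would encode."""
    return f"WIFI:T:{auth};S:{_escape(ssid)};P:{_escape(password)};;"

# wifi_qr_payload("Home Net", "hunter2") -> 'WIFI:T:WPA;S:Home Net;P:hunter2;;'
```

Feeding that string to any off-the-shelf QR generator produces a code that most scanners, presumably including the Pin’s, can turn back into network credentials.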

The Humane AI Pin held in mid-air in front of some bare trees and a street with red brick buildings on it.
Cherlynn Low for Engadget

The Humane AI Pin’s speaker

Since communicating through speech is the easiest means of using the Pin, you’ll need to be able to speak and hear. If you choose not to raise your hand to read the AI Pin’s responses, you’ll have to listen for them. The good news is, the onboard speaker is usually loud enough for most environments, and I only struggled to hear it on NYC streets with heavy traffic passing by. I never attempted to talk to it on the subway, however, nor did I obnoxiously play music from the device while I was outside.

In my office and gym, though, I did get the AI Pin to play some songs. The music sounded fine — I didn’t get thumping bass or particularly crisp vocals, but I could hear instruments and crooners easily. Compared to my iPhone 15 Pro Max, it’s a bit tinny, as expected, but not drastically worse.

The problem is there are, once again, some caveats. The most important of these is that at the moment, you can only use Tidal’s paid streaming service with the Pin. You’ll get 90 days free with your purchase, and then have to pay $11 a month (on top of the $24 you already give to Humane) to continue streaming tunes from your Pin. Humane hasn’t said yet if other music services will eventually be supported, either, so unless you’re already on Tidal, listening to music from the Pin might just not be worth the price. Annoyingly, Tidal doesn’t have the extensive library that competing providers do, so I couldn’t even play songs like Beyoncé’s latest album or Taylor Swift’s discography (although remixes of her songs were available).

Though Humane has described its “personic speaker” as being able to create a “bubble of sound,” that “bubble” certainly has a permeable membrane. People around you will definitely hear what you’re playing, so unless you’re trying to start a dance party, it might be too disruptive to use the AI Pin for music without pairing Bluetooth headphones. You’ll also probably get better sound quality from Bose, Beats or AirPods anyway.

The Humane AI Pin camera experience

I’ll admit it — a large part of why I was excited for the AI Pin is its onboard camera. My love for taking photos is well-documented, and with the Pin, snapping a shot is supposed to be as easy as double-tapping its face with two fingers. I was even ready to put up with subpar pictures from its 13-megapixel sensor for the ability to quickly capture a scene without having to first whip out my phone.

Sadly, the Humane AI Pin was simply too slow and feverish to deliver on that premise. I frequently ran into times when, after taking a bunch of photos and holding my palm up to see how each snap turned out, the device would get uncomfortably warm. At least twice in my testing, the Pin just shouted “Your AI Pin is too warm and needs to cool down” before shutting down.

A sample image from the Humane AI Pin's 13-megapixel camera, showing a tree-lined path in a park.
A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

Even when it’s running normally, using the AI Pin’s camera is slow. I’d double tap it and then have to stand still for at least three seconds before it would take the shot. I appreciate that there’s audio and visual feedback through the flashing green lights and the sound of a shutter clicking when the camera is going, so both you and people around know you’re recording. But it’s also a reminder of how long I need to wait — the “shutter” sound will need to go off thrice before the image is saved.

I took photos and videos in various situations under different lighting conditions, from a birthday dinner in a dimly lit restaurant to a beautiful park on a cloudy day. I recorded some workout footage in my building’s gym with large windows, and in general anything taken with adequate light looked good enough to post. The videos might make viewers a little motion sick, since the camera was clipped to my sports bra and moved around with me, but that’s tolerable.

In dark environments, though, forget about it. Even my Nokia E7 from 2012 delivered clearer pictures, most likely because I could hold it steady while framing a shot. The photos of my friends at dinner were so grainy, one person even seemed translucent. To my knowledge, that buddy is not a ghost, either.

A sample image from the Humane AI Pin's 13-megapixel camera, showing a group of people sitting around a table in a dimly lit restaurant. One person is staring at the camera with his chin resting on the back of his hand. The photo is fuzzy and grainy.
A sample image from the Humane AI Pin.
Cherlynn Low for Engadget

To its credit, Humane’s camera has a generous 120-degree field of view, meaning you’ll capture just about anything in front of you. When you’re not sure if you’ve gotten your subject in the picture, you can hold up your palm after taking the shot, and the projector will beam a monochromatic preview so you can verify. It’s not really for you to admire your skilled composition or level of detail, and more just to see that you did indeed manage to get the receipt in view before moving on.

Cosmos OS on the Humane AI Pin

When it comes time to retrieve those pictures off the AI Pin, you’ll just need to navigate to humane.center in any browser and sign in. There, you’ll find your photos and videos under “Captures,” your notes, recently played music and calls, as well as every interaction you’ve had with the assistant. That last one made recalling every weird exchange with the AI Pin for this review very easy.

You’ll have to make sure the AI Pin is connected to Wi-Fi and power, and at least 50 percent charged, before full-resolution photos and videos will upload to the dashboard. Before that happens, you can still scroll through previews in a gallery, even though you can’t download or share them.

The web portal is fairly rudimentary, with large square tiles serving as cards for sections like “Captures,” “Notes” and “My Data.” Going through them just shows you things you’ve saved or asked the Pin to remember, like a friend’s favorite color or their birthday. Importantly, there isn’t an area for you to view your text messages, so if you wanted to type out a reply from your laptop instead of dictating to the Pin, sorry, you can’t. The only way to view messages is by putting on the Pin, pulling up the screen and navigating the onboard menus to find them.

Humane AI Pin interface
Hayato Huseman for Engadget

That brings me to what you see on the AI Pin’s visual interface. If you’ve raised your palm right after asking it something, you’ll see your answer in text form. But if you had brought up your hand after unlocking or tapping the device, you’ll see its barebones home screen. This contains three main elements — a clock widget in the middle, the word “Nearby” in a bubble at the top and notifications at the bottom. Tilting your palm scrolls through these, and you can pinch your index finger and thumb together to select things.

Push your hand further back and you’ll bring up a menu with five circles that will lead you to messages, phone, settings, camera and media player. You’ll need to tilt your palm to scroll through these, but because they’re laid out in a ring, it’s not as straightforward as simply aiming up or down. Trying to get the right target here was one of the greatest challenges I encountered while testing the AI Pin. I was rarely able to land on the right option on my first attempt. That, along with the fact that you have to put on the Pin (and unlock it), made it so difficult to see messages that I eventually just gave up looking at texts I received.

The Humane AI Pin overheating, in use and battery life

One reason I sometimes took off the AI Pin is that it would frequently get too warm and need to “cool down.” Once I removed it, I would not feel the urge to put it back on. I did wear it a lot in the first few days I had it, typically from 7:45AM when I headed out to the gym till evening, depending on what I was up to. Usually at about 3PM, after taking a lot of pictures and video, I would be told my AI Pin’s battery was running low, and I’d need to swap out the battery booster. This didn’t seem to work sometimes, with the Pin dying before it could get enough power through the accessory. At first it appeared the device simply wouldn’t detect the booster, but I later learned it’s just slow and can take up to five minutes to recognize a newly attached booster.

When I wore the AI Pin to my friend (and fellow reviewer) Michael Fisher’s birthday party just hours after unboxing it, I had it clipped to my tank top just hovering above my heart. Because it was so close to the edge of my shirt, I would accidentally brush past it a few times when reaching for a drink or resting my chin on my palm à la The Thinker. Normally, I wouldn’t have noticed the Pin, but as it was running so hot, I felt burned every time my skin came into contact with its chrome edges. The touchpad also grew warm with use, and the battery booster resting against my chest got noticeably toasty (though it never actually left a mark).

Humane AI Pin
Hayato Huseman for Engadget

Part of the reason the AI Pin ran so hot is likely that there’s not a lot of room for the heat generated by its octa-core Snapdragon processor to dissipate. I had also been using it near constantly to show my companions the pictures I had taken, and Humane has said its laser projector is “designed for brief interactions (up to six to nine minutes), not prolonged usage” and that it had “intentionally set conservative thermal limits for this first release that may cause it to need to cool down.” The company added that it not only plans to “improve uninterrupted run time in our next software release,” but also that it’s “working to improve overall thermal performance in the next software release.”

There are other things I need Humane to address via software updates ASAP. Its AI sometimes simply refuses to do what I ask, like restart or shut down, telling me “Your AI Pin is already running smoothly, no need to restart.” That’s not only surprising but limiting: there are no hardware buttons to turn the Pin on or off, and the only other way to trigger a restart is to pull up the dreaded screen, painstakingly navigate the menu, hopefully land on Settings and find the Power option. By which point, if the Pin hasn’t shut down, my arm will have.

A lot of my interactions with the AI Pin also felt like problems I encountered with earlier versions of Siri, Alexa and the Google Assistant. The overly wordy answers, for example, or the pronounced two- or three-second delay before a response, are all reminiscent of the early 2010s. When I asked the AI Pin to “remember that I parked my car right here,” it just saved a note saying “Your car is parked right here,” with no GPS information and no way to navigate back. So I guess I parked my car on a sticky note.

To be clear, that’s not something that Humane ever said the AI Pin can do, but it feels like such an easy thing to offer, especially since the device does have onboard GPS. Google’s made entire lines of bags and Levi’s jackets that serve the very purpose of dropping pins to revisit places later. If your product is meant to be smart and revolutionary, it should at least be able to do what its competitors already can, not to mention offer features they don’t.

A screenshot of the Humane AI Pin’s web portal, showing previous requests.
Screenshot

One singular thing that the AI Pin actually manages to do competently is act as an interpreter. After you ask it to “translate to [x language],” you’ll have to hold down two fingers while you talk, let go and it will read out what you said in the relevant tongue. I tried talking to myself in English and Mandarin, and was frankly impressed with not only the accuracy of the translation and general vocal expressiveness, but also at how fast responses came through. You don’t even need to specify the language the speaker is using. As long as you’ve set the target language, the person talking in Mandarin will be translated to English and the words said in English will be read out in Mandarin.

It’s worth considering the fact that using the AI Pin is a nightmare for anyone who gets self-conscious. I’m pretty thick-skinned, but even I tried to hide the fact that I had a strange gadget with a camera pinned to my person. Luckily, I didn’t get any obvious stares or confrontations, but I heard from my fellow reviewers that they did. And as much as I like the idea of a second brain I can wear and offload little notes and reminders to, nothing that the AI Pin does well is actually executed better than a smartphone.

Wrap-up

Not only is the Humane AI Pin slow, finicky and barely even smart, using it made me look pretty dumb. In a few days of testing, I went from being excited to show it off to my friends to not having any reason to wear it.

Humane’s vision was ambitious, and the laser projector initially felt like a marvel. At first glance, it looked and felt like a refined product. But it just seems like at every turn, the company had to come up with solutions to problems it created. No screen or keyboard to enter your Wi-Fi password? No worries, use your phone or laptop to generate a QR code. Want to play music? Here you go, a 90-day subscription to Tidal, but you can only play music on that service.

The company promises to make software updates that could improve some issues, and the few tweaks my unit received during this review did make some things (like music playback) work better. The problem is that as it stands, the AI Pin doesn’t do enough to justify its $700 and $24-a-month price, and I simply cannot recommend anyone spend this much money for the one or two things it does adequately. 

Maybe in time, the AI Pin will be worth revisiting, but it’s hard to imagine why anyone would need a screenless AI wearable when so many devices exist today that you can use to talk to an assistant. From speakers and phones to smartwatches and cars, the world is full of useful AI access points that allow you to ditch a screen. Humane says it’s committed to a “future where AI seamlessly integrates into every aspect of our lives and enhances our daily experiences.” 

After testing the company’s AI Pin, that future feels pretty far away.

This article originally appeared on Engadget at https://www.engadget.com/the-humane-ai-pin-is-the-solution-to-none-of-technologys-problems-120002469.html?src=rss

The best smartphone cameras for 2024: How to choose the phone with the best photography chops

I remember begging my parents to get me a phone with a camera when the earliest ones were launched. The idea of taking photos wherever I went was new and appealing, but it’s since become less of a novelty and more of a daily habit. Yes, I’m one of those. I take pictures of everything — from beautiful meals and funny signs to gorgeous landscapes and plumes of smoke billowing in the distance.

If you grew up in the Nokia 3310 era like me, then you know how far we’ve come. Gone are the 2-megapixel embarrassments that we used to post to Friendster with glee. Now, many of us use the cameras on our phones to not only capture precious memories of our adventures and loved ones, but also to share our lives with the world.

I’m lucky enough that I have access to multiple phones thanks to my job, and at times would carry a second device with me on a day-trip just because I preferred its cameras. But most people don’t have that luxury. Chances are, if you’re reading this, a phone’s cameras may be of utmost importance to you. But you’ll still want to make sure the device you end up getting doesn’t fall flat in other ways. At Engadget, we test and review dozens of smartphones every year; our top picks below represent not only the best phone cameras available right now, but also the most well-rounded options out there.

What to look for when choosing a phone for its cameras

Before scrutinizing a phone’s camera array, you’ll want to take stock of your needs — what are you using it for? If your needs are fairly simple, like taking photos and videos of your new baby or pet, most modern smartphones will serve you well. Those who plan to shoot for audiences on TikTok, Instagram or YouTube should look for video-optimizing features like stabilization and high frame rate support (for slow-motion clips).

Most smartphones today have at least two cameras on the rear and one up front. Those that cost more than $700 usually come with three, including wide-angle, telephoto or macro lenses. We’ve also reached a point where the number of megapixels (MP) doesn’t really matter anymore — most flagship phones from Apple, Samsung and Google have sensors that are either 48MP or 50MP. You’ll even come across some touting resolutions of 108MP or 200MP in pro-level devices like the Galaxy S24 Ultra.

Most people won’t need anything that sharp, and in general, smartphone makers combine the pixels to deliver pictures that are the equivalent of 12MP anyway. The benefits of pixel-binning are fairly minor in phone cameras, though, and you’ll usually need to blow up an image to fit a 27-inch monitor before you’ll see the slightest improvements.
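The arithmetic behind pixel-binning is straightforward: an n-by-n bin merges n × n neighboring pixels into one larger effective pixel, trading resolution for light sensitivity. A rough sketch of the numbers (my own illustration, not any manufacturer’s spec):

```python
def binned_output_mp(native_mp, bin_factor):
    # An n-by-n bin merges n * n neighboring pixels into one,
    # dividing the output resolution by n squared.
    return native_mp / bin_factor ** 2

# A 48MP sensor binned 2x2 outputs 12MP images,
# while a 200MP sensor binned 4x4 lands around 12.5MP.
```

That’s why sensors with wildly different headline megapixel counts all end up producing roughly 12MP photos by default.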

In fact, smartphone cameras tend to be so limited in size that there’s often little room for variation across devices. They typically use sensors from the same manufacturers and have similar aperture sizes, lens lengths and fields of view. So while it might be worth considering the impact of sensor size on things like DSLRs or mirrorless cameras, on a smartphone those differences are minimal.

Sensor size and field of view

If you still want a bit of guidance on what to look for, here are some quick tips: By and large, the bigger the sensor the better, as this will allow more light and data to be captured. Not many phone makers list the sensor size in spec sheets, so you’ll have to dig around for this info. A larger aperture (indicated by a smaller f-number, such as f/1.8 versus f/2.2) is ideal for the same reason, and it also affects the level of depth of field (or background blur) that’s not added via software. Since portrait modes are available on most phones these days, though, a big aperture isn’t as necessary to achieve this effect.
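For context on why the f-number matters, light gathered per unit of sensor area scales with the inverse square of that number, so seemingly small differences add up. A quick back-of-the-envelope comparison (my own arithmetic, using hypothetical lens values):

```python
def relative_light(f_slow, f_fast):
    # Light gathered scales with the inverse square of the f-number,
    # so a lens at f_fast collects (f_slow / f_fast)^2 times the light
    # of a lens at f_slow.
    return (f_slow / f_fast) ** 2

# An f/1.8 lens gathers roughly 1.5x the light of an f/2.2 lens.
```

All else being equal, that extra light translates into less noise in dim scenes, which is exactly where phone cameras struggle most.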

When looking for a specific field of view on a wide-angle camera, know that the most common offering from companies like Samsung and Google is about 120 degrees. Finally, most premium phones like the iPhone 15 Pro Max and Galaxy S24 Ultra offer telephoto systems that go up to 5x optical zoom with software taking that to 20x or even 100x.

Processing and extra features

These features will likely perform at a similar quality across the board, and where you really see a difference is in the processing. Samsung traditionally renders pictures that are more saturated, while Google’s Pixel phones take photos that are more neutral and evenly exposed. iPhones have historically produced pictures with color profiles that seem more accurate, though in comparison to images from the other two, they can come off yellowish. However, that was mostly resolved after Apple introduced a feature in the iPhone 13 called Photographic Styles that lets you set a profile with customizable contrast levels and color temperature that would apply to every picture taken via the native camera app.

Pro users who want to manually edit their shots should see if the phone they’re considering can take images in RAW format. Those who want to shoot a lot of videos while on the move should look for stabilization features and a decent frame rate. Most of the phones we’ve tested at Engadget record at either 60 frames per second at 1080p or 30 fps at 4K. It’s worth checking to see what the front camera shoots at, too, since they’re not usually on par with their counterparts on the rear.

Finally, while the phone’s native editor is usually not a dealbreaker (since you can install a third-party app for better controls), it’s worth noting that the latest flagships from Samsung and Google all offer AI tools that make manipulating an image a lot easier. They also offer a lot of fun, useful extras, like erasing photobombers, moving objects around or making sure everyone in the shot has their eyes open.

How we test smartphone cameras

For the last few years, I’ve reviewed flagships from Google, Samsung and Apple, and each time, I do the same set of tests. I’m especially particular when testing their cameras, and usually take all the phones I’m comparing out on a day or weekend photo-taking trip. Any time I see a photo- or video-worthy moment, I whip out all the devices and record what I can, doing my best to keep all factors identical and maintain the same angle and framing across the board.

It isn’t always easy to perfectly replicate the shooting conditions for each camera, even if I have them out immediately after I put the last one away. Of course, having them on some sort of multi-mount rack would be the most scientific way, but that makes framing shots a lot harder and is not representative of most people’s real-world use. Also, just imagine me holding up a three-prong camera rack running after the poor panicked wildlife I’m trying to photograph. It’s just not practical.

For each device, I make sure to test all modes, like portrait, night and video, as well as all the lenses, including wide, telephoto and macro. When there are new or special features, I test them as well. Since different phone displays can affect how their pictures appear, I wanted to level the playing field: I upload all the material to Google Drive in full resolution so I can compare everything on the same large screen. Because the photos from today’s phones are of mostly the same quality, I usually have to zoom in very closely to see the differences. I also frequently get a coworker who’s a photo or video expert to look at the files and weigh in.

This article originally appeared on Engadget at https://www.engadget.com/best-camera-phone-130035025.html?src=rss

The best smartphone cameras for 2024: How to choose the phone with the best photography chops

I remember begging my parents to get me a phone with a camera when the earliest ones were launched. The idea of taking photos wherever I went was new and appealing, but it’s since become less of a novelty and more of a daily habit. Yes, I’m one of those. I take pictures of everything — from beautiful meals and funny signs to gorgeous landscapes and plumes of smoke billowing in the distance.

If you grew up in the Nokia 3310 era like me, then you know how far we’ve come. Gone are the 2-megapixel embarrassments that we used to post to Friendster with glee. Now, many of us use the cameras on our phones to not only capture precious memories of our adventures and loved ones, but also to share our lives with the world.

I’m lucky enough to have access to multiple phones thanks to my job, and at times I’d carry a second device with me on a day trip just because I preferred its cameras. But most people don’t have that luxury. Chances are, if you’re reading this, a phone’s cameras are of utmost importance to you. But you’ll still want to make sure the device you end up getting doesn’t fall flat in other ways. At Engadget, we test and review dozens of smartphones every year; our top picks below represent not only the best phone cameras available right now, but also the most well-rounded options out there.

What to look for when choosing a phone for its cameras

Before scrutinizing a phone’s camera array, you’ll want to take stock of your needs — what are you using it for? If your needs are fairly simple, like taking photos and videos of your new baby or pet, most modern smartphones will serve you well. Those who plan to shoot for audiences on TikTok, Instagram or YouTube should look for video-optimizing features like stabilization and high frame rate support (for slow-motion clips).

Most smartphones today have at least two cameras on the rear and one up front. Those that cost more than $700 usually come with three, adding wide-angle, telephoto or macro lenses. We’ve also reached a point where the number of megapixels (MP) doesn’t really matter anymore: most flagship phones from Apple, Samsung and Google have sensors that are either 48MP or 50MP. You’ll even come across pro-level devices like the Galaxy S24 Ultra touting resolutions of 108MP or 200MP.

Most people won’t need anything that sharp. In general, smartphone makers use pixel binning, combining data from adjacent pixels to deliver pictures that are the equivalent of 12MP anyway. The benefits of those ultra-high resolutions are fairly minor in phone cameras, and you’ll usually need to blow up an image to fit a 27-inch monitor before you’ll see the slightest improvements.
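As a rough illustration only (a toy model, not any manufacturer’s actual image pipeline), 2x2 pixel binning can be thought of as averaging each block of four photosites into a single output pixel, which is how a 48MP readout becomes a 12MP photo:

```python
import numpy as np

def bin_pixels(sensor, factor=2):
    """Average each factor x factor block of photosites into one output
    pixel, a simplified model of the 2x2 binning that turns a 48MP
    readout into a 12MP photo."""
    h, w = sensor.shape
    # Trim any edge rows/columns that don't fill a whole block.
    trimmed = sensor[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A toy 8x8 sensor readout becomes a 4x4 binned image.
raw = np.arange(64, dtype=float).reshape(8, 8)
binned = bin_pixels(raw)
print(binned.shape)  # (4, 4)
```

Each binned pixel collects the light from four physical photosites, which is why binned shots tend to be cleaner in low light than a full-resolution readout.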

In fact, smartphone cameras tend to be so limited in size that there’s often little room for variation across devices. They typically use sensors from the same manufacturers and have similar aperture sizes, lens lengths and fields of view. So while it might be worth considering the impact of sensor size on things like DSLRs or mirrorless cameras, on a smartphone those differences are minimal.

Sensor size and field of view

If you still want a bit of guidance on what to look for, here are some quick tips. By and large, the bigger the sensor, the better, as it can capture more light and detail. Not many phone makers list sensor size on their spec sheets, so you’ll have to dig around for this info. A larger aperture (usually indicated by a smaller f-number, like f/1.8) is ideal for the same reason, and it also determines how much natural depth of field, or background blur, you get without software. Since portrait modes are available on most phones these days, though, a big aperture isn’t as necessary to achieve this effect.
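To make the f-number relationship concrete, here’s a small sketch using hypothetical lens figures (a 24mm-equivalent lens at f/1.8 versus f/2.8; the formulas are standard optics, the numbers are illustrative, not any specific phone’s):

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    # The f-number is focal length divided by the entrance-pupil diameter,
    # so a smaller f-number means a physically wider opening.
    return focal_length_mm / f_number

def relative_light(f_num_a, f_num_b):
    # Light gathered scales with aperture area, i.e. with 1 / f_number^2.
    return (f_num_b / f_num_a) ** 2

# Hypothetical figures: a 24mm-equivalent lens at f/1.8 vs. f/2.8.
print(round(aperture_diameter_mm(24, 1.8), 1))  # 13.3 (mm)
print(round(relative_light(1.8, 2.8), 2))       # 2.42, i.e. ~2.4x the light
```

The squared relationship is why a seemingly small drop from f/2.8 to f/1.8 more than doubles the light reaching the sensor.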

When looking for a specific field of view on a wide-angle camera, know that the most common offering from companies like Samsung and Google is about 120 degrees. Finally, most premium phones like the iPhone 15 Pro Max and Galaxy S24 Ultra offer telephoto systems that go up to 5x optical zoom with software taking that to 20x or even 100x.
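The software side of those zoom figures generally means cropping the center of the frame and upscaling it. A toy sketch of how few real pixels survive a digital crop (illustrative numbers, not any specific phone’s):

```python
def cropped_resolution(width_px, height_px, digital_factor):
    # Zooming past the optical limit crops the center of the frame and
    # upscales it, so fewer real pixels remain to build the final image.
    return width_px // digital_factor, height_px // digital_factor

# Illustrative numbers: a 12MP (4000 x 3000) frame pushed from 5x optical
# to 20x total zoom needs a further 4x digital crop.
print(cropped_resolution(4000, 3000, 4))  # (1000, 750)
```

That remaining sliver of under one megapixel is what upscaling (increasingly AI-assisted) has to stretch back into a full-size photo, which is why 100x shots look soft.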

Processing and extra features

These features will likely perform at a similar quality across the board, and where you really see a difference is in the processing. Samsung traditionally renders pictures that are more saturated, while Google’s Pixel phones take photos that are more neutral and evenly exposed. iPhones have historically produced pictures with color profiles that seem more accurate, though in comparison to images from the other two, they can come off yellowish. However, that was mostly resolved after Apple introduced a feature in the iPhone 13 called Photographic Styles that lets you set a profile with customizable contrast levels and color temperature that would apply to every picture taken via the native camera app.

Pro users who want to manually edit their shots should see if the phone they’re considering can take images in RAW format. Those who want to shoot a lot of videos while on the move should look for stabilization features and a decent frame rate. Most of the phones we’ve tested at Engadget record at either 60 frames per second at 1080p or 30 fps at 4K. It’s worth checking to see what the front camera shoots at, too, since they’re not usually on par with their counterparts on the rear.

Finally, while the phone’s native editor is usually not a dealbreaker (since you can install a third-party app for better controls), it’s worth noting that the latest flagships from Samsung and Google all offer AI tools that make manipulating an image a lot easier. They also offer a lot of fun, useful extras, like erasing photobombers, moving objects around or making sure everyone in the shot has their eyes open.

How we test smartphone cameras

For the last few years, I’ve reviewed flagships from Google, Samsung and Apple, and each time, I do the same set of tests. I’m especially particular when testing their cameras, and usually take all the phones I’m comparing out on a day or weekend photo-taking trip. Any time I see a photo- or video-worthy moment, I whip out all the devices and record what I can, doing my best to keep all factors identical and maintain the same angle and framing across the board.

It isn’t always easy to perfectly replicate the shooting conditions for each camera, even if I have them out immediately after I put the last one away. Of course, mounting them all on some sort of multi-camera rack would be the most scientific way, but that makes framing shots a lot harder and isn’t representative of most people’s real-world use. Also, just imagine me holding up a three-pronged camera rig while chasing after the poor, panicked wildlife I’m trying to photograph. It’s just not practical.

For each device, I make sure to test all modes, like portrait, night and video, as well as all the lenses, including wide, telephoto and macro. When there are new or special features, I test them as well. Since different phone displays can affect how their pictures appear, I wanted to level the playing field: I upload all the material to Google Drive in full resolution so I can compare everything on the same large screen. Because the photos from today’s phones are of mostly the same quality, I usually have to zoom in very closely to see the differences. I also frequently get a coworker who’s a photo or video expert to look at the files and weigh in.

This article originally appeared on Engadget at https://www.engadget.com/best-camera-phone-130035025.html?src=rss

The Pirate Queen interview: How Singer Studios and Lucy Liu brought forgotten history to life

I had a favorite version of Mulan growing up (Anita Yuen in the 1998 Taiwanese TV series). I obsessed over Chinese period TV series like Legend of the Condor Heroes, My Fair Princess and The Book and the Sword. I consider myself fairly well-versed in Chinese historical figures, especially those represented in ‘90s and 2000s entertainment in Asia. So when I found out that a UK-based studio had made a VR game called The Pirate Queen based on a forgotten female leader who dominated the South China Sea, I was shocked. How had I never heard of her? How had the Asian film and TV industry never covered her?

I got to play a bit of the game this week, which was released on the Meta Quest store and Steam on March 7th. The titular character Cheng Shih is voiced by actor Lucy Liu, who also executive produced this version of the game with UK-based Singer Studios’ CEO and founder Eloise Singer. Liu and Singer sat with me for an interview discussing The Pirate Queen, Cheng Shih, VR’s strengths and the importance of cultural and historical accuracy in games and films.

Cheng Shih, which translates to “Madam Cheng” or “Mrs. Cheng,” was born Shi Yang. After she married the pirate Cheng Yi (usually romanized as Zheng Yi), she became known as Cheng Yi Sao, which translates to “wife of Cheng Yi.” Together they led the Guangdong Pirate Confederation in the 1800s. Upon her husband’s death in 1807, she took over the reins and went on to become what the South China Morning Post described as “history’s greatest pirate.”

[Image: A screenshot from The Pirate Queen, showing an ornate ship with a warm glow emanating from its windows, on a body of water dotted with floating lanterns. Credit: Singer Studios]

How did Singer Studios learn about Cheng Shih and decide to build a game (and upcoming franchise including a film, podcast and graphic novels) around her? According to Singer, it was through word of mouth. “It was a friend of mine who first told me the story,” Singer said. “She said, ‘Did you know that the most famous pirate in history was a woman?’”

Cheng Shih had been loosely referenced in various films and games before this, like the character Mistress Ching in the 2007 film Pirates of the Caribbean: At World’s End and Jing Lang in Assassin’s Creed IV: Black Flag. As Singer pointed out, Cheng Shih had also appeared in a recent episode of Doctor Who.

Singer said that her team started developing the project as a film at the end of 2018. But the pandemic disrupted their plans, causing Singer to adapt it into a game. A short version of The Pirate Queen later debuted at Raindance Film Festival, and shortly after, Meta came onboard and provided funding to complete development of the game. Liu was then approached when the full version was ready and about to make its appearance at Tribeca Film Festival 2023.

“The rest is history,” Liu said, “but not forgotten history.” She said Cheng Shih was never really recognized for being the most powerful pirate. “It seems so crazy that in the 19th century, this woman who started as a courtesan would then rise to power and then have this fleet of pirates that she commanded,” Liu added. She went on to talk about how Cheng Shih was ahead of her time and also represented “a bit of an underdog story.” For the full 15-minute interview, you can watch the video in this article or listen to this week’s episode of The Engadget Podcast to learn more about Liu and Singer’s thoughts on VR and technology over the last 20 years.

Capturing the historical and cultural details of Cheng Shih’s life was paramount to Liu and Singer. They said the team had to create women’s hands from scratch to represent the player’s perspective in VR, and a dialect coach was hired to help Liu nail the pronunciation of the Cantonese words that Cheng Shih speaks in the game. Though I’m not completely certain whether Cheng Shih spoke Mandarin or Cantonese, the latter seems like the more accurate choice given that Cantonese is the lingua franca of the Guangdong region.

[Image: A screenshot from The Pirate Queen, showing a scroll depicting a woman, with Chinese characters and an English translation below it. Credit: Singer Studios]

All that added to the immersiveness of The Pirate Queen, in which players find themselves in an atmospheric maritime environment. The Meta Quest 3’s controllers served as my hands in the game, and I rowed boats, climbed rope ladders and picked up items with relative ease. Some of the mechanics, especially the “teleportation” method of getting around, were a little clunky, but after about five minutes I got used to how things worked. You point the left controller, push the joystick when you’ve chosen a spot, and the scene changes around you. This probably minimizes the possibility of nausea, since you’re not watching your surroundings glide past while your body stays still. It’s also pretty typical of VR games, so those who have experience playing in headsets will likely be familiar with the movement.

You can still walk around and explore, of course. I scrutinized the corners of rooms, inspected the insides of cabinets and more, while hunting for keys that would unlock boxes containing clues. A lot of this is pretty standard for a puzzle or room escape game, which is what I used to play the most in my teens. But I was particularly taken by sequences like rowing a boat across the sea and climbing up a rope ladder, both of which caused me to break a mild sweat. Inside Cheng Shih’s cabin, I lit a joss stick and placed it in an incense holder — an action I repeated every week at my grandfather’s altar when I was growing up. It felt so realistic that I tried to wave the joss stick to put out the flame and could almost smell the smoke.

It’s these types of activities that make VR games great vehicles for education and empathy. “We didn’t want to have these combat elements that traditional VR games do have,” Singer said, adding that it was one of the challenges in creating The Pirate Queen.

“It’s nice to see and to learn and be part of that, as opposed to ‘Let’s turn to page 48,’” Liu said. “That’s not as exciting as doing something and being actively part of something.” When you play as a historical character in a game, and one that’s as immersive as a VR game, “you’re living that person’s life or that moment in time,” Liu added.

While The Pirate Queen is currently only available on Quest devices, Singer said there are plans to bring it to “as many headsets as we possibly can.” Singer Studios also said it is “extending The Pirate Queen franchise beyond VR into a graphic novel, film and television series.”

This article originally appeared on Engadget at https://www.engadget.com/the-pirate-queen-interview-how-singer-studios-and-lucy-liu-brought-forgotten-history-to-life-160007029.html?src=rss

Microsoft’s neural voice tool for people with speech disabilities arrives later this year

At its 14th Ability Summit, which kicks off today, Microsoft is highlighting developments and collaborations across its portfolio of assistive products. Much of that centers on Azure AI, including features announced yesterday like AI-powered audio descriptions and Azure AI Studio, which better enables developers with disabilities to create machine-learning applications. It also showed off updates like more languages and richer AI-generated descriptions for its Seeing AI tool, as well as new playbooks offering best-practice guidelines in areas like building accessible campuses and providing greater mental health support.

The company is also previewing a feature called “Speak For Me,” which is coming later this year. Much like Apple’s Personal Voice, Speak For Me can help those with ALS and other speech disabilities to use custom neural voices to communicate. Work on this project has been ongoing “for some time” with partners like the non-profit ALS organization Team Gleason, and Microsoft said it’s “committed to making sure this technology is used for good and plan to launch later in the year.” The company also shared that it’s working with Answer ALS and ALS Therapy Development Institute (TDI) to “almost double the clinical and genomic data available for research.”

One of the most significant accessibility updates coming this month is that Copilot will have new accessibility skills that enable users to ask the assistant to launch Live Caption and Narrator, among other assistive tools. The Accessibility Assistant feature announced last year will be available today in the Insider preview for M365 apps like Word, with the company saying it will be coming “soon” to Outlook and PowerPoint. Microsoft is also publishing four new playbooks today, including a Mental Health toolkit, which covers “tips for product makers to build experiences that support mental health conditions, created in partnership [with] Mental Health America.”

Ahead of the summit, the company’s chief accessibility officer Jenny Lay-Flurrie spoke with Engadget to share greater insight around the news as well as her thoughts on generative AI’s role in building assistive products.

“In many ways, AI isn’t new,” she said, adding “this chapter is new.” Generative AI may be all the rage right now, but Lay-Flurrie believes that the core principle her team relies on hasn’t changed. “Responsible AI is accessible AI,” she said.

Still, generative AI could bring many benefits. “This chapter, though, does unlock some potential opportunities for the accessibility industry and people with disabilities to be able to be more productive and to use technology to power their day,” she said. She highlighted a survey the company did with the neurodiverse community around Microsoft 365 Copilot, and the response of the few hundred people who responded was “this is reducing time for me to create content and it’s shortening that gap between thought and action,” Lay-Flurrie said.

The idea of being responsible in embracing new technology trends when designing for accessibility isn’t far from Lay-Flurrie’s mind. “We still need to be very principled, thoughtful and if we hold back, it’s to make sure that we are protecting those fundamental rights of accessibility.”

Elsewhere at the summit, Microsoft is featuring guest speakers like actor Michelle Williams and its own employee Katy Jo Wright, who will discuss mental health and living with chronic Lyme disease, respectively. We will also see Amsterdam’s Rijksmuseum share how it used Azure AI’s computer vision and generative AI to provide image descriptions for over a million pieces of art for visitors who are blind or have low vision.

This article originally appeared on Engadget at https://www.engadget.com/microsofts-neural-voice-tool-for-people-with-speech-disabilities-arrives-later-this-year-161550277.html?src=rss

Apple sold enough iPhones and services last quarter to reverse a downward revenue trend

After four consecutive quarters of declining revenue, Apple broke the trend and reported its first period of revenue growth today. In its earnings report for the first quarter of fiscal 2024, the company announced quarterly revenue of $119.6 billion, up 2 percent from the same period last year.

In addition, Apple CEO Tim Cook said its "installed base of active devices has now surpassed 2.2 billion, reaching an all-time high across all products and geographic segments." This quarter includes money brought in from the sales of the iPhone 15 line introduced in September 2023, which had an obvious impact on performance. 

"Today Apple is reporting revenue growth for the December quarter fueled by iPhone sales, and an all-time revenue record in Services,” Cook said. He noted the company hitting "all-time revenue records across advertising, cloud services, payment services and video as well as December quarter records in App Store and AppleCare." Cook recapped some updates made to the Apple TV app, as well as TV+ content earning nominations and awards.

Cook went on to remind us during the company's earnings call that tomorrow is the launch day for the Vision Pro headset, calling it historic. After saying that Apple is dedicated to investing in new technologies, Cook added that the company will be sharing more about its developments in AI later this year. 

Products in the wearables, home and accessories categories didn't fare well this quarter, though Mac sales did increase year over year. iPad sales in particular dropped 25 percent from the same period last year, though Cook attributed that to a "difficult compare" against the big numbers recorded in the first quarter of 2023, when new models with refreshed Apple Silicon went on sale. Considering the company did not release a single new iPad model in 2023, this is not surprising.

Cook continued by highlighting developments like Apple opening its 100th retail location in Asia Pacific and updates on its sustainability efforts. He wrapped up by saying "Apple is a company that has never shied away from big challenges," adding "so we're optimistic about the future, confident in the long term and as excited as we've ever been to deliver for our users."

This article originally appeared on Engadget at https://www.engadget.com/apple-sold-enough-iphones-and-services-last-quarter-to-reverse-a-downward-revenue-trend-223109289.html?src=rss

Galaxy S24 and S24 Plus hands-on: Samsung’s AI phones are here, but with mixed results

I’ve never thought of Samsung as a software company, let alone as a name to pay attention to in the AI race. But with the launch of the Galaxy S24 series today, the company is eager to have us associate it with the year’s hottest tech trend. The new flagship phones look largely the same as last year’s models, but on the inside, change is afoot. At a hands-on session during CES 2024 in Las Vegas last week, I was more focused on checking out the new software on the Galaxy S24 and S24 Plus.

Thanks to a new Snapdragon 8 Gen 3 processor (in the US) customized “for Galaxy,” the S24 series is capable of a handful of new AI-powered tasks that seem very familiar. In fact, if you’ve used Microsoft’s Copilot, Google’s Bard AI or ChatGPT, a lot of these tools won’t feel new. What is new is the fact that they’re showing up on the S24s, and are mostly processed on-device by Samsung’s recently announced Gauss generative AI model, which the company has been quietly building out.

Samsung’s Galaxy AI features on the S24

There are five main areas where generative AI is making a big difference in the Galaxy S24 lineup: search, translation, note creation, message composition, and photo editing and processing. Aside from the notes and composition features, most of these updates seem like versions of existing Google products. In fact, the new Circle to Search feature is a Google service that is debuting on the S24 series, in addition to the Pixel 8 and Pixel 8 Pro.

Circle to Search

With Circle to Search, you long-press the middle of the screen’s bottom edge, the Google logo and a search bar pop up, and you can draw a ring around anything on the display. Well, almost anything. DRM-protected content or anything shielded from screenshots, like your banking app, is off limits. Once you’ve made your selection, a panel slides up showing your selection, along with results from Google’s Search Generative Experience (SGE).

You can scroll down to see image matches, followed by shopping, text, website and other types of listings that SGE thought were relevant. I circled the Samsung clock widget, a picture of beef wellington and a lemon, and each time I was given pretty accurate results. I was also impressed by how quickly Google correctly identified a grill that I circled on an Engadget article featuring a Weber Searwood, especially since the picture I circled was shot at an off angle.

This is basically image search via Google or Lens, except it saves you from having to open another app (and take screenshots). You’ll be able to circle items in YouTube videos, your friend’s Instagram Stories (or, let’s be honest, ads). Though I was intrigued by the feature and its accuracy, I’m not sure how often I’d use it in the real world. The long-press gesture to launch Circle to Search works whether you use gesture-based navigation or the three-button layout. The latter might be slightly confusing, since you’re more or less holding your finger down on the home button, but not exactly on it.

Circle to Search is launching on January 31st, and though it’s reserved for the Galaxy S24s and Pixel 8s for now, it’s not clear whether older devices might get the feature.

Chat Assist to tweak the tone of your messages

The rest of Samsung’s AI features are actually powered by the company’s own language models, not Google’s. This part is worth making clear, because when you use the S24 to translate a message from, say, Portuguese to Mandarin, you’ll be using Samsung’s database, not Google’s. I really just want you to direct your anger at the right target when something inevitably goes wrong.

I will say, I was a little worried when I first heard about Samsung’s new Chat Assist feature. It uses generative AI to reword a message you’ve composed and change up its tone. Say you’re in a hurry, firing off a reply to a friend who you know can get anxious and misinterpret texts. The S24 can take your sentences, like “On my way back now what do you need,” and make them less curt. The options I saw were “casual,” “emojify,” “polite,” “professional” and “social,” the last being a hashtag-filled caption presumably for your social media posts.

I typed “Hey there. Where can I get some delicious barbecue? Also, how are you?” Then I tapped the AI icon above the keyboard and selected the “Writing Style” option. After about one or two seconds, the system returned variations of what I wrote.

At the top of the results was my original, followed by the Professional version, which I honestly found hilarious. It said “Hello, I would like to inquire about the availability of delectable barbecue options in the vicinity. Additionally, I hope this message finds you well. Thank you for your attention to this matter.”

It reminded me of an episode of Friends where Joey uses a thesaurus to sound smarter. Samsung’s AI seems to have simply replaced every word with a slightly bigger word, while also adding some formal greetings. I don’t think “inquire about the availability of delectable barbecue options in the vicinity” is anything a human would write.

That said, the casual option was a fairly competent rewording of what I’d written, as was the polite version. I cannot imagine a scenario where I’d pick the “emojify” option, except for the sake of novelty. And while the social option pained me to read, at least the hashtags of #Foodie and #BBQLover seemed appropriate.

Samsung Translate

You can also use Samsung’s AI to translate messages into one of 13 languages in real time, which is fairly similar to a feature Google launched on the Pixel 6 in 2021. The S24’s interface looks reminiscent of the Pixel’s, too, with both offering two text input fields. Like Google, Samsung also has a field at the top for you to select your target language, though the system is capable of automatically recognizing the language being used. I never got this to work correctly in a foreign language that I understand, and I have no real way of confirming how accurate the S24 was in Portuguese.

Samsung’s translation engine is also used for a new feature called Live Translate, which basically acts as an interpreter for you during phone calls made via the native dialer app. I tried this by calling one of a few actors Samsung had on standby, masquerading as managers of foreign-language hotels or restaurants. After I dialed the number and turned on the Live Translate option, Samsung’s AI read out a brief disclaimer explaining to the “manager at a Spanish restaurant” that I was using a computerized system for translation. Then, when I said “Hello,” I heard a disembodied voice say “Hola” a few seconds later.

The lag was pretty bad and it threw off the cadence of my demo, as the person on the other end of the call clearly understood English and would answer in Spanish before my translated request was even sent over. So instead of:

Me: Can I make a reservation please?

S24: … ¿Puedo hacer una reserva por favor?

Restaurant: Sí, ¿cuántas personas y a qué hora?

S24 (to me): … Yes, for how many people and at what time?

My demo actually went:

Me: Can I make a reservation please?

pause

Restaurant: Sí, ¿cuántas personas y a qué hora?

S24: ¿Puedo hacer una reserva por favor?

pause

S24 (to me): Yes, for how many people and at what time?

It was slightly confusing. Do I think this is representative of all Live Translate calls in the real world? No, but Samsung will need to work on cutting down lag if it wants to be helpful and not confusing.

Galaxy AI reorganizing your notes

I was most taken by what Samsung’s AI can do in its Notes app, which historically has had some pretty impressive handwriting recognition and indexing. With the AI’s assistance, you can quickly reformat your large blocks of text into easy-to-read headers, paragraphs and bullets. You can also swipe sideways to see different themes, with various colors and font styles.

Notes can also generate summaries for you, though most of the summaries on the demo units didn’t appear very astute or coherent. After it auto-formatted a note titled “An Exploration of the Celestial Bodies in Our Solar System,” the first section was aptly titled “Introduction,” but the first bullet point under that was, confusingly, “The Solar System.” The second bullet point was two sentences, starting with “The Solar System is filled with an array of celestial bodies.”

Samsung also borrowed another feature from the Pixel ecosystem, using its speech-to-text software to transcribe, summarize and translate recordings. The transcription of my short monologue was accurate enough, but the speaker labels weren’t. Summaries of the transcriptions were similar to those in Notes, in that they weren’t quite what I’d personally highlight.

The Galaxy S24 held in mid-air, with the viewfinder of its camera app showing on the screen.
Photo by Sam Rutherford / Engadget

That’s already a lot to cover, and I haven’t even gotten to the photo editing updates yet. My colleague Sam Rutherford goes into a lot more detail on those in his hands-on with the Galaxy S24 Ultra, which has the more sophisticated camera system. In short, though, Samsung offers edit suggestions, generative background filling and an instant slow-mo tool that fills in frames when you choose to slow down a video.

Samsung Galaxy S24 and S24 Plus hardware updates

That brings me to the hardware. On the regular Galaxy S24 and S24 Plus, you’ll be getting a 50-megapixel main sensor, 12MP ultra-wide camera and 10MP telephoto lens with 3x optical zoom. Up front is a 12MP selfie camera. So, basically, the same setup as last year. The S24 has a 6.2-inch Full HD+ screen, while the S24 Plus sports a 6.7-inch Quad HD+ panel, and both offer adaptive refresh rates that can go between 1 and 120Hz. In the US, all three S24 models use a Snapdragon 8 Gen 3 for Galaxy processor, with the base S24 starting out with 8GB of RAM and 128GB of storage. Both the S24 and S24 Plus have slightly larger batteries than their predecessors, with their respective 4,000mAh and 4,900mAh cells coming in 100mAh and 200mAh bigger than before.

Though the S24s look very similar to last year’s S23s, my first thought on seeing them was how much they looked like iPhones. That’s neither a compliment nor an indictment. And to be clear, I’m only talking about the S24 and S24 Plus, not the Ultra, which still has the distinctive look of a Note.

Four Galaxy S24 handsets in white, cream, black and purple, laid down on a table with their rear cameras facing up.
Photo by Sam Rutherford / Engadget

It feels like Samsung spent so much time upgrading the software and focusing on joining the AI race this year that it completely overlooked the S24’s design. Plus, unlike the latest iPhones, the S24s are missing support for the newer Qi2 wireless charging standard, which includes magnetic attachment à la Apple’s MagSafe.

Wrap-up

I know it’s just marketing-speak and empty catchphrases, but I’m very much over Samsung’s use of whatever it thinks is trendy to appeal to people. Don’t forget, this is the company that held an “Awesome Unpacked” event in 2021 filled to the brim with cringeworthy moments and an embarrassingly large number of utterances of the words “squad” and “iconic.”

That doesn’t mean what Samsung’s done with the Galaxy S24 series is completely meaningless. Some of these features could genuinely be useful, like summarizing transcriptions or translating messages in foreign languages. But after watching the company follow trend after trend (like introducing Bixby after the rise of digital assistants, or bringing scene optimizers to its camera app after Chinese phone makers did), launching generative AI features feels hauntingly familiar. My annoyance at Samsung’s penchant for #trendy #hashtags aside, the bigger issue here is that if the company is simply jumping on a fad instead of actually thoughtfully developing meaningful features, then consumers run the risk of losing support for tools in the future. Just look at what happened to Bixby.

This article originally appeared on Engadget at https://www.engadget.com/galaxy-s24-and-s24-plus-hands-on-samsungs-ai-phones-are-here-but-with-mixed-results-180008236.html?src=rss