Why Are Most AI Voices Female? Exploring the Reasons Behind Female AI Voice Dominance

Siri, Alexa, Cortana, Google Voice, ChatGPT 4o – it’s no coincidence that they all have female voices (and sometimes even names). In fact, Spike Jonze literally named his dystopian AI film “Her” after its AI assistant, Samantha. Voiced by Scarlett Johansson, Samantha gave the movie a premise that sounded absurd 11 years ago but feels all too realistic now that OpenAI has announced its voice-based AI model GPT 4o (omni). The announcement was followed by an uproar from Johansson, who claimed the AI sounded a lot like her even though she hadn’t given OpenAI permission to use her voice. Johansson said she was approached by OpenAI CEO Sam Altman to be the voice of GPT 4o, but declined. Just days before GPT 4o was announced, Altman asked her to reconsider, but she declined again. GPT 4o was announced exactly 10 days ago, on the 13th of May, and Johansson distinctly recognized the voice as one that sounded quite similar to her own. While many say the voices don’t sound alike, it’s undeniable that OpenAI was aiming for something closer to Samantha from Her than a feminine yet mechanical voice like Siri or Google Voice.

All this brings a few questions to mind – Why do most AI voice assistants have female voices? How do humans perceive these voices? Why don’t you see that many male AI voice assistants (and does mansplaining have a role to play here)? And finally, do female voice assistants actually help or harm real women and gender equality in the long run? (Hint: a little bit of both, but the latter seems more daunting.)

AI Voice Assistants: A History

The history of AI voice assistants extends well before 2011, when Siri was first introduced to the world… though a lot of those early instances were fiction and pop culture. Siri debuted as the first-ever voice assistant relying on AI, but you can’t really credit Siri with being the first automated female voice, because for years, IVR (Interactive Voice Response) dominated phone conversations. Remember the automated voices when you called a company’s service center, like your bank, cable company, or internet provider? Historically, those voices were very often female, paving the way for Siri in 2011. In fact, this trend dates back to 1878, when Emma Nutt became the first woman telephone operator, ushering in an entirely female-dominated profession. Women operators then naturally set the stage for female-voiced IVR calls. However, while IVR calls were predominantly just a set of pre-recorded responses, Siri didn’t blurt out template-ish pre-recorded sentences. She was trained on the voice of a real woman and conversed with you (at least at the time) like an actual human. The choice of a female voice for Siri was influenced by user studies and cultural factors, aiming to make the AI seem friendly and approachable. This decision was not an isolated case; it marked the beginning of a broader trend in the tech industry. In pop culture, however, the inverse was often true. Long before Siri in 2011, JARVIS took the stage in the 2008 movie Iron Man as a male voice assistant. Although somewhat robotic, JARVIS could do pretty much anything, like control every micro detail of Tony Stark’s house, suit, and life… and potentially even go rogue. That aside, studies show something very interesting about how humans perceive female voices.

JARVIS helping control Iron Man’s supersuit

Historically, Robots are Male, and Voice Assistants are Female

The predominance of female voices in AI systems is not a random occurrence. Several factors contribute to this trend:

  • User Preference: Research indicates that many users find female voices more soothing and pleasant. This preference often drives the design choices of AI developers who seek to create a comfortable user experience.
  • Emotional Connection: Female voices are traditionally associated with helpful and nurturing roles. This aligns well with the purpose of many AI systems, which are designed to assist and support users in various tasks.
  • Market Research: Companies often rely on market research to determine the most effective ways to engage users. Female voices have consistently tested well in these studies, leading to their widespread adoption.
  • Cultural Influences: There are cultural and social influences that shape how voices are perceived. For instance, in many cultures, female voices are stereotypically associated with service roles (e.g., receptionists, customer service), which can influence design decisions.

These are but theories and studies, and the flip side is equally interesting. Physical robots are often built with male physiques and proportions, given that their main jobs of lifting objects and moving cargo around have traditionally been done by men too. Pop culture plays a massive role again, with the Transformers being predominantly male, along with the Terminator, the T-1000, Ultron, C-3PO, and RoboCop – the list is endless.

What Do Studies Say About Female vs. Male AI Voices?

Numerous studies have analyzed the impact of gender in AI voices, revealing a variety of insights that help us understand user preferences and perceptions. Here’s what these studies reveal:

  • Likability: Research indicates that users generally find female voices more likable. This can enhance the effectiveness of AI in customer service and support roles, where user comfort and trust are paramount.
  • Comfort and Engagement: Female voices are often perceived as more comforting and engaging, which can improve user satisfaction and interaction quality. This is particularly important in applications like mental health support, where a soothing tone can make a significant difference.
  • Perceived Authority: Male voices are sometimes perceived as more authoritative, which can be advantageous in contexts where a strong, commanding presence is needed, such as navigation systems or emergency alerts. However, this perception can vary widely based on individual and cultural differences.
  • Task Appropriateness: The suitability of a voice can depend on the specific task or context. For example, users might prefer female voices for personal assistants who manage everyday tasks, while male voices might be preferred for financial or legal advice due to perceived authority.
  • Cognitive Load: Some research suggests that the perceived ease of understanding and clarity of female voices can reduce cognitive load, making interactions with AI less mentally taxing and more intuitive for users.
  • Mansplaining, A Problem: The concept of “mansplaining” — when a man explains something to someone, typically a woman, in a condescending or patronizing manner — can indirectly influence the preference for female AI voices. Male voices might be perceived as more authoritative, which can sometimes come across as condescending. A male AI voice disagreeing with you or telling you something you already know can feel much more unpleasant than a female voice doing the same thing.

The 2013 movie Her had such a major impact on society and culture that Hong Kong-based Ricky Ma even built a humanoid version of Scarlett Johansson

Do Female AI Voices Help Women Be Taken More Seriously in the Future?

20 years back, it was virtually impossible to predict how addictive and detrimental social media would be to our health. We’re now at that same point in the road with AI, and we should be thinking about its implications. Sure, the obvious discussion is about how AI could replace us, flood the airwaves with misinformation, and make humans dumb and ineffective… but before that, let’s just focus on the social impact of these voices, and what they do for us and the generations to come. There are a few positive impacts to this trend:

  • Normalization of Female Authority: Regular exposure to female voices in authoritative and knowledgeable roles can help normalize the idea of women in leadership positions. This can contribute to greater acceptance of women in such roles across various sectors.
  • Shifting Perceptions: Hearing female voices associated with expertise and assistance can subtly shift societal perceptions, challenging stereotypes and reducing gender biases.
  • Role Models: AI systems with confident and competent female voices can serve as virtual role models, demonstrating that these traits are not exclusive to men and can be embodied by women as well.

However, the impact of this trend depends on the quality and neutrality of the AI’s responses, which are doubtful at best. If female-voiced AI systems consistently deliver accurate and helpful information, they can enhance the credibility of women in technology and authoritative roles… but what about the opposite?

Female AI Voices Running on Male-biased Databases

The obvious problem, however, is that these AI assistants are still, more often than not, coded by men, who may bring their own subtle (or obvious) biases into how these AI bots operate. Moreover, a vast portion of the corpus fed into these LLMs (Large Language Models) was created by men. Historically, culture, literature, politics, and science have all been dominated by men for centuries, with women only very recently playing a larger and more visible role in these fields. All this has a distinct and noticeable effect on how the AI thinks and operates. Having a female voice doesn’t change that – it can actually have an unintended negative effect.

There’s really no problem when the AI is working with hard facts… but it becomes an issue when the AI needs to share opinions. Biases can undermine an AI’s credibility, cause problems by not accurately representing the women it’s supposed to, promote harmful stereotypes, and even reinforce existing prejudices. We’re already noticing the massive spike in the usage of words like ‘delve’ and ‘testament’ because of how often AI LLMs use them – think about all the stuff we CAN’T see, and how it may affect life and society a decade from now.

In 2014, Alex Garland’s Ex Machina showed how a lifelike female robot passed the Turing Test and won the heart of a young engineer

The Future of AI Voice Assistants

I’m no coder/engineer, but here’s where AI voice assistants should be headed and what steps should be taken:

  • Diverse Training Data: Ensuring that training data is diverse and inclusive can help mitigate biases. This involves sourcing data from a wide range of contexts and ensuring a balanced representation of different genders and perspectives.
  • Bias Detection and Mitigation: Implementing robust mechanisms for detecting and mitigating bias in AI systems is crucial. This includes using algorithms designed to identify and correct biases in training data and outputs (a toy sketch of one such check follows this list).
  • Inclusive Design: Involving diverse teams in the design and development of AI systems can help ensure that different perspectives are considered, leading to more balanced and fair AI systems.
  • Continuous Monitoring: AI systems should be continuously monitored and updated to address any emerging biases. This requires ongoing evaluation and refinement of both the training data and the AI algorithms.
  • User Feedback: Incorporating user feedback can help identify biases and areas for improvement. Users can provide valuable insights into how the AI is perceived and where it might be falling short in terms of fairness and inclusivity.
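To make the “Bias Detection and Mitigation” point concrete, here’s a deliberately tiny, hypothetical sketch (in Python) of one such check: counting how often gendered pronouns co-occur with profession words in training text. The word lists and mini-corpus below are invented for illustration, and real audits use far richer methods (embedding association tests, classifier probes, and the like) – treat this as a sketch of the idea, not an implementation.

```python
from collections import Counter

# A toy bias check: count how often gendered pronouns co-occur with
# profession words in training sentences. Word lists and corpus are
# invented for this sketch; real audits use far richer methods.

FEMALE = {"she", "her", "hers"}
MALE = {"he", "him", "his"}
PROFESSIONS = {"doctor", "nurse", "engineer", "assistant"}

def cooccurrence(sentences):
    """Tally gendered-pronoun co-occurrences per profession word."""
    counts = {p: Counter() for p in PROFESSIONS}
    for sentence in sentences:
        words = set(sentence.lower().split())
        for profession in PROFESSIONS & words:
            if FEMALE & words:
                counts[profession]["female"] += 1
            if MALE & words:
                counts[profession]["male"] += 1
    return counts

corpus = [
    "The nurse said she would help",
    "The engineer presented his design",
    "The doctor reviewed his notes",
    "The assistant confirmed she had booked the room",
]

# A heavy skew in either direction flags data worth re-balancing.
for profession, tally in cooccurrence(corpus).items():
    print(profession, dict(tally))
```

A heavy skew for a given profession flags data worth re-balancing before it ever reaches the model – exactly the kind of early, unglamorous check that keeps a female-voiced assistant from parroting male-biased text.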

AI assistants aren’t going anywhere. There was a time not too long ago when it seemed that AI assistants were dead. At the end of 2022, it was reported that Amazon’s Alexa division was on track to lose $10 billion, making it look like a failed endeavor – that same month, ChatGPT made its debut. Cut to today, and AI assistants have suddenly become mainstream again – so mainstream that almost every company and startup is looking for ways to integrate AI into their products and services. Siri and GPT 4o are just the beginning of this new female voice-led frontier… it’s important we understand the pitfalls and avoid them before it’s too late. After all, if you remember the movie Terminator Salvation, Skynet was female too…


AI-powered modular mouse has some nifty tricks to level up your presentations

The nature and location of work today have changed considerably, especially after the introduction of work-from-home arrangements, but one thing still remains the same. People still hold in-person meetings, which often involve presentations, be it in front of colleagues or before clients. Despite how common this activity is, the tools presenters use haven’t evolved much beyond teleconferencing equipment. Many of the devices needed for an effective presentation come as separate products, so this concept tries to integrate not just two but four tools into a single design that, at first glance, looks like a normal mouse.

Designers: TianRun Chen, ZiLong Peng, Yanran Zhao, YueHao Liu

Many computer users reach for a mouse even when they prefer working on laptops. It’s almost an indispensable tool for on-the-go knowledge workers, including those who often find themselves speaking in front of a room. Unfortunately, these people also end up grabbing a presenter remote and a laser pointer for those presentations, making their work lives needlessly complex. There are some thin, portable mice that try to integrate a laser pointer, but these are still rare, not to mention rarely ergonomic in their designs.

The OctoAssist concept design has a rather intriguing solution that deconstructs the design of the computer mouse in order to provide more functionality. At its core, it sports a modular design where the main “module” is the front third of a conventional mouse, where the buttons would normally be located. This module is a touch-sensitive device that you can use on its own as a mini touchpad supporting gestures like pinching and three-finger taps. It can magnetically connect to a “base” that provides the ergonomic shape of a mouse, while potentially also offering additional battery power in its rather large body.

The core module also has a built-in laser pointer and, thanks to its touch-sensitive surface, can be used to easily control presentations with the same hand. It also has a voice recorder so you can have the entire presentation or meeting preserved for documentation purposes. But why stop there when you have today’s ubiquitous AI available to almost everyone? That AI, built into the device, can also summarize the meeting and generate notes in a flash, impressing everyone in the room with your technological wizardry and efficiency.

From a regular office mouse to a miniature touchpad to a presenter to an AI secretary, the OctoAssist offers plenty of features, though perhaps a few too many. The AI-powered summaries and notes are definitely convenient, but they could weigh the core module down not just with complexity but also with hardware and battery consumption. It does offload the AI processing to a connected smartphone, but that can sometimes cause lags and even data loss. Regardless, it’s definitely an interesting concept that might even be plausible, presuming a manufacturer sees profitable value in an all-in-one design instead of selling multiple devices that do those tasks separately.


Portable AI device uses camera, projectors, sensors to make you more productive

For better or for worse, depending on where you stand on the debate, artificial intelligence has changed and will continue changing how we create and communicate. Services like ChatGPT, Midjourney, Gemini, and Copilot are pretty popular with those adventurous enough to experiment with AI. We can expect that over the next few years, we’ll see more services, gadgets, and devices that help us use the technology and integrate it into our workflows and everyday lives.

Designers: Mingwan Bae, Sohyun An, Junyoung Min, Youngsuh Yoo

Lay is a concept for a portable AI device equipped with a wide-angle camera, a projector, and a sensing module. The 48MP wide-angle camera has a 13mm focal length and can recognize objects and spaces, read text, and upscale what it scans. The 4K UHD projector can cast a screen of up to 30 inches with auto keystone correction, an ultra-short throw distance of under 10cm, and high brightness and contrast. The sensing module, which includes LiDAR, ambient light, and proximity sensors, senses its surroundings in real time.

The device basically scans your surroundings and then leverages AI to make suggestions and assist with tasks as you’re working, drawing, reading, scribbling, building, creating, or just leisurely browsing. It looks like a small spherical robot with a round head that swivels, and you can carry it around and place it on your desk or in your space as it helps make your workflow smoother. It projects onto a surface that serves as your screen as you do your different tasks. It can recognize and select text, drawings, photos, and sketches, and all that content and information is then updated to your cloud in real time.

The device still seems mostly theoretical, and the specific tasks you can do or that it can suggest are still a bit vague. But it’s an interesting concept for an AI-powered device that you can carry around with you, especially if you’re a digital nomad. And with the speed at which digital natives and early adopters are using and exploring AI, this could actually become a real device soon.


Nothing just beat Apple by bringing ChatGPT to all its TWS earbuds… even the older models

London-based tech company Nothing is making waves in the tech world by expanding its integration of ChatGPT, a powerful AI language model, to a wider range of its audio devices. This move comes just a month after the feature debuted on the company’s latest earbuds, the Ear and Ear (a), and their smartphone lineup… and coincidentally, just hours before Google’s I/O event, where the company’s expected to announce an entire slew of AI features and upgrades.

The earlier-than-expected rollout signifies Nothing’s commitment to bringing advanced AI features to everyday tech. This integration isn’t limited to Nothing-branded devices; it extends to their sub-brand CMF as well. Users with older Nothing and CMF earbud models, including the Ear (1), Ear (stick), Ear (2), CMF Neckband Pro, and CMF Buds Pro, will be able to leverage the capabilities of ChatGPT starting May 21st with a simple update to the Nothing X app. It also cleverly pre-empts Apple, which is allegedly working with OpenAI to bring ChatGPT to future models of the iPhone.


There’s a caveat, however. To enjoy the benefits of ChatGPT through your Nothing or CMF earbuds, you’ll need to be using them with a Nothing smartphone running Nothing OS 2.5.5 or later. The good news is that activating ChatGPT is a breeze. Once you’ve updated the Nothing X app, you can enable a new gesture feature that allows you to initiate conversations with the AI assistant by simply pinching the stem of your earbuds.

This development signifies a growing trend in the tech industry: embedding AI assistants directly into consumer devices. By offering voice control through earbuds, Nothing is making it easier for users to perform everyday tasks hands-free, like checking the weather or controlling music playback. Imagine asking your earbuds for directions while jogging or requesting a quick weather update during your commute – all without reaching for your phone.

The move comes at a perfect time, right between OpenAI’s GPT-4o announcement and Google’s I/O event, which is expected to include multiple AI improvements, including the integration of Gemini AI into a vast variety of Google products as well as the Pixel hardware lineup.


Google Pixel 8a official: A more affordable way to experience Google’s AI

Even before AI and machine learning became buzzwords, Google was already utilizing these technologies behind the scenes to power services like Search and Google Assistant. In line with recent trends, however, it has started applying and advertising AI for anything and everything, especially for its Pixel devices. AI features, however, are normally accessed either through online services, which raises security and privacy issues, or on the device itself, which requires powerful hardware that’s often available only on more expensive flagships. That’s the kind of situation the new Google Pixel 8a is trying to change, offering a more affordable way to access Google’s AI-powered features and services for years on end.

Designer: Google

The Pixel 8a is practically the Pixel 8 in both design and spirit. It has the exact same appearance, though in a slightly smaller size and with one important change in materials: the back of this newer Pixel is a matte composite instead of the Pixel 8’s glass rear. The color options are also slightly different, with the Pixel 8a leaning more towards fun, saturated hues like Aloe green and Bay blue. Otherwise, the two are almost exactly identical, a design some Pixel fans have grown pretty fond of.

The Pixel 8a even shares the same Tensor G3 processor as the current flagship, though we won’t be surprised if we find out later that it has been dialed down a bit. That said, it still has enough power to support almost all of Google’s AI features on the Pixel, from Circle to Search, to the Gemini assistant summarizing pages or notes, to removing background noise from recorded video. There will still be some features exclusive to the Pixel 8, of course, but the Pixel 8a already gives you most of what the flagship offers, especially when it comes to photography.

It will definitely need that AI prowess, because one of the biggest corners Google had to cut was the camera system. The 13MP ultra-wide lacks autofocus, and both it and the 64MP main camera have slightly lower specs than the Pixel 8’s. In other words, the Pixel 8a will rely more heavily on AI and algorithms to compensate for the camera hardware’s limitations. There are also some other key differences, like a slower (but still fast) 18W charging speed.

All in all, you’re getting nearly the same Pixel 8 experience for $200 less, with the Pixel 8a going for $499 for 128GB of storage and $549 for the first-ever 256GB option for a Pixel “a” series. Aside from the camera, none of the “downgrades” are deal-breakers, making the Pixel 8a a very worthwhile investment for the future, especially since the phone will also be getting Android updates for seven years.


Kartell and Philippe Starck team up with A.I. for new furniture collection

There have been a lot of discussions about how artificial intelligence affects designers and design in general, and this will continue to be a hotly debated topic in the next few years. There are those who believe this heralds the death of the creative industry, while others believe it can help brands and designers streamline processes and foster innovation and experimentation. Italian furniture brand Kartell and French architect and designer Philippe Starck seem to be of the latter school of thought, as they unveiled their A.I. collection.

Designer: Philippe Starck and Kartell (and A.I.)

This collection features eco-friendly pieces of furniture that were the result of input from Kartell and Starck, streamlined by A.I., particularly in terms of prototyping and planning. The A.I. helped make the collection sustainable and optimized the materials used, resulting in reduced waste. Creating eco-friendly products was the ultimate goal, and the combination of design, production, and A.I. helped achieve this.

The A.I. Lounge uses a thermoplastic technopolymer with a mineral filler. It is available in white, black, green, and gray and can be used both indoors and outdoors, or wherever you want to lounge around. The H.H.H (Her Highest Highness), meanwhile, is a chair that should make you sit like royalty: the way the back is shaped will force you to sit as if on a throne. It uses a green polycarbonate material for the eco-friendly aspect.

The A.I. Console, meanwhile, is a minimalist small table that can be placed in foyers, vestibules, entrances, and hallways, or anywhere you need a small stand or table for your stuff. It sports a one-legged design and is made from recycled Illy iPerEspresso coffee capsules. You can get it in orange, white, gray, or black.


AI artist will “train” robot dogs to do a live painting session

Spot has been a pretty busy dog, previously appearing with supergroup BTS a few years ago and, just last week, getting its own costume and dancing its heart out to celebrate International Dance Day. Lest you think it’s an actual dog, though, it’s actually a robotic dog that can do more than just jump and roll over. Now it’s branching out to the art world with a new exhibit featuring the power of AI.

Designer: Agnieszka Pilat

There have been a lot of heated discussions about AI and art, but not all of them are negative. While many have been critical, there are those who want to explore how autonomous technology and AI-generated art can aid in the democratization of art. One of those people is Polish artist Agnieszka Pilat. She has partnered with Boston Dynamics, or rather, Spot the robot dogs, for the Heterobota exhibition at the Museum of Fine Arts in Boston.

Two of the robot dogs, nicknamed Basia and Omuzana, will do a live painting demonstration in the museum on a 156 x 160 inch canvas on May 10. Pilat will be “training” the dogs to doodle and paint from 8PM to 12AM, with a little rest in between, just like an actual artist would take. Visitors to the museum can watch them live, and the final work will not be displayed afterwards, so your only chance to see the robot dogs in action is during the live painting session.

Pilat says the expected outcome is more like a “little kid’s finger-painting,” since the technology is young and new, even though she has collaborated with Spot before. But it’s an interesting experiment in how humans can use AI and robots to generate art. Of course, there’s still a lot of discussion that rightly needs to be had, but projects like this can open up various viewpoints and opinions that will hopefully enhance the conversation.


iPhone 16S concept mimics the Rabbit R1 format to reiterate that a phone is the best pocket AI device

We are still living with the iPhone 15 and its variants; the era of the iPhone 16 is still months away. As we know, it’s customary for Apple to drop its new seedlings (iPhone variants, if you like) in September every year, and it looks like there’s nothing unusual about this year either. Like every other year since Steve Jobs revealed the first iPhone – which feels like a century ago – the iPhone 16 and iPhone 16 Pro variants will arrive with new features.

A lot of them are leaking in bits and will continue to do so until the launch date. Irrespective of that, we will continue to have our own wishlists: longer battery life… please, deeper AI integration into iOS, and perhaps smaller screen real estate… hmm! While everyone else is putting their money on predicting the possible large display sizes of the iPhone 16 Pro Max, Phone Industry is taking an ‘S’ route: a concept iPhone 16S that takes design cues from the Rabbit R1.

Designer: Phone Industry

For reference, the Rabbit R1 isn’t a typical gadget, and neither is its design. The boxy little AI device is designed to learn from your commands and do more than the average smartphone can. That is, until the recent debacle of reviews showing that the Rabbit’s real-world performance is far from what was advertised. Anyhow, this is not about what the Rabbit R1 does; it’s about the identical-looking (minus the hold bars on the top and bottom) iPhone 16S concept, because the best AI device you can have in your pocket – for the foreseeable future – is a phone!

So while the form factor of the concept phone in question may be lifted from the Rabbit R1, it does have some interesting ideas underlining its iPhone 16 identity (as the rumors hold for now). The iPhone 16S adopts the Capture Button expected on the forthcoming iPhones, giving us a physical clicking button reminiscent of the pocket cameras of yesteryear.

The hypothetical Capture Button, placed on the side opposite the iPhone 15 Pro’s Action Button, gives this iPhone a more camera-like feel. While Apple is reportedly considering reworking the camera array in the upcoming iPhone 16 lineup, this concept sticks to S-series iPhone basics and uses just one – obviously multi-capability – camera on the rear. The highlight for me – besides the square form factor – of the iPhone 16S concept is its all-metal body and an interesting pattern around the Apple logo on the back. What do you think?



Humane AI Pin and Rabbit R1 versus Tech Reviewers: Who’s to blame?

There’s a massive missing link between tech companies and tech reviewers… and instead of fixing it, we’re playing the blame game.

The backlash from the AI community following bad reviews from MKBHD and other tech outlets like The Verge, Engadget, and CNET has been swift. The internet is ablaze, either blaming Marques Brownlee for being too harshly critical in his reviews of the Humane AI Pin and the Rabbit R1… or shaming Humane and Rabbit for not delivering on what they promised. The blame, however, lies in the inherent relationship between the two parties. Like two people who aren’t emotionally ready to date, these AI companies shouldn’t have even shipped their products to tech reviewers.

The job of a tech reviewer, as the name rather simply suggests, is to provide an objective (or sometimes even subjective) analysis of a product for their consumers/viewers. Tech reviewers highlight technology through the lens of ‘Is this worth the money or not?’… The problem, however, is that Humane and Rabbit needed beta testers, not tech reviewers.

Who’s to blame?

Let’s look at every single stakeholder in this AI charade, and you’ll see there’s some blame to go around for everyone. The first reaction, and justifiably so, is to blame Humane and Rabbit. They overpromised, underdelivered, hyped the product, raked in tonnes of VC and preorder money, but couldn’t stick the landing. Companies all across the world have been rushing to develop the ‘next iPhone’, and while Samsung has hedged its bets on folding devices, and Apple on a $3,499 headset, Humane and Rabbit happened to be at the right place at the right time with the right buzzwords. Imagine this: an AI assistant powerful enough to do anything you ask – it’s literally something out of a sci-fi movie, and that’s precisely what these companies hoped we’d think. They weren’t wrong. However, they committed the cardinal sin of the entrepreneur – they pitched something that didn’t exist. Sure, this wasn’t as detrimental as the stunts Elizabeth Holmes or Sam Bankman-Fried pulled, but in essence, it was still a far-fetched lie, or rather a very convenient truth. An AI that does everything you ask doesn’t exist and probably won’t for a while… but a cute design or a body-mounted projector was more than enough to deceive us… and for the sake of this argument, let’s operate under the good-faith assumption that Humane and Rabbit didn’t know they were pushing a bad product.

Why the hardware trickery though? Why did Humane and Rabbit NEED to build hardware devices that looked fancy/quirky/cool? Here’s where the blame shifts to the powers that be – Google, Apple, Microsoft, Amazon, and Meta. For every reviewer that said the Humane AI Pin or Rabbit R1 “could’ve been a smartphone app”, there are thousands of engineers at these companies building JUST THAT. It’s no coincidence that Humane and Rabbit BOTH had their products publicly reviewed well before Google I/O and Apple’s WWDC. Rumor has it that Apple and Google are just waiting to launch AI assistants with similar features, tying into all the smartphone-related services. These large companies have repositories of consumer data, and they have a powerful influence, putting them miles ahead of the starting line when it comes to the AI race. The only way Humane and Rabbit could escape the clutches of these companies was to isolate themselves completely from them. Not to mention, there’s absolutely no way Apple would allow a third-party smartphone app to have Humane or Rabbit’s level of control over your entire device. Sure, Humane and Rabbit could have made all-powerful AI assistant apps, but they A. wouldn’t be as impressive or attractive, and B. they’d be doomed to fail because of the goliath forces that are Apple and Google.

A snippet of the Twitter outrage following MKBHD’s review. Ironically, Sam Sheffer (new media head for Humane) admits the software is bad, while the product sells for $700

A venture capitalist’s job, in Shark Tank parlance, is to “pour gasoline on a fire”, so there’s definitely some blame to share here too. AI became a buzzword in the second half of 2022 and it’s been on the top of everyone’s mind ever since. I don’t blame VCs for seeing potential in the ideas that Humane and Rabbit came up with, but if there’s one thing that absolutely pisses me off, it’s the fact that they took the criticism of Humane and Rabbit’s devices a little too personally. After all, a VC thrives on value creation – take that away and you have a very angry person who’s poured millions into a project that now doesn’t have anywhere to go. However, bad products and bad companies are all too common in the VC world. What they didn’t expect, however, was their golden goose (AI) to lay a rotten egg.

It’s easy to say that tech reviewers were simply doing their job and deserve no blame (after all, I’m a tech reviewer too), but the truth is that reviewers also share a bit of the blame in this entire cycle of events – though not for the reason you think. Arguably, Marques Brownlee deserves praise for being forthright with his review – some reviewers would probably hesitate to say something bad about a company if there was sponsorship money involved – and although MKBHD didn’t have any financial stake in this product, he spoke his mind (as did every other reviewer). But that isn’t where the problem lies. The problem lies with the hype train that tech reviewers both create and ride. These reviewers are, by nature of their profession, enthusiasts when it comes to technology – so it’s no surprise that they were the biggest cheerleaders of Humane and Rabbit 5-6 months back when the products were first teased. If anything, the media should have balanced their enthusiasm with a pinch of real-world salt. Had that been the case, these disastrous reviews would’ve stung less under the age-old pretext of “I told you so”…

Dave2D’s review of the Rabbit R1 device may just be the most sensible, erudite take on the internet.

So what’s the solution?

If the last few years have proven anything, it’s that designers and companies operate in such secrecy that they often don’t put themselves in the shoes of the consumer to begin with. With Tesla pushing the steering yoke over a wheel even though consumers have been begging for the latter, with Apple needing EU regulators to force them into adopting USB-C, with Google cancelling products left, right, and center against the wishes of their consumers, or firing employees who object to their technology being used for warfare (whoops, I went there), there’s a massive disconnect between what companies do and what consumers want. Even if at a smaller scale, Humane and Rabbit seem to find themselves in a similar soup. Whether it’s the holier-than-thou attitude that’s hard-coded into being an entrepreneur, or a bunch of VCs deciding what’s good for the public, the one voice that seems to constantly be left out of the room is that of consumers… and their only representative for now is the humble tech reviewer, who is actually incentivized to see things from their point of view. Sadly, that also means Marques Brownlee ends up in the line of fire when he has to call an AI gadget ‘the worst product he’s ever reviewed’…

The solution lies in reimagining how products are developed and promoted. Humane and Rabbit needed beta testers, not reviewers – testers who would’ve helped them swallow the hard pill that their product isn’t ready for the real world. After all, it’s better to hear that bitter truth behind closed doors than from an influencer on YouTube… right?


These vases were (almost) completely designed and made by algorithms and machines

From conceptualization to actual production, the Differential Growth Vases had hardly any significant human intervention. The vase shapes were determined by a differential growth algorithm, while a 3D printer manufactured the vases. Although designer Tim Zarki orchestrated the project and came up with the idea in the first place, the machines pretty much took over the execution of both the concept and fabrication phases, displaying two things – AI-based creativity, and the ability for humans to step away from creative roles with a fair amount of success.

Designer: Tim Zarki

Differential growth might sound like a fancy term, but it’s a way of describing how cells multiply. The process can be understood through a series of rules that are repeatedly applied to points in space (called nodes) connected into chains by lines (edges) to form paths. In short, the cells adopt a pattern (based on their DNA) and grow within that particular pattern, resulting in growth that follows a template set by previous cells. You can see this in how plant branches grow, how cells expand, how rivers meander, etc. Zarki put the same sort of algorithm to the test with the vases, setting a base shape and having the algorithm expand it. The result is nothing like any pottery you’ll ever see…
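For the curious, here’s a minimal sketch of those node-and-edge rules, written in Python under some stated assumptions: it runs in 2D rather than 3D, and the attraction/repulsion constants and subdivision threshold are illustrative guesses, not Zarki’s actual parameters.

```python
import math

# A minimal 2D differential-growth loop, assuming three common rules:
# attraction to connected neighbors, repulsion from any node that
# drifts too close, and subdivision of edges that grow too long.
# All constants are illustrative, not taken from Zarki's project.

ATTRACT = 0.05       # pull each node toward the midpoint of its neighbors
REPEL = 0.15         # push nodes apart when they crowd each other
REPEL_RADIUS = 1.0   # distance under which repulsion kicks in
MAX_EDGE = 0.5       # subdivide edges longer than this

def step(path):
    """Apply one iteration of the growth rules to a closed path of (x, y) nodes."""
    n = len(path)
    moved = []
    for i, (x, y) in enumerate(path):
        # Rule 1: attraction toward the two connected neighbors
        px, py = path[(i - 1) % n]
        qx, qy = path[(i + 1) % n]
        fx = ATTRACT * ((px + qx) / 2 - x)
        fy = ATTRACT * ((py + qy) / 2 - y)
        # Rule 2: repulsion from every node within REPEL_RADIUS
        for j, (ox, oy) in enumerate(path):
            if j == i:
                continue
            d = math.hypot(x - ox, y - oy)
            if 0 < d < REPEL_RADIUS:
                fx += REPEL * (x - ox) / d
                fy += REPEL * (y - oy) / d
        moved.append((x + fx, y + fy))
    # Rule 3: subdivide any edge that has stretched past MAX_EDGE
    grown = []
    m = len(moved)
    for i, (x, y) in enumerate(moved):
        grown.append((x, y))
        ox, oy = moved[(i + 1) % m]
        if math.hypot(x - ox, y - oy) > MAX_EDGE:
            grown.append(((x + ox) / 2, (y + oy) / 2))
    return grown

# Seed with a small circle of 20 nodes and let the pattern emerge.
path = [(math.cos(k * 2 * math.pi / 20), math.sin(k * 2 * math.pi / 20))
        for k in range(20)]
for _ in range(100):
    path = step(path)
```

Run long enough, the closed path buckles and folds into organic, coral-like contours; change the seed shape or the constants and the character of the growth changes, which is why no two outputs ever match.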

While most vases are created using a potter’s wheel, resulting in a rotationally symmetrical design, these vases have undulating designs created by the algorithm. The best way to understand how the algorithm works is to look at the shape of the base of the vase, and the final shape at the top. The vase’s vertical growth shows the transition between these two shapes, helping you understand how the algorithm works. There’s never a set final pattern, as the algorithm creates something new each time. This means each vase ends up looking unique. Zarki experimented with three overall designs, although the possibilities are quite literally endless, much like how no two plants grow the exact same way, or no two fingerprints look the same.

The final forms were then fed into slicer software, which prepares them for 3D printing. The slicer creates a path for the printer’s nozzle to follow, and once ready, the printer gets to work, slowly but steadily printing the vase. As is evident, this entire process is nothing like conventional pottery methods, but with this project, Zarki hopes to challenge convention. By eliminating standard processes, and to quite an extent the human too, these vases show how oddly appealing a world where AI designed more could be… obviously with humans playing the final role of judging whether the design is aesthetic or not!
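As a side note, the core of what a slicer does can be sketched quite compactly. The hypothetical Python snippet below intersects a triangle mesh with horizontal planes to produce the contour segments a nozzle would trace, one layer at a time; real slicers additionally order segments into closed loops, add infill and supports, and emit G-code, none of which is shown here.

```python
# A toy slicer core: cut a triangle mesh with horizontal planes to get
# the contour segments of each print layer. Degenerate cases (a vertex
# lying exactly on the plane) are ignored for simplicity.

def slice_triangle(tri, z):
    """Return the segment where a triangle crosses the plane at height z, if any."""
    points = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        # An edge crosses the plane if its endpoints straddle z
        if (z1 - z) * (z2 - z) < 0:
            t = (z - z1) / (z2 - z1)   # interpolation factor along the edge
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(points) if len(points) == 2 else None

def slice_mesh(triangles, layer_height, z_max):
    """Collect contour segments for every layer from the bed up to z_max."""
    layers = []
    z = layer_height / 2
    while z < z_max:
        segments = [s for s in (slice_triangle(t, z) for t in triangles) if s]
        layers.append((z, segments))
        z += layer_height
    return layers

# A single tetrahedron stands in for a vase mesh in this sketch.
mesh = [
    ((0, 0, 0), (10, 0, 0), (5, 10, 0)),
    ((0, 0, 0), (10, 0, 0), (5, 5, 10)),
    ((10, 0, 0), (5, 10, 0), (5, 5, 10)),
    ((5, 10, 0), (0, 0, 0), (5, 5, 10)),
]
for z, segs in slice_mesh(mesh, 2.0, 10.0):
    print(f"z={z}: {len(segs)} segments")
```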
