ASUS ProArt PZ13 detachable laptop brings Windows on ARM to creators on the go

Windows on ARM has existed in some form before, but the latest generation of Qualcomm’s Snapdragon X processors is coming out in full force to redeem the platform’s previous image. Part of that means having more PC makers on board who are willing to dive head-on into the arena, bringing not just more capable silicon but, more importantly, the AI-powered features that are all the buzz these days. ASUS, unsurprisingly, isn’t going to be left behind, and it’s bringing all that goodness to the new ProArt PZ13, a detachable laptop with an ultra-portable form factor designed to let creators’ creative juices flow whenever and wherever inspiration strikes, even if it’s just on the living room couch.

Designer: ASUS

Given the uninspiring legacy of previous Windows on ARM attempts, it’s quite a bold move for ASUS to pitch the platform to one of the most discerning audiences in the market. Artists, designers, and content creators need more than just light, portable devices or long-lasting batteries; they need performance that can meet the demands of the software they use. At only 9mm thin, weighing just 1.87 lbs, and carrying a large 70Wh battery, the ASUS ProArt PZ13 definitely gets the first two aspects right. The new AI-enabled Qualcomm Snapdragon X processor promises to deliver that last and most critical part.

If it wasn’t painfully obvious yet, this processor harnesses the power of AI to boost its performance, particularly when paired with the Windows 11 operating system. It isn’t just your run-of-the-mill AI voice assistant that answers your search queries but a deeper, more encompassing tool that optimizes the system’s operation to save time, power, effort, and other resources. ASUS is also throwing in its own AI-powered creation tools, like the StoryCube media hub for organizing digital assets and a subscription-based CapCut for dynamic video editing. And, yes, Microsoft’s Copilot is also on board to make the other menial tasks more efficient as well.

The ASUS ProArt PZ13 comes in a 2-in-1 detachable design that is reminiscent of the Microsoft Surface, complete with a full-width kickstand and a keyboard cover. The 13.3-inch 3K ASUS Lumina OLED touch screen promises to meet the demanding requirements of creators when it comes to brightness, color accuracy, and precision, supporting an optional ASUS Pen 2.0 stylus for creating digital masterpieces. Unlike the existing line of ASUS ProArt laptops, this tablet and laptop in one is clearly designed for mobility, and the IP52 dust and water resistance rating, along with MIL-STD-810H certification, offer some peace of mind when you need to take the device places.

Of course, the real question will be whether Windows 11 on this new Snapdragon platform will perform just as well as on Intel and AMD processors. Early reviews seem promising, but the final judgment will depend on the compatibility of creators’ tools with this still-rare combination. The ASUS ProArt PZ13 AI-powered detachable laptop will go on sale sometime in the third quarter of the year, with pricing details to be released closer to the product’s launch.

The post ASUS ProArt PZ13 detachable laptop brings Windows on ARM to creators on the go first appeared on Yanko Design.

How BMW’s Designworks Transforms Automotive Design Using AI Tools

AI in the design process reshapes the industry, and BMW’s Designworks studio leads this transformation. AI tools like MidJourney, Runway, and Kaiber enhance how designers generate images, create animations, and develop textures inspired by nature and existing designs. This shift streamlines workflows and democratizes creativity, making it easier for designers to visualize and iterate on ideas quickly.

Designer: BMW Designworks

Revolutionizing Workflow and Democratizing Creativity

These AI tools have become essential in BMW’s Designworks studio. They allow the generation of images, animations, and textures that would otherwise require labor-intensive traditional methods. By leveraging AI, designers can explore creative possibilities quickly and efficiently, significantly reducing the time and effort needed to bring concepts to life.

Traditionally, creating a new design involves numerous steps: sketching, modeling, rendering, and iterating based on feedback. Each of these steps can be time-consuming and resource-intensive. However, with AI, designers can input parameters or a simple sketch and receive multiple fully realized design options within minutes. This rapid iteration capability allows for more experimentation and refinement, leading to higher quality and more innovative designs.

AI democratizes creativity by enabling designers with varying skill sets to contribute effectively to projects. Even those without extensive experience in animation or 3D modeling can produce compelling visual content. This inclusivity fosters a more collaborative environment where team members can present fully visualized ideas during brainstorming sessions, resulting in a more productive and innovative development process.

At BMW’s Designworks, this collaborative spirit is evident. Team members from different disciplines—industrial design, UI/UX, and creative consulting—can all contribute to the design process. AI tools bridge the gap between these disciplines, allowing for a unified creative vision. During brainstorming sessions, designers can bring AI-generated visuals to the table, enabling more effective communication and collaboration. This speeds up decision-making and ensures that the final design is holistic and well-rounded.

Inspiration, Innovation, and Practical Applications

AI-generated content provides unique sources of inspiration that traditional methods often miss. This capability helps designers think outside the box and push the boundaries of conventional design. For instance, AI can blend textures, materials, and shapes in ways that might not occur to a human designer, leading to innovative and unexpected results.

While AI tools are powerful, human designers’ creative input and oversight remain essential. AI serves to augment human creativity, not replace it. Designers at BMW’s Designworks use AI as a starting point, a source of inspiration that can be further refined and developed. This synergy between AI and human creativity ensures the final product is technologically advanced and artistically inspired.

One illustrative example is the use of AI in automotive design. AI tools are employed to design various aspects of BMW vehicles, including both interiors and exteriors, emphasizing the integration of all elements. By visualizing how different materials and textures can be blended, AI helps create cohesive and aesthetically pleasing designs. This approach enhances the vehicle’s visual appeal and aligns with BMW’s commitment to sustainability and innovation.

The practical application of AI tools is evident in various projects at BMW’s Designworks. For example, the design team often demonstrates how tools like MidJourney and Runway create design elements. They might show how an AI-generated texture can be applied to a vehicle interior or how an animated sequence can bring a design concept to life. These demonstrations provide tangible insight into AI’s capabilities and how it enhances the design process.

Technological Advancements, Competitive Advantage, and Future Outlook

AI development is rapid, with new tools and updates released frequently. This requires designers to continually update and integrate these new capabilities into their workflows. Designworks continuously explores and adopts the latest AI technologies to maintain a competitive edge.

The competitive advantage that AI provides BMW is multifaceted. First, the ability to rapidly generate and iterate on design ideas allows BMW to stay ahead of market trends and respond quickly to changing consumer preferences. This agility is crucial in the highly competitive automotive industry, where innovation and timely market entry can be decisive success factors.

Second, the democratization of creativity means that BMW can leverage the full potential of its design team, harnessing diverse perspectives and talents to create more innovative and appealing designs. This inclusive approach fosters a more dynamic and creative work environment and leads to designs that resonate more deeply with a broader range of customers.

Third, integrating AI in the design process enhances the quality and precision of BMW’s vehicles. AI tools can analyze vast amounts of data and generate designs optimized for performance, aesthetics, and sustainability. This results in cars that are not only visually stunning but also technically superior, reinforcing BMW’s reputation for excellence and innovation.

The future outlook for AI in design is promising. As AI tools become more sophisticated, their integration into the design process will become even more seamless. Designers will have access to more advanced capabilities to create more complex and refined designs. This evolution will likely lead to even greater innovation and efficiency in the design process.

BMW’s Designworks studio illustrates how AI can transform the design process. By leveraging advanced tools like MidJourney, Runway, and Kaiber, the studio has improved its workflows, democratized creativity, and fostered a more collaborative environment. AI-generated content provides unique sources of inspiration, while the rapid development of AI technologies ensures that designers can continue to innovate and push the boundaries of conventional design. As AI tools become more integrated into the design process, the future of design at BMW’s Designworks looks brighter and more exciting than ever.

In embracing AI, BMW stays at the cutting edge of automotive design and secures its position as a leader in innovation, creativity, and excellence in the automotive industry.


The Evolution of Smartphones: What Are GenAI Phones?

Generative AI, or GenAI, has been making waves in the software industry for several years, proving its potential to revolutionize various sectors with its ability to generate new content and provide insightful analyses based on existing data. However, it is only recently that this technology has started to transition from software applications to consumer hardware, specifically within the mobile phone market. This transition marks the beginning of a new era in consumer electronics, where smartphones are not only smart in name but are endowed with the capability to perform complex AI tasks that were once reserved for powerful servers.

The term “GenAI smartphone,” or “GenAI phones” for short, began to gain traction in the last six months, emerging prominently in reports from leading market research firms. These devices are distinguished from standard smartphones by their ability to harness large-scale, pre-trained generative AI models to create and modify content directly on the device. This capability isn’t just a marginal upgrade to existing features; it represents a fundamental shift in how mobile technology interacts with users, offering distinctive personalization and functionality directly from one’s hand.

As these GenAI smartphones prepare to enter the market, they promise to redefine user interactions with mobile devices. With the potential to handle tasks ranging from real-time language translation and complex content creation to intuitive personal assistants that understand and predict user needs, GenAI phones aim to set a new standard in mobile computing. This evolution from a communication tool to an intelligent companion underscores a pivotal shift in the mobile industry, driving consumer excitement and industry innovation. As we stand on the brink of this technological leap, it is crucial to understand what precisely a GenAI smartphone is, how it differs from traditional smartphones, and what it promises for the future of mobile technology.

What is a GenAI Phone?

A GenAI phone represents a new category of smartphones that embed generative artificial intelligence (AI) at the core of their functionality, offering previously unimaginable capabilities in a handheld device. These devices integrate large-scale, pre-trained AI models to provide unprecedented personalization and functionality directly from one’s hand.

At the heart of a GenAI phone are AI-driven applications capable of generating original content. Whether it’s composing personalized emails, designing unique artwork, or creating music from simple user prompts, these applications dynamically produce outputs tailored to user interactions. Unlike traditional apps that operate within their confines, AI tools in a GenAI phone are embedded system-wide, enhancing the user experience across all functionalities. This integration ensures that AI capabilities improve everything from the camera and messaging apps to system settings, adapting to the user’s behavior to predict and automate actions like app selection or environmental adjustments.

To power these sophisticated features, GenAI phones are equipped with specialized processors designed for intensive AI tasks. These processors perform billions of operations per second, enabling the device to run complex AI models locally. Processing data on the device speeds up operations by eliminating the latency associated with cloud computing, and it significantly enhances user privacy and security, as sensitive data does not need to be transmitted over the Internet.
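The local-versus-cloud tradeoff described above can be sketched as a simple dispatcher. This is purely illustrative: the function names (`run_local`, `run_cloud`, `dispatch`) are hypothetical and do not correspond to any phone maker’s actual API.

```python
import time

# Illustrative sketch only: how a GenAI phone might route AI tasks.
# All names here are hypothetical, not a real on-device AI API.

def run_local(prompt: str) -> str:
    # On-device inference: no network round-trip,
    # and the prompt never leaves the phone.
    return f"on-device result for {prompt!r}"

def run_cloud(prompt: str, round_trip_s: float = 0.1) -> str:
    # Cloud inference pays at least one network round-trip
    # and transmits the prompt to an external server.
    time.sleep(round_trip_s)  # simulated round-trip delay
    return f"cloud result for {prompt!r}"

def dispatch(prompt: str, supported_on_device: bool) -> str:
    # Prefer on-device execution; fall back to the cloud only
    # when the task exceeds local capabilities.
    if supported_on_device:
        return run_local(prompt)
    return run_cloud(prompt)
```

Everything here is a toy, but it captures the point: keeping the common path local removes the round-trip delay and keeps user data on the device.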

GenAI phones must remain elegantly designed and user-friendly despite the advanced technology embedded within them. Consumers expect their devices to combine functionality with aesthetic appeal, mirroring the sleekness and minimalism of products like the iPad Pro M4. The challenge for manufacturers is integrating these powerful AI capabilities into slim, attractive, and intuitive devices, ensuring that technological advancements enhance rather than complicate the user experience.

What Isn’t a GenAI Phone and Current Market Scenario

Understanding what isn’t a GenAI phone is crucial in distinguishing it from the myriad of smartphones equipped with basic AI functionalities. Although many modern smartphones boast AI capabilities, having AI features doesn’t automatically make a device a GenAI phone. This distinction is necessary to set realistic expectations about a device’s capabilities and to understand the evolution of smartphone technology. A GenAI phone fundamentally differs from standard smartphones because it integrates advanced AI functionalities directly into the device’s core systems and processes data locally rather than relying heavily on cloud computing. This integration means that GenAI phones are equipped with specialized hardware to handle complex AI tasks independently, thereby enhancing privacy and functionality by keeping the processing on the device itself.

In contrast, many smartphones on the market today, often mistaken for GenAI devices, do not meet these criteria. For example, while devices like the iPhone 15 Pro Max offer AI-driven features like facial recognition, predictive text, and enhanced photography tools, these features, although advanced, do not necessarily qualify the device as a GenAI phone. This is because a true GenAI phone not only uses AI for specific tasks but integrates AI deeply across all system operations, processing complex AI tasks entirely on the device. In other words, even though iPhones process a significant amount of data on-device to ensure user privacy, the breadth and independence of AI integration in terms of generative capabilities might not be as extensive as in dedicated GenAI devices.

Similarly, many Android devices boast impressive AI-powered photo editing and voice assistant features; however, these tasks are often processed with the aid of cloud servers, making them less autonomous and, therefore, not true GenAI phones. These smartphones might utilize AI for specific applications like optimizing battery life, managing screen brightness based on ambient conditions, or even offering user behavior-driven app suggestions. Yet, because they lack the hardware to independently process complex AI tasks directly on the device, they fall short of the GenAI classification. The reliance on cloud processing raises concerns about data privacy and limits the device’s functionality when offline or in areas with poor connectivity.

Thus, while many current smartphones are marketed with the allure of AI, only a select few truly qualify as GenAI phones by virtue of their ability to perform sophisticated AI operations natively and independently. Among the notable examples that set the benchmark in this emerging category are the Samsung Galaxy S24 series and the Google Pixel 8 Pro. These devices display the integration of AI at a foundational level, equipped with the necessary hardware to process complex AI tasks directly on the phone. This enables a range of innovative applications, from enhanced image processing to real-time language translation without cloud dependency.
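The distinctions drawn above can be condensed into a toy checklist. This is just a restatement of the article’s working criteria in code, not an official industry taxonomy; the field names are illustrative.

```python
from dataclasses import dataclass

# Toy restatement of the article's GenAI-phone criteria.
# Field names are illustrative, not an industry standard.

@dataclass
class Phone:
    has_ai_silicon: bool       # dedicated hardware for AI workloads
    generates_on_device: bool  # generative models run locally, not in the cloud
    system_wide_ai: bool       # AI embedded across the OS, not in a single app

def is_genai_phone(p: Phone) -> bool:
    # All three criteria must hold; isolated AI features alone
    # (e.g. cloud-backed photo editing) are not enough.
    return p.has_ai_silicon and p.generates_on_device and p.system_wide_ai
```

By this checklist, a phone with impressive but cloud-dependent AI features would not qualify, while devices that process generative tasks natively on dedicated silicon would.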

For instance, the Samsung Galaxy S24 series demonstrates its generative AI capabilities through features that enable sophisticated on-device content creation and personalization, enhancing user interaction in ways that were previously only achievable with server-based computing. Similarly, the Google Pixel 8 Pro leverages its specialized hardware to deliver advanced AI functionalities like next-generation assistant features and more nuanced user engagement through AI, all processed locally on the device. This focus on native processing is a crucial aspect that boosts performance and efficiency and significantly enhances data privacy, a growing concern among consumers. By minimizing data transmission to external servers, these GenAI phones offer a more secure environment for users to enjoy the benefits of AI without compromising their personal information.

Transitioning into the Apple ecosystem, recent developments indicate a significant shift that could redefine the landscape of GenAI phones. Rumors of Apple’s potential partnership with OpenAI and the integration of an advanced Siri capable of leveraging OpenAI’s technologies suggest a significant upgrade in Apple’s AI capabilities. Such a collaboration could bring about a new iteration of Siri that is far more advanced than its current form, potentially incorporating the ability to understand and generate human-like text, engage in more dynamic conversations, and offer personalized suggestions with a higher degree of relevance and context.

If these speculations hold, this move could be a game-changer for the Apple ecosystem, integrating more deeply with iOS, macOS, visionOS and iPadOS. It could enhance the Apple suite of products with a level of AI sophistication previously unseen in its devices. For Apple, known for its tightly integrated ecosystem and emphasis on user privacy, the challenge will be to balance these advanced capabilities with the need to maintain data security, especially considering the potential use of cloud-based processing to support more complex AI tasks.

Apple iPhone Concept

This anticipated development could position Apple to catch up with and potentially surpass its competitors in the race to refine and expand the capabilities of GenAI phones. Integrating such advanced AI could transform how users interact with their devices, making Apple’s ecosystem even more intuitive and integrated and potentially setting a new standard for what smartphones can achieve in personal technology.

The Future of GenAI Phones, Market Impact, and Consumer Adoption

The future of GenAI phones is anticipated to transform our daily interactions with mobile devices radically. Integrating generative AI features into mainstream smartphones is becoming more prevalent as technology advances. Companies like Apple, NVIDIA, Qualcomm, Microsoft, Samsung, and Google are at the forefront of this evolution, actively developing ways to incorporate GenAI capabilities into future models and through updates to existing devices. This push towards more intelligent smartphones will likely enhance how we communicate, create, and interact with our devices daily.

Apple M4 and Snapdragon X Elite

The impact of these developments on the market cannot be overstated. After years of incremental upgrades that have seen diminishing consumer excitement, GenAI phones promise to inject new life into the somewhat stagnant smartphone market. According to insights from Counterpoint Research, incorporating GenAI technologies is expected to boost smartphone sales significantly. Their data projects a notable increase in market share and adoption rates for GenAI smartphones over the next few years, with these advanced devices set to account for a substantial portion of total smartphone shipments by 2027.

This shift is expected to reshape consumer expectations and drive demand for smartphones that are smarter, more intuitive, and capable of independently performing complex tasks. As GenAI phones become more common, they are anticipated to influence a broad spectrum of consumer electronics, setting new standards for functionality and interactivity. Integrating AI into everyday technology promises to make our digital experiences more personalized and efficient, fundamentally changing our relationship with technology.

In essence, the rise of GenAI phones marks a new era in personal technology, where our devices understand and anticipate our needs better than ever. For everyday users, this means smartphones that can offer real-time translations, sophisticated content creation, and proactive personal assistance—all processed locally on the device for faster, more secure interactions. As these technologies mature, they are expected to become integral to our digital lives, making advanced AI not a luxury but a standard component of future smartphones.


Arc Search’s “Call Arc” Feature for iPhone Is A Fun Take on ‘Phone a Friend’

Arc Search, my go-to app for all search needs on macOS and iOS, from The Browser Company, has introduced its latest feature, Call Arc, designed to make voice-activated search fun and useful. Released on May 23, 2024, as part of the V1.13.0 update, Call Arc offers a new way for users to interact with their search queries using voice commands. This feature adds a neat twist to the traditional “phone a friend” concept, turning your phone into an instant answer provider.

Designer: The Browser Company

The new feature enables users to activate voice search by simply holding their iPhone to their ear, simulating a phone call. This method enhances the user experience by making it feel more natural while also aiming to provide faster and more convenient responses. The app responds nearly instantly with search results, accompanied by an animated smiley face, adding a touch of personality to the interaction.

The Browser Company aims to create a more conversational and accessible search experience with Call Arc. This feature builds on the existing voice search capabilities of Arc Search, which were previously made available via the iPhone 15 Pro’s Action button. With Call Arc, initiating a search becomes as intuitive as making a phone call, tapping into a deeply ingrained habit in human behavior.

However, the feature is not without its challenges. In my experience, using Call Arc is mostly seamless; simply opening the app and raising the iPhone to my ear triggers the call automatically, though there were instances where the app asked for confirmation to connect, a minor inconvenience. While single queries were handled well, more complex requests, such as asking Arc to summarize articles into bullet points, led to the app going blank and failing to respond to subsequent queries. This indicates that the feature is still buggy, and user experiences may vary.

In addition, Call Arc can’t open links or show results directly within the app. Instead, it explained in detail how I could search for the information myself. When I asked which browser I should use, it named all the competitors except for Arc Browser.

Call Arc recommends Chrome or Firefox for search and doesn’t mention Arc Browser

Arc Search has made significant strides since its launch in January, with features like “Browse for me,” which compiles information from multiple web pages into a single, user-friendly page. Powered by models from OpenAI and other sources, this feature provides comprehensive responses to user queries.

Regular and meaningful updates are crucial in distinguishing Arc Search from competitors like Safari, Chrome, and other AI assistants. As Apple prepares to roll out its AI-infused updates for Safari and Siri, it will be interesting to see how Arc continues to innovate and improve its core offerings throughout the year.

An intriguing aspect of Arc’s new feature is its choice of a female voice, which aligns with a broader trend in AI voice assistants. Siri, Alexa, Cortana, Google Voice, and even ChatGPT’s voice models have predominantly female voices. This phenomenon is not coincidental but rooted in historical, cultural, and psychological factors. Female voices are often perceived as more soothing, approachable, and helpful, aligning with the roles these assistants are designed to play.

In 2014, Alex Garland’s Ex Machina showed how a lifelike female robot passed the Turing Test and won the heart of a young engineer

The history of female voices in technology can be traced back to early telephone operators and IVR systems, setting a precedent for modern AI assistants. Pop culture, too, has played a significant role, with characters like Samantha from the film “Her” and JARVIS from “Iron Man” shaping our perceptions of AI voices. Studies have shown that users generally find female voices more likable and comforting, which can enhance the user experience and engagement.

Sarang, a colleague who conducted a deep dive analysis, highlighted the importance of recognizing the gender biases present in AI voice assistants. “Why do most AI voice assistants have female voices? How do humans perceive these voices? Why don’t you see that many male AI voice assistants?” he asked. Sarang emphasized that while female AI voices can help normalize female authority and challenge stereotypes, they also risk reinforcing existing biases if not designed and managed carefully. The training data and algorithms behind these voices must be diverse and inclusive to avoid perpetuating harmful stereotypes.

Regarding UI/UX design, the Arc Search app’s Call Arc feature presents a clean and intuitive interface, enhancing user interaction through simplicity and visual appeal. The interface is minimalistic, focusing primarily on essential functions, which reduces cognitive load and allows users to engage with the app effortlessly. This design approach ensures that users can easily navigate the feature without being overwhelmed by unnecessary elements.

A notable UI aspect is the animated smiley face, which adds a friendly and engaging element to the user experience. Similar to Amazon’s Prime smiley logo, this visual indicator conveys a sense of friendliness and approachability. It shows that the app actively listens and responds, creating a more interactive and human-like interaction. The smiley face makes the experience more enjoyable while providing a clear signal that the app is processing user queries.

The call interface is designed to resemble a typical phone call screen, making it familiar and easy to use. A timer at the top of the screen indicates the duration of the interaction, reinforcing the phone call analogy. This familiar design helps users feel comfortable using the feature as it closely mimics the standard phone functions they are already accustomed to.

Key functional buttons like “Speaker” and “End Call” are prominently displayed and easily accessible, mirroring a standard phone call UI. This design choice ensures that users can quickly manage the call without confusion. Additionally, the inclusion of a “Read More” button allows users to access detailed information from the “Browse for me” feature, providing a seamless transition between voice responses and in-depth content.

The background has a gradient of vibrant colors, creating a visually appealing backdrop that enhances the app’s overall aesthetic without distracting from the functional elements. This use of color adds to the app’s modern look and feel, making it attractive and user-friendly.

The design also leverages intuitive gestures, such as raising the phone to the ear to initiate a call. This gesture aligns with natural user behavior, making the feature more seamless and integrated into everyday actions. By incorporating these intuitive interactions, The Browser Company has created a feature that feels innovative and inherently easy to use.

So far, the new Call Arc feature in Arc Search is pretty sleek. Will I use it? I don’t know because, at the moment, it’s a party trick to me, although its true purpose is to provide a more engaging and natural search experience. Despite some bugs, the feature shows promise and reflects The Browser Company’s commitment to pushing the boundaries of AI-powered tools. As AI voice assistants evolve, it is crucial to consider their broader social implications and strive for a balanced and fair representation in their design and implementation.


Why Are Most AI Voices Female? Exploring the Reasons Behind Female AI Voice Dominance

Siri, Alexa, Cortana, Google Voice, ChatGPT 4o: it’s no coincidence that they all have female voices (and sometimes even names). In fact, the title of Spike Jonze’s dystopian AI film “Her” refers to its AI assistant, Samantha. Voiced by Scarlett Johansson, Samantha gave the movie a premise that sounded absurd 11 years ago but feels all too realistic now that OpenAI has announced its voice-based AI model GPT 4o (omni). The announcement was followed by an uproar from Johansson, who claimed the AI sounded a lot like her even though she hadn’t given OpenAI permission to use her voice. Johansson mentioned that she was approached by OpenAI CEO Sam Altman to be the voice of GPT 4o, but declined. Just days before GPT 4o was announced, Altman asked her once again to reconsider, but she still declined. GPT 4o was announced exactly 10 days ago on the 13th of May, and Johansson distinctly recognized the voice as one that sounded quite similar to her own. While many say the voices don’t sound similar, it’s undeniable that OpenAI was aiming for something that sounded like Samantha from Her rather than a more feminine yet mechanical voice like Siri or Google Voice.

All this brings a few questions to mind: Why do most AI voice assistants have female voices? How do humans perceive these voices? Why don’t you see that many male AI voice assistants (and does mansplaining have a role to play here)? And finally, do female voice assistants actually help or harm real women and gender equality in the long run? (Hint: a little bit of both, but the latter seems more daunting.)

AI Voice Assistants: A History

The history of AI voice assistants extends well before 2011, when Siri was first introduced to the world, though many of those earlier instances were fiction and pop culture. Siri debuted as the first voice assistant relying on AI, but you can’t really credit Siri with being the first automated female voice, because for years IVR (Interactive Voice Response) systems dominated phone conversations. Remember the automated voices when you called a company’s service center, like your bank, cable company, or internet provider? Historically, those voices were often female, paving the way for Siri in 2011. In fact, this trend dates back to 1878, with Emma Nutt becoming the first woman telephone operator, ushering in an entirely female-dominated profession. Women operators then naturally set the stage for female-voiced IVR calls. But while IVR calls were predominantly just a set of pre-recorded responses, Siri didn’t blurt out template-ish pre-recorded sentences. She was trained on the voice of a real woman and conversed with you (at least at the time) like an actual human. The choice of a female voice for Siri was influenced by user studies and cultural factors, aiming to make the AI seem friendly and approachable. This decision was not an isolated case; it marked the beginning of a broader trend in the tech industry.

In pop culture, however, the inverse was said to be true. Long before Siri in 2011, JARVIS took the stage in the 2008 movie Iron Man as a male voice assistant. Although somewhat robotic, JARVIS could do pretty much anything, like control every micro detail of Tony Stark’s house, suit, and life… and potentially even go rogue. That aside, studies show something very interesting about how humans perceive female voices.

JARVIS helping control Iron Man’s supersuit

Historically, Robots are Male, and Voice Assistants are Female

The predominance of female voices in AI systems is not a random occurrence. Several factors contribute to this trend:

  • User Preference: Research indicates that many users find female voices more soothing and pleasant. This preference often drives the design choices of AI developers who seek to create a comfortable user experience.
  • The Emotional Connection: Female voices are traditionally associated with helpful and nurturing roles. This aligns well with the purpose of many AI systems, which are designed to assist and support users in various tasks.
  • Market Research: Companies often rely on market research to determine the most effective ways to engage users. Female voices have consistently tested well in these studies, leading to their widespread adoption.
  • Cultural Influences: There are cultural and social influences that shape how voices are perceived. For instance, in many cultures, female voices are stereotypically associated with service roles (e.g., receptionists, customer service), which can influence design decisions.

These are theories and studies, of course, and the flip side is equally interesting: physical robots are often built with male physiques and proportions, given that their typical jobs of lifting objects and moving cargo have traditionally been done by men. Pop culture plays a massive role here too, with the Transformers being predominantly male, as are the Terminator, the T-1000, Ultron, C-3PO, and RoboCop; the list is endless.

What Do Studies Say on Female vs. Male AI Voices?

Numerous studies have analyzed the impact of gender in AI voices, revealing a variety of insights that help us understand user preferences and perceptions. Here’s what these studies reveal:

  • Likability: Research indicates that users generally find female voices more likable. This can enhance the effectiveness of AI in customer service and support roles, where user comfort and trust are paramount.
  • Comfort and Engagement: Female voices are often perceived as more comforting and engaging, which can improve user satisfaction and interaction quality. This is particularly important in applications like mental health support, where a soothing tone can make a significant difference.
  • Perceived Authority: Male voices are sometimes perceived as more authoritative, which can be advantageous in contexts where a strong, commanding presence is needed, such as navigation systems or emergency alerts. However, this perception can vary widely based on individual and cultural differences.
  • Task Appropriateness: The suitability of a voice can depend on the specific task or context. For example, users might prefer female voices for personal assistants who manage everyday tasks, while male voices might be preferred for financial or legal advice due to perceived authority.
  • Cognitive Load: Some research suggests that the perceived ease of understanding and clarity of female voices can reduce cognitive load, making interactions with AI less mentally taxing and more intuitive for users.
  • Mansplaining, A Problem: The concept of “mansplaining” — when a man explains something to someone, typically a woman, in a condescending or patronizing manner — can indirectly influence the preference for female AI voices. Male voices might be perceived as more authoritative, which can sometimes come across as condescending. A male AI voice disagreeing with you or telling you something you already know can feel much more unpleasant than a female voice doing the same thing.

The 2013 movie Her had such a major impact on society and culture that Hong Kong-based Ricky Ma even built a humanoid version of Scarlett Johansson

Do Female AI Voices Help Women Be Taken More Seriously in the Future?

Twenty years ago, it was virtually impossible to predict how addictive and detrimental social media would be to our health. We're now at a similar point with AI, and we should be thinking about its implications. Sure, the obvious discussion is about how AI could replace us, flood the airwaves with misinformation, and make humans dumb and ineffective... but before that, let's focus on the social impact of these voices and what they do for us and the generations to come. There are a few positive impacts to this trend:

  • Normalization of Female Authority: Regular exposure to female voices in authoritative and knowledgeable roles can help normalize the idea of women in leadership positions. This can contribute to greater acceptance of women in such roles across various sectors.
  • Shifting Perceptions: Hearing female voices associated with expertise and assistance can subtly shift societal perceptions, challenging stereotypes and reducing gender biases.
  • Role Models: AI systems with confident and competent female voices can serve as virtual role models, demonstrating that these traits are not exclusive to men and can be embodied by women as well.

However, the impact of this trend depends on the quality and neutrality of the AI’s responses, which is doubtful at best. If female-voiced AI systems consistently deliver accurate and helpful information, they can enhance the credibility of women in technology and authoritative roles… but what about the opposite?

Female AI Voices Running on Male-biased Databases

The obvious problem, however, is that these AI assistants are still, more often than not, coded by men, who may bring their own subtle (or obvious) biases into how these bots operate. Moreover, a vast portion of the text corpora fed into these LLMs (Large Language Models) was created by men. Culture, literature, politics, and science have all been dominated by men for centuries, with women only very recently playing a larger and more visible role in these fields. All of this has a distinct and noticeable effect on how the AI thinks and operates. Giving it a female voice doesn't change that; if anything, it has an unintended negative effect.

There’s really no problem when the AI is working with hard facts… but it becomes an issue when the AI needs to share opinions. Biases can undermine an AI’s credibility, can cause problems by not accurately representing the women it’s supposed to, can promote wrong stereotypes, and even reinforce biases. We’re already noticing the massive spike in the usage of words like ‘delve’ and ‘testament’ because of how often AI LLMs use them – think about all the stuff we CAN’T see, and how it may affect life and society a decade from now.

In 2014, Alex Garland’s Ex Machina showed how a lifelike female robot passed the Turing Test and won the heart of a young engineer

The Future of AI Voice Assistants

I’m no coder/engineer, but here’s where AI voice assistants should be headed and what steps should be taken:

  • Diverse Training Data: Ensuring that training data is diverse and inclusive can help mitigate biases. This involves sourcing data from a wide range of contexts and ensuring a balanced representation of different genders and perspectives.
  • Bias Detection and Mitigation: Implementing robust mechanisms for detecting and mitigating bias in AI systems is crucial. This includes using algorithms designed to identify and correct biases in training data and outputs.
  • Inclusive Design: Involving diverse teams in the design and development of AI systems can help ensure that different perspectives are considered, leading to more balanced and fair AI systems.
  • Continuous Monitoring: AI systems should be continuously monitored and updated to address any emerging biases. This requires ongoing evaluation and refinement of both the training data and the AI algorithms.
  • User Feedback: Incorporating user feedback can help identify biases and areas for improvement. Users can provide valuable insights into how the AI is perceived and where it might be falling short in terms of fairness and inclusivity.
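The "bias detection" point above is abstract, so here's a minimal sketch of what the very simplest version of such a check could look like in Python: counting how often gendered pronouns co-occur with role words in a training corpus. To be clear, the function name, the pronoun sets, and the sentence-level co-occurrence heuristic are all my own illustrative assumptions; real bias audits on LLM training data are far more sophisticated than this toy.

```python
from collections import Counter
import re

def gender_role_counts(corpus, roles):
    """Crude representation-bias probe: count sentences where a role
    word co-occurs with female vs. male pronouns."""
    female = {"she", "her", "hers"}
    male = {"he", "him", "his"}
    counts = {"female": Counter(), "male": Counter()}
    # Split the corpus into rough sentences and scan each one
    for sentence in re.split(r"[.!?]+", corpus.lower()):
        words = set(re.findall(r"[a-z']+", sentence))
        for role in roles:
            if role in words:
                if words & female:
                    counts["female"][role] += 1
                if words & male:
                    counts["male"][role] += 1
    return counts

corpus = "She is a nurse. He is an engineer. He is a doctor. She is an assistant."
print(gender_role_counts(corpus, ["nurse", "engineer", "doctor", "assistant"]))
```

A skewed ratio for a given role (say, "nurse" appearing almost exclusively near female pronouns) would be exactly the kind of imbalance that the diverse-training-data and bias-mitigation steps above are meant to catch and correct.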

AI assistants aren't going anywhere, though there was a time not too long ago when they seemed dead. At the end of 2022, reports claimed Amazon's Alexa division was on track to lose $10 billion and looked like a failed endeavor; that very same month, ChatGPT made its debut. Cut to today, and AI assistants have suddenly become mainstream again, so mainstream that almost every company and startup is looking for ways to integrate AI into its products and services. Siri and GPT-4o are just the beginning of this new female-voice-led frontier... it's important we understand the pitfalls and avoid them before it's too late. After all, if you remember the movie Terminator Salvation, Skynet presented itself with a female face and voice too...

The post Why Are Most AI Voices Female? Exploring the Reasons Behind Female AI Voice Dominance first appeared on Yanko Design.

AI-powered modular mouse has some nifty tricks to level up your presentations

The nature and location of work today have changed considerably, especially after the introduction of work-from-home arrangements, but one thing remains the same: people still hold in-person meetings, which often involve presentations, be it in front of colleagues or clients. Despite how common this activity is, the tools presenters use haven't evolved much beyond teleconferencing equipment. Many of the devices needed for an effective presentation come as separate products, so this concept tries to integrate not just two but four tools into a single design that, at first glance, looks like a normal mouse.

Designers: TianRun Chen, ZiLong Peng, Yanran Zhao, YueHao Liu

Many computer users still reach for a mouse even when working from a laptop. It's almost an indispensable tool for on-the-go knowledge workers, including those who often find themselves speaking in front of a room. Unfortunately, those same people also end up juggling a presenter remote and a laser pointer during presentations, making their work lives needlessly complex. There are some thin, portable mice that try to integrate a laser pointer, but they're still rare, not to mention not very ergonomic.

The OctoAssist concept design has a rather intriguing solution that deconstructs the design of the computer mouse in order to provide more functionality. At its core, it sports a modular design where the main “module” is actually the front third of a conventional mouse, where the buttons would normally be located. This module is actually a touch-sensitive device that you can use on its own as a mini touchpad that supports gestures like pinching and three-finger taps. It can magnetically connect to a “base” that provides the ergonomic shape of a mouse, while potentially also offering additional battery power in its rather large body.

The core module also has a built-in laser pointer and, thanks to its touch-sensitive surface, can be used to easily control presentations with the same hand. It also has a voice recorder so you can have the entire presentation or meeting preserved for documentation purposes. But why stop there when you have today’s ubiquitous AI available to almost everyone? That AI, built into the device, can also summarize the meeting and generate notes in a flash, impressing everyone in the room with your technological wizardry and efficiency.

From a regular office mouse to a miniature touchpad to a presenter to an AI secretary, the OctoAssist offers plenty of features, though perhaps a few too many. The AI-powered summaries and notes are definitely convenient, but they could weigh the core module down, not just with complexity but also with hardware and battery consumption. The concept does offload the AI processing to a connected smartphone, but that can introduce lag and even data loss. Regardless, it's an interesting concept that might even be plausible, presuming a manufacturer sees profitable value in an all-in-one design instead of selling multiple devices that do those tasks separately.


Portable AI device uses camera, projectors, sensors to make you more productive

For better or for worse, depending on where you stand on the debate, artificial intelligence has changed and will continue changing how we create and communicate. Services like ChatGPT, Midjourney, Gemini, and Copilot are pretty popular with those adventurous enough to experiment with AI. We can expect that over the next few years, we'll see more services, gadgets, and devices that help us use the technology and integrate it into our workflows and everyday lives.

Designers: Mingwan Bae, Sohyun An, Junyoung Min, Youngsuh Yoo

Lay is a concept for a portable AI device equipped with a wide-angle camera, a projector, and a sensing module. The 48MP wide-angle camera has a 13mm focal length and can recognize objects and spaces, perform text recognition, and upscale whatever it scans. The 4K UHD projector can cast a screen of up to 30 inches with auto keystone correction, an ultra-short throw distance of under 10cm, and high brightness and contrast. The sensing module, which includes LiDAR, ambient light, and proximity sensors, perceives its surroundings in real time.

The device basically scans your surroundings and then leverages AI to make suggestions and assist with tasks as you're working, drawing, reading, scribbling, building, creating, or just leisurely browsing. It looks like a small spherical robot with a round head that swivels, and you can carry it around and place it on your desk or workspace as it helps smooth out your workflow. It projects onto a surface, which serves as your screen for your different tasks. It can recognize and select text, drawings, photos, and sketches, and all that content and information is synced to your cloud in real time.

The device still seems mostly theoretical, and the specific tasks it can perform or suggest are still a bit vague. But it's an interesting concept for an AI-powered device you can carry around with you, especially if you're a digital nomad. And given the speed at which digital natives and early adopters are exploring AI, it could actually become a real device soon.


Nothing just beat Apple by bringing ChatGPT to all its TWS earbuds… even the older models

London-based tech company Nothing is making waves in the tech world by expanding its integration of ChatGPT, a powerful AI language model, to a wider range of its audio devices. This move comes just a month after the feature debuted on the company’s latest earbuds, the Ear and Ear (a), and their smartphone lineup… and coincidentally, just hours before Google’s I/O event, where the company’s expected to announce an entire slew of AI features and upgrades.

The earlier-than-expected rollout signifies Nothing’s commitment to bringing advanced AI features to everyday tech. This integration isn’t limited to Nothing-branded devices; it extends to their sub-brand CMF as well. Users with older Nothing and CMF earbud models, including the Ear (1), Ear (stick), Ear (2), CMF Neckband Pro, and CMF Buds Pro, will be able to leverage the capabilities of ChatGPT starting May 21st with a simple update to the Nothing X app. It also cleverly pre-empts Apple, which is allegedly working with OpenAI to bring ChatGPT to future models of the iPhone.

Read the Nothing Ear (a) Review here

There’s a caveat, however. To enjoy the benefits of ChatGPT through your Nothing or CMF earbuds, you’ll need to be using them with a Nothing smartphone running Nothing OS 2.5.5 or later. The good news is that activating ChatGPT is a breeze. Once you’ve updated the Nothing X app, you can enable a new gesture feature that allows you to initiate conversations with the AI assistant by simply pinching the stem of your earbuds.

This development signifies a growing trend in the tech industry: embedding AI assistants directly into consumer devices. By offering voice control through earbuds, Nothing is making it easier for users to perform everyday tasks hands-free, like checking the weather or controlling music playback. Imagine asking your earbuds for directions while jogging or requesting a quick weather update during your commute – all without reaching for your phone.

The move comes at a perfect time, right between OpenAI’s GPT-4o announcement, and Google’s I/O event, which will include multiple AI improvements including integration of Gemini AI into a vast variety of Google products as well as with the Pixel hardware lineup.


Google Pixel 8a official: A more affordable way to experience Google’s AI

Even before AI and machine learning became buzzwords, Google was already using these technologies behind the scenes to power services like Search and Google Assistant. In line with recent trends, however, it has started applying and advertising AI for anything and everything, especially on its Pixel devices. AI features, however, are normally accessed either through online services, which raises security and privacy issues, or on the device itself, which requires powerful hardware often available only on more expensive flagships. That's the situation the new Google Pixel 8a is trying to change, offering a more affordable way to access Google's AI-powered features and services for years on end.

Designer: Google

The Pixel 8a is practically the Pixel 8 in both design and spirit. It has the same appearance, though in a slightly smaller size and with one important change in materials: the back of the newer phone is a matte composite instead of the Pixel 8's glass rear. The color options are also slightly different, with the Pixel 8a leaning towards fun, saturated hues like Aloe green and Bay blue. Otherwise, the two are nearly identical, a design language some Pixel fans have grown pretty fond of.

The Pixel 8a even shares the same Tensor G3 processor as the current flagship, though we won’t be surprised if we find out later that it has been dialed down a bit. That said, it still has enough power to support almost all of Google’s AI features on the Pixel, from Circle to Search to Gemini assistant for summarizing pages or notes to removing background noise from recorded video. There will still be some features exclusive to the Pixel 8, of course, but you can already enjoy most of what’s available on the Pixel 8a, especially when it comes to photography.

It will definitely need that AI muscle, because one of the biggest corners Google had to cut was the camera system. Neither the 64MP main camera nor the 13MP ultra-wide has autofocus, and both have slightly lower specs than the Pixel 8's. In other words, the Pixel 8a will rely more heavily on AI and algorithms to compensate for the camera hardware's limitations. There are some other key differences as well, like a slower (but still fast) 18W charging speed.

All in all, you’re getting nearly the same Pixel 8 experience for $200 less, with the Pixel 8a going for $499 for 128GB of storage and $549 for the first-ever 256GB option for a Pixel “a” series. Aside from the camera, none of the “downgrades” are deal-breakers, making the Pixel 8a a very worthwhile investment for the future, especially since the phone will also be getting Android updates for seven years.


Kartell and Philippe Starck team up with A.I. for new furniture collection

There has been a lot of discussion about how artificial intelligence affects designers and design in general, and it will continue to be a hotly debated topic in the next few years. Some believe it heralds the death of the creative industry, while others believe it can help brands and designers streamline processes and foster innovation and experimentation. Italian furniture brand Kartell and French architect and designer Philippe Starck seem to be of the latter school of thought, as they've unveiled their A.I. collection.

Designer: Philippe Starck and Kartell (and A.I.)

This collection features eco-friendly furniture pieces shaped by input from Kartell and Starck and streamlined by A.I., particularly in prototyping and planning. The A.I. helped make the collection more sustainable and optimized the materials used, resulting in reduced waste. Creating eco-friendly products was the ultimate goal, and the combination of design, production, and A.I. helped achieve it.

The A.I. Lounge uses a thermoplastic technopolymer with a mineral filler. It is available in white, black, green, and gray and can be used both indoors and outdoors, or wherever you want to lounge around. The H.H.H (Her Highest Highness), meanwhile, is a chair that should make you sit like royalty: the way its back is shaped forces you to sit as if on a throne. It uses green polycarbonate for its eco-friendly aspect.

The A.I. Console, meanwhile, is a minimalist small table that can be placed in foyers, vestibules, entrances, and hallways, or anywhere you need a small stand for your stuff. It sports a one-legged design and is made from recycled Illy iperEspresso coffee capsules. You can get it in orange, white, gray, or black.
