Volvo scales back its EV goals, will no longer be fully electric by 2030

Over three years after saying it would sell only electric vehicles by 2030, Volvo has lowered its EV ambitions. The automaker now says it will aim for 90 to 100 percent electrified vehicles (including full EVs and plug-in hybrids) by the decade’s end, with the remaining 0 to 10 percent being mild hybrids. Volvo chalked up its revised ambitions to “changing market conditions and customer demands.”

Volvo says it’s still committed to long-term electrification. The automaker has launched five fully electric models since laying out its (now aborted) 2030 goal three years ago: the EX40, EC40, EX30, EM90 and EX90.

The company cites the slower-than-expected rollout of EV charging infrastructure as one factor in its decision. Despite the passage of President Biden’s Bipartisan Infrastructure Law in 2021, which allocated $7.5 billion to support the creation of 500,000 EV charging stations, only seven stations in four states had been built as of March. Reasons for the slow rollout reportedly include a lack of experience at the state transportation agencies in charge of execution and various government requirements (submitting plans, soliciting bids, awarding funds).

The Biden Administration said earlier this year it still expects the US to reach 500,000 charging stations by 2026.

Volvo also cited “additional uncertainties created by recent tariffs on EVs in various markets.” That likely refers to the hit the automaker is taking from manufacturing some models in China. Earlier this year, the White House announced new levies on EVs made in China and batteries sourced from China. (Volvo Cars, formally Volvo Car AB, is majority-owned by China’s Geely Holding.) Forbes reported in May that the China-made EX30, expected to start at around $37,000, would be pushed to over $50,000 after tariffs.

The automaker adjusted its CO2 reduction targets alongside the tweaked timeline. It now aims for a 65 to 75 percent reduction in per-car emissions (compared to a 2018 baseline) by 2030; its previous goal was a hard 75 percent. It also softened its 2025 target from a 40 percent per-car reduction (also against 2018) to a 30 to 35 percent drop.

“We are resolute in our belief that our future is electric,” Jim Rowan, Volvo Cars CEO, wrote in a press release. “An electric car provides a superior driving experience and increases possibilities for using advanced technologies that improve the overall customer experience. However, it is clear that the transition to electrification will not be linear, and customers and markets are moving at different speeds of adoption. We are pragmatic and flexible, while retaining an industry-leading position on electrification and sustainability.”

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/volvo-scales-back-its-ev-goals-will-no-longer-be-fully-electric-by-2030-201059287.html?src=rss

Cheaper Copilot+ PCs are coming with Qualcomm’s 8-core Snapdragon X Plus chip

Qualcomm is moving to make AI PCs more affordable. Following the company’s 12-core Snapdragon X Elite and 10-core Snapdragon X Plus, it unveiled a toned-down eight-core version of the Snapdragon X Plus on Wednesday. The chip includes the same Hexagon neural processing unit (NPU) from the higher-end variants, capable of 45 trillion operations per second (TOPS) for powerful on-device AI.

The 4nm AI-focused chip has a custom Qualcomm Oryon CPU built for “mainstream” (i.e., cheaper) Copilot+ PCs. Its eight cores can reach speeds of up to 3.2GHz, with single-core boosts of up to 3.4GHz. Qualcomm says it enables days-long battery life in laptops.

The chip includes an integrated Adreno GPU, which supports up to three 4K 60Hz monitors or two 5K 60Hz displays. It supports an internal display of up to UHD 120Hz with HDR10.

The chart below shows how the Snapdragon X Plus 8-core’s specs compare to other AI chips in the line:

[Chart: how the Snapdragon X AI chips compare (Image: Qualcomm)]

“Copilot+ PCs, powered exclusively today by Snapdragon X Series platforms, launched the new generation in personal computing, made possible by our groundbreaking NPU,” Qualcomm President and CEO Cristiano Amon wrote in a press release. “We are now bringing these transformative AI experiences, along with best-in-class performance and unprecedented battery life, to more users worldwide with Snapdragon X Plus 8-core. We’re proud to be working with our global OEM partners to restore performance leadership to the Windows ecosystem.”

The first PCs with the 8-core Snapdragon X Plus include laptops from Acer, Dell, HP, Lenovo and others. They’ll be available starting today.

This article originally appeared on Engadget at https://www.engadget.com/computing/cheaper-copilot-pcs-are-coming-with-qualcomms-8-core-snapdragon-x-plus-chip-110013598.html?src=rss

Copilot+ features are coming in November to AI PCs powered by Intel and AMD’s latest chips

Qualcomm’s exclusivity period on Copilot+ PCs is winding down. Microsoft confirmed on Tuesday that Intel’s new 200V processors and AMD’s Ryzen AI 300 series chips will add Copilot+ AI capabilities beginning in November.

Copilot+ PCs include features like Live Captions (real-time subtitle generation, including translations), Cocreator in Paint (prompt-based image generation), Windows Studio Effects image editing (background blurring, eye contact adjustment and auto-framing) and AI tools in Photos. Of particular interest to gamers is Auto Super Resolution, an Nvidia DLSS competitor that upscales resolution and boosts frame rates in real time without a significant performance hit.

The AI PCs will also eventually include Recall, Microsoft’s searchable timeline of PC activity. The feature was delayed to shore up security after initial blowback. (Who’d have thought a history of everything you do on your PC might need to be locked down as tightly as possible?) The company said the revised Recall would start rolling out to beta testers in October.


Intel’s 200V series processors, revealed today, include a powerful neural processing unit (NPU) that supports up to 48 TOPS (trillion operations per second) for locally processed AI models and tools. With up to 32GB of onboard memory, the 200V is “the most efficient x86 processor ever,” according to Intel, drawing up to 50 percent less on-package power than the previous generation.

Microsoft’s Windows and devices lead, Pavan Davuluri, confirmed that Intel’s new chips will support Copilot+. “All designs featuring Intel Core Ultra 200V series processors and running the latest version of Windows are eligible to receive Copilot+ PC features as a free update starting in November,” Davuluri said onstage at Intel’s IFA launch event in Germany.

Meanwhile, according to a Windows blog post, AMD’s Ryzen AI 300 series chips, revealed earlier this summer, will also receive Copilot+ features in November. The NPUs in AMD’s chips can reach up to 50 TOPS for AI performance and have 16 percent faster overall performance than their predecessors.

The first Copilot+ PCs arrived in June, powered by Qualcomm’s Snapdragon X Elite chip. The initial batch of Arm-based PCs included laptops and 2-in-1s from Microsoft, Acer, HP, Lenovo, Samsung, Asus and Dell.

This article originally appeared on Engadget at https://www.engadget.com/ai/copilot-features-are-coming-to-ai-pcs-powered-by-intel-and-amds-latest-chips-190707475.html?src=rss

MLB’s virtual ballpark returns for four regular-season games in September

Major League Baseball’s virtual ballpark is back. A metaverse-style experience for traditional (non-VR) devices, it lets you watch actual games as they're happening in real time, albeit recreated in a 3D environment. MLB will host interactive watch parties in the environment for select games each Wednesday in September.

Like a baseball-centric take on Second Life (for the old folks in the back), it includes 3D avatars corresponding to players’ movements. The plays and athletes’ precise positions are tracked using the same Sony Hawk-Eye cameras used for the league’s Statcast analytics system. The experience sounds tailor-made for headsets like the Vision Pro and Meta Quest, but it’s limited to traditional screens for now. Improbable, a London-based company known for metaverse experiences, created the tech.
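Neither MLB nor Improbable has detailed the pipeline publicly, but the core idea, a live positional feed driving avatar placement in a 3D scene, is simple to sketch. Everything below (the feed format, field names and coordinate conventions) is a hypothetical illustration, not the actual system:

```python
from dataclasses import dataclass

# Hypothetical tracking sample: Hawk-Eye/Statcast-style feeds report player
# positions on the field; here we assume feet, with home plate as the origin.
@dataclass
class TrackingSample:
    player_id: str
    x_ft: float       # toward the first-base side
    y_ft: float       # toward center field
    timestamp_s: float

FEET_TO_METERS = 0.3048

def to_scene_position(sample: TrackingSample) -> tuple[float, float, float]:
    """Place an avatar on the field plane of a 3D scene (meters).

    The real virtual ballpark presumably does far more (full player
    animation); this only shows the position-mapping step.
    """
    return (sample.x_ft * FEET_TO_METERS, 0.0, sample.y_ft * FEET_TO_METERS)

# Example: a shortstop roughly 125 feet from home plate
sample = TrackingSample("player_27", x_ft=-40.0, y_ft=120.0, timestamp_s=3121.4)
print(to_scene_position(sample))
```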

There’s also an audio element, as you can hear the play-by-play and game sounds and chat with other fans in spatial audio. New for this season is a party system that lets you talk directly with your friends. The league is also bringing back a virtual scavenger hunt to keep you interested in case the game is a bore.

MLB’s virtual ballpark debuted during the 2023 season, first for an All-Star exhibition and then for a late regular-season game and a Postseason matchup. This season’s virtual lineup kicks off on Wednesday, with three more games following throughout September:

  • Wednesday, September 4, 6:50PM ET - Tampa Bay Rays vs. Minnesota Twins

  • Wednesday, September 11, 7:45PM ET - Reds vs. Cardinals

  • Wednesday, September 18, 6:35PM ET - Giants vs. Orioles

  • Wednesday, September 25, 6:40PM ET - Rays vs. Tigers

It isn’t clear who this experience was made for, but hey, at least it’s free. You can log into the virtual ballpark using any modern device with a web browser at MLB’s virtual ballpark website. You’ll need to create or log into an MLB account before they let you past the virtual turnstile.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/mlbs-virtual-ballpark-returns-for-four-regular-season-games-in-september-171533897.html?src=rss

Google is rolling out more election-related safeguards in YouTube, search and AI

As the US speeds toward one of the most consequential elections in its 248-year history, Google is rolling out safeguards to ensure users get reliable information. In addition to the measures it announced late last year, the company said on Friday that it’s adding election-related guardrails to YouTube, Search, Google Play and AI products.

YouTube will add information panels above the search results for at least some federal election candidates. The modules, likely similar to those you see when searching the web for prominent figures, will include the candidates’ basic details like their political party and a link to Google Search for more info. The company says the panels may also include a link to the person’s official website (or other channel). As Election Day (November 5) approaches, YouTube’s homepage will also show reminders on where and how to vote.

Google Search will include aggregated voter registration resources from state election offices for all users. Google is sourcing that data through a partnership with Democracy Works, a nonpartisan nonprofit that works with various companies and organizations “to help voters whenever and wherever they need it.”

Meanwhile, the Google Play Store will add a new badge that indicates an app is from an official government agency. The company outlines its requirements for apps that “communicate government information” in a developer help document. Approved applications that have submitted the required forms are eligible for the “official endorsement signified by a clear visual treatment on the Play Store.”

As for generative AI, which can be prone to hallucinations that would make Jerry Garcia blush, Google is expanding its election-related restrictions, which were announced late last year. They’ll include disclosures for ads created or generated using AI, content labels for generated content and embedded SynthID digital watermarking for AI-made text, audio, images and video. Initially described as being for Gemini (apps and on the web), the election guardrails will apply to Search AI Overviews, YouTube AI-generated summaries for Live Chat, Gems (custom chatbots with user-created instructions) and Gemini image generation.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/google-is-rolling-out-more-election-related-safeguards-in-youtube-search-and-ai-190422568.html?src=rss

OpenAI and Anthropic agree to share their models with the US AI Safety Institute

OpenAI and Anthropic have agreed to share AI models — before and after release — with the US AI Safety Institute. The agency, established through an executive order by President Biden in 2023, will offer safety feedback to the companies to improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month.

The US AI Safety Institute didn’t mention other companies tackling AI. But in a statement, a Google spokesperson told Engadget the company is in discussions with the agency and will share more info when it’s available. This week, Google began rolling out updated chatbot and image generator models for Gemini.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” Elizabeth Kelly, director of the US AI Safety Institute, wrote in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The US AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, benchmark tests and best practices for testing and evaluating potentially dangerous AI systems. “Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions,” Vice President Kamala Harris said in late 2023 after the agency was established.

The first-of-its-kind agreement is through a (formal but non-binding) Memorandum of Understanding. The agency will receive access to each company’s “major new models” ahead of and following their public release. The agency describes the agreements as collaborative, risk-mitigating research that will evaluate capabilities and safety. The US AI Safety Institute will also collaborate with the UK AI Safety Institute.

It comes as federal and state regulators try to establish AI guardrails while the rapidly advancing technology is still nascent. On Wednesday, the California state assembly approved an AI safety bill (SB 1047) that mandates safety testing for AI models that cost more than $100 million to develop or require a set amount of computing power. The bill requires AI companies to have kill switches that can shut down the models if they become “unwieldy or uncontrollable.”

Unlike the non-binding agreement with the federal government, the California bill would have some teeth for enforcement. It gives the state’s attorney general license to sue if AI developers don’t comply, especially during threat-level events. However, it still requires one more process vote — and the signature of Governor Gavin Newsom, who will have until September 30 to decide whether to give it the green light.

Update, August 29, 2024, 4:53 PM ET: This story has been updated to add a response from a Google spokesperson.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-and-anthropic-agree-to-share-their-models-with-the-us-ai-safety-institute-191440093.html?src=rss

These robots move through the magic of mushrooms

Researchers at Cornell University tapped into fungal mycelia to power a pair of proof-of-concept robots. Mycelia, the underground fungal networks that sprout mushrooms as their above-ground fruit, can sense light and chemicals and communicate through electrical signals. This makes them a novel component in hybrid robotics that could someday detect crop conditions otherwise invisible to humans.

The Cornell researchers created two robots: a soft, spider-like one and a four-wheeled buggy. The researchers used mycelia’s light-sensing abilities to control the machines using ultraviolet light. The project required experts in mycology (the study of fungi), neurobiology, mechanical engineering, electronics and signal processing.

“If you think about a synthetic system — let’s say, any passive sensor — we just use it for one purpose,” lead author Anand Mishra said. “But living systems respond to touch, they respond to light, they respond to heat, they respond to even some unknowns, like signals. That’s why we think, OK, if you wanted to build future robots, how can they work in an unexpected environment? We can leverage these living systems, and any unknown input comes in, the robot will respond to that.”

The fungal robot uses an electrical interface that (after blocking out interference from vibrations and electromagnetic signals) records and processes the mycelia’s electrophysiological activity in real time. A controller, mimicking a portion of animals' central nervous systems, acts as “a kind of neural circuit.” The team designed the controller to read the fungi’s raw electrical signal, process it and translate it into digital controls, which are then sent to the machine’s actuators.
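The paper describes that pipeline only at a high level, so the sketch below is an illustrative guess at the signal-to-actuation idea: smooth a noisy voltage trace, count threshold crossings as spikes and map the spike rate to a gait command. The threshold, window size and command format are all assumptions, not the Cornell team's actual controller:

```python
import statistics

def smooth(signal_mv, window=5):
    """Simple moving average to suppress vibration/EMI noise."""
    return [
        statistics.mean(signal_mv[max(0, i - window + 1): i + 1])
        for i in range(len(signal_mv))
    ]

def spikes_to_command(signal_mv, threshold_mv=0.5):
    """Count threshold crossings and map the spike rate to a gait speed.

    The real controller works on live electrophysiological recordings;
    this offline version only illustrates the signal-to-actuation idea.
    """
    smoothed = smooth(signal_mv)
    spikes = sum(
        1 for prev, cur in zip(smoothed, smoothed[1:])
        if prev < threshold_mv <= cur
    )
    spike_rate = spikes / len(signal_mv)
    return {"gait_speed": min(1.0, spike_rate * 10)}  # normalized 0..1

# Example: a noisy recording with a few spikes
recording = [0.1, 0.2, 0.9, 0.1, 0.0, 0.8, 1.1, 0.2, 0.1, 0.7]
print(spikes_to_command(recording))
```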

[Diagram: the parts of the fungus-robot hybrid (Image: Cornell University / Science Robotics)]

The pair of shroom-bots successfully completed three experiments, including walking and rolling in response to the mycelia’s signals and changing their gaits in response to UV light. The researchers also successfully overrode the mycelia’s signals to control the robots manually, a crucial component if later versions were to be deployed in the wild.

As for where this technology goes, it could spawn more advanced versions that tap into mycelia’s ability to sense chemical reactions. “In this case we used light as the input, but in the future it will be chemical,” according to Rob Shepherd, Cornell mechanical and aerospace engineering professor and the paper’s senior author. The researchers believe this could lead to future robots that sense soil chemistry in crops, deciding when to add more fertilizer, “perhaps mitigating downstream effects of agriculture like harmful algal blooms,” Shepherd said.

You can read the team’s research paper at Science Robotics and find out more about the project from the Cornell Chronicle.

This article originally appeared on Engadget at https://www.engadget.com/science/these-robots-move-through-the-magic-of-mushrooms-171612639.html?src=rss

Reddit is back up after a 30-minute outage

Reddit is back up after an outage took the site out for half an hour this afternoon. The site appears to have been down across the board, apart from a blank homepage that didn’t contain or point to any content. “We encountered an error,” the website read. Attempting to navigate to any specific subreddit brought up another error: “We were unable to load the content for this page.” However, as of 4:45PM ET, we began seeing subreddits and comments loading again as usual.

The Reddit status page lists the problem as “Degraded Performance for reddit.com,” which was initially flagged as “Investigating.” At 4:32PM ET, the status was updated to “Identified - The issue has been identified and a fix is being implemented.”

At 4:45PM ET, it was updated again to “Monitoring - A fix has been implemented and we are monitoring the results.” However, the site's performance was still labeled as “Degraded.”
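Those updates come from Reddit's public status page, which appears to be a standard Atlassian Statuspage instance. Assuming it follows the usual Statuspage conventions (a machine-readable /api/v2/status.json summary endpoint), you can poll it programmatically instead of refreshing the page; a minimal sketch:

```python
import json
import urllib.request

# Assumes redditstatus.com is a standard Statuspage instance exposing
# the conventional summary endpoint.
STATUS_URL = "https://www.redditstatus.com/api/v2/status.json"

def check_reddit_status() -> str:
    with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
        payload = json.load(resp)
    # Typical Statuspage indicators: "none", "minor", "major", "critical"
    status = payload["status"]
    return f'{status["indicator"]}: {status["description"]}'

if __name__ == "__main__":
    print(check_reddit_status())
```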

Of course, the jokes on social media didn’t take long to start rolling in.

Reddit and Google signed a high-profile deal for the search giant to train its AI on Reddit user data, and search results have been increasingly Reddit-leaning over the past year. Reddit had another major outage earlier this year when it went down for nearly an hour. We’ll keep an eye on the status and update this story accordingly.

Update, August 28, 5PM ET: This story was published as a developing news article about Reddit being down. It was updated after publication with more details about the breadth and length of the outage, and the headline has been changed to reflect the site's current status.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/reddit-is-back-up-after-a-30-minute-outage-203615473.html?src=rss

GameStop pivots to retro gaming at select locations

GameStop is pivoting to retro games at select locations. As the industry moves to digital media — and the retailer struggles to adapt to the shifting landscape (including a short-lived stab at NFTs) — the company is betting on the old school. The GameStop Retro locations will stock physical consoles, discs and cartridges from classic Nintendo, PlayStation, Xbox and Sega platforms.

The retailer announced the GameStop Retro locations in a post on X (Twitter). The company also has a website where you can search for retro-friendly locations within a 100-mile radius. (I found a grand total of one in my city.)
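GameStop hasn't said how the locator works under the hood, but a 100-mile radius search is a textbook great-circle distance filter. Here's a minimal sketch using the haversine formula; the store data is made up and nothing here reflects GameStop's actual implementation:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical store list; the real finder presumably queries a database.
stores = [
    {"name": "Store A", "lat": 40.7128, "lon": -74.0060, "retro": True},
    {"name": "Store B", "lat": 41.8781, "lon": -87.6298, "retro": True},
]

def retro_stores_within(lat, lon, radius_miles=100.0):
    return [
        s for s in stores
        if s["retro"] and haversine_miles(lat, lon, s["lat"], s["lon"]) <= radius_miles
    ]

print(retro_stores_within(40.73, -73.99))  # near NYC: finds Store A only
```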

GameStop lists 18 classic systems supported by its Retro stores, stretching back to the 8-bit glory days of the Nintendo Entertainment System. Here’s the complete list (according to the company’s brief announcement), including their US launch years:

  • NES (1985)

  • SNES (1991)

  • Game Boy (1989)

  • Sega Genesis (1989)

  • PlayStation (1995)

  • Sega Saturn (1995)

  • Nintendo 64 (1996)

  • Sega Dreamcast (1999)

  • PS2 (2000)

  • Game Boy Advance (2001)

  • Nintendo GameCube (2001)

  • Original Xbox (2001)

  • Nintendo DS (2004)

  • Xbox 360 (2005)

  • Nintendo Wii (2006)

  • PS3 (2006)

  • Nintendo Wii U (2012)

  • PS Vita (2012)

You’ll notice that the PSP isn’t among the systems listed. Engadget emailed GameStop to confirm the omission and learn more about the initiative. We’ll update this story if we hear back.

This article originally appeared on Engadget at https://www.engadget.com/gaming/gamestop-pivots-to-retro-gaming-at-select-locations-180704406.html?src=rss

Gemini will soon generate AI images of people again with the upgraded Imagen 3

Google’s generative AI tools are getting some of the boosts the company previewed at Google I/O. Starting this week, the company is rolling out the next-gen version of its Imagen image generator, which reintroduces the ability to generate images of people (after an embarrassing controversy earlier this year). Google’s Gemini chatbot also adds Gems, the company’s take on bots with custom instructions, similar to ChatGPT’s custom GPTs.

Google’s Imagen 3 is the upgraded version of its image generator, coming to Gemini. The company says the next-gen AI model “sets a new standard for image quality” and is built with guardrails to avoid overcorrecting for diversity, like the bizarre historical AI images that went viral early this year.

“Across a wide range of benchmarks, Imagen 3 performs favorably compared to other image generation models available,” Gemini Product Manager Dave Citron wrote in a press release. The tool allows you to guide the image generation with additional prompts if you don’t like what it spits out the first time.

Imagen 3 also includes Google’s SynthID tool to watermark images, making it clear that they’re AI-made and not the genuine article.

[Image: foxes and balloons generated with Google's Imagen 3 model (Image: Google)]

Citron says the ability to generate people will return in the coming days for paid users, months after Google yanked the feature. He says new guardrails will prevent the generation of “photorealistic, identifiable individuals” — a far cry from the problematic deepfakes generated by Elon Musk’s Grok. Also off-limits are children and (as with other image generators) any gory, violent or sexual scenes. The product manager grounds expectations by saying Gemini’s images won’t be perfect, but he promises the company will continue to listen to user feedback and refine accordingly.

Starting this week, the Imagen 3 model will be available for all users, but reintroducing images featuring people will begin with paid users. English-speaking Gemini Advanced, Business and Enterprise users can expect human image generation to return “over the coming days.”

[Image: a Google AI Gem, a custom bot designed to curate cliffhangers (Image: Google)]

Initially previewed at Google I/O 2024, Gems are Google’s custom chatbots with user-created instructions. They’re essentially Gemini’s answer to OpenAI’s custom GPTs, which rolled out late last year.

“With Gems, you can create a team of experts to help you think through a challenging project, brainstorm ideas for an upcoming event, or write the perfect caption for a social media post,” Citron wrote. “Your Gem can also remember a detailed set of instructions to help you save time on tedious, repetitive or difficult tasks.”

In addition to the blank slate of custom Gems, Gemini will include premade ones “to help you get started” and inspire new ideas. Prebuilt Gems include:

  • Learning coach - to help you understand complex topics

  • Brainstormer - to inspire new ideas

  • Career guide - to walk you through skill upgrades, decisions and goals

  • Writing editor - to provide constructive feedback on grammar, tone and structure

  • Coding partner - to help developers level up their skills and start new projects

Gems begin rolling out today on desktop and mobile. However, they’re only available for Gemini Advanced, Business and Enterprise subscribers, so you’ll need a paid plan to check them out.
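Gems live inside the Gemini apps rather than the API, but the underlying idea, a persistent set of custom instructions layered on a base model, maps directly onto the Gemini API's system instructions. Here's a rough sketch of a DIY "Writing editor" using the google-generativeai Python SDK; the model name and prompt are illustrative, and this is not how Google implements Gems:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key

# A "Gem" is, at heart, a reusable set of instructions on top of the base
# model. The public Gemini API exposes the same idea as a system instruction.
writing_editor = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=(
        "You are a writing editor. Give constructive feedback on grammar, "
        "tone and structure. Be specific and keep suggestions brief."
    ),
)

response = writing_editor.generate_content(
    "Please review: 'Me and him goes to the store yesterday.'"
)
print(response.text)
```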

This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-will-soon-generate-ai-images-of-people-again-with-the-upgraded-imagen-3-161429310.html?src=rss