Google’s Gemini-powered photo search arrives in early access

Google’s AI-powered Photos upgrades are beginning to trickle in. Ask Photos, the Gemini-powered chatbot that lets you get ultra-specific and conversational with your photo searches, is launching in early access for select users in the US. In addition, the improved search for more descriptive Google Photos queries begins rolling out today for all English-speaking users.

The upgraded search in Google Photos lets you use more descriptive queries. For example, while you could have searched for “lake” before, you can now enter “kayaking on a lake surrounded by mountains.” Or, instead of merely searching for your friend Alice, you can go with “Alice and me laughing.” The idea is to make it easier to narrow things down as our cloud-based photo libraries grow.
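Google hasn't said how the upgraded search works under the hood, but descriptive queries like these are conceptually similar to embedding-based retrieval, where photos and queries are mapped into a shared vector space and matches are ranked by similarity. The tiny hand-made "embeddings" and file names below are invented purely for illustration:

```python
import math

# Hypothetical sketch: rank photos against a descriptive query by
# cosine similarity between small, made-up embedding vectors.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend each photo has already been encoded into a vector
# (dimensions here loosely stand for water / boats / mountains).
photo_index = {
    "lake_sunset.jpg":     [0.9, 0.1, 0.0],
    "kayak_mountains.jpg": [0.8, 0.6, 0.7],
    "birthday_party.jpg":  [0.0, 0.1, 0.0],
}

def search(query_vec, index, top_k=1):
    ranked = sorted(index, key=lambda name: cosine(query_vec, index[name]),
                    reverse=True)
    return ranked[:top_k]

# A query like "kayaking on a lake surrounded by mountains" would encode
# to a vector emphasizing all three concepts:
query = [0.7, 0.7, 0.8]
print(search(query, photo_index))  # → ['kayak_mountains.jpg']
```

A plain keyword match on "lake" would surface every water photo equally; the vector comparison is what lets the more specific description win out.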

Ask Photos, the Google Photos chatbot the company revealed at I/O in May, takes that further. Powered by Gemini, it adds a new tab at the bottom of the Photos app that lets you ask about anything in your library using natural language.

Google provided examples like “Show me the best photo from each national park I’ve visited,” which uses location data to scour your park photos and some subjective robot judgment to determine a favorite. Other examples the company provided include “What did we eat at the hotel in Stanley?” and “Where did we camp last time we went to Yosemite?”

Like other chatbot features, Ask Photos can respond to follow-up prompts. So, if it misses the mark the first time, you can ask it to tweak its parameters and give it another go.

Google says your Photos data will never be used for advertising. Although humans may review queries, they’ll be disconnected from your Google account, so the reviewers won’t know who typed the input. Real people won’t review Ask Photos’ answers, including photos or videos, unless you provide feedback or (only in rare cases, according to the company) to address abuse.

If you’re in the US, you can sign up for the waitlist to try to get early access to Ask Photos starting today. Meanwhile, Google Photos’ more descriptive search powers are now beginning to roll out for English-speaking users on Android and iOS.

This article originally appeared on Engadget at https://www.engadget.com/ai/googles-gemini-powered-photo-search-arrives-in-early-access-160041679.html?src=rss

Verizon is reportedly near a deal to buy broadband provider Frontier Communications

Verizon is reportedly near a deal to buy fiber provider Frontier Communications. On Wednesday, The Wall Street Journal said that an announcement could come as early as this week, provided discussions don’t “hit any last-minute snags.”

Frontier has a market value of over $7 billion and provides broadband to around three million locations in 25 states. The company would help Verizon boost its Fios fiber network and better compete with AT&T. The carrier has seen slowing wireless revenue and views fiber investment as a growth area. Acquiring companies with existing infrastructure, like Frontier, is potentially less expensive and time-consuming than rolling out its own network.

Based in Dallas, Frontier is currently upgrading its copper landline system to fiber, enabling it to offer a 5Gbps symmetrical plan. The company filed for Chapter 11 bankruptcy in 2020 and pivoted to a “leaner business,” as the WSJ describes it, before concerns arose that it would run out of money before completing its current upgrades.

The FTC sued the company in 2021 for misrepresenting its speeds. Under a 2022 settlement, Frontier was required to stop lying about its internet performance, dole out over $8.5 million and install fiber service in 60,000 California homes over four years.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/verizon-is-reportedly-near-a-deal-to-buy-broadband-provider-frontier-communications-210317747.html?src=rss

Volvo scales back its EV goals, will no longer be fully electric by 2030

Over three years after saying it would sell only electric vehicles by 2030, Volvo has lowered its EV ambitions. The automaker now says it will aim for 90 to 100 percent electrified vehicles (including full EVs and plug-in hybrids) by the decade’s end, with the remaining 0 to 10 percent being mild hybrids. Volvo chalked up its revised ambitions to “changing market conditions and customer demands.”

Volvo says it’s still committed to long-term electrification. The automaker has launched five fully electric models since laying out its (now aborted) 2030 goal three years ago: the EX40, EC40, EX30, EM90 and EX90.

The company cites the slower-than-expected rollout of EV charging infrastructure as one factor in its decision. Despite the passage of President Biden’s Bipartisan Infrastructure Law in 2021, which allocated $7.5 billion to support the creation of 500,000 EV charging stations, only seven stations in four states had been built as of March. Reasons for the slow rollout allegedly include a lack of experience in the state transportation agencies in charge of execution and various government requirements (submitting plans, soliciting bids, awarding funds).

The Biden Administration said earlier this year it still expects the US to reach 500,000 charging stations by 2026.

Volvo also cited “additional uncertainties created by recent tariffs on EVs in various markets.” That likely refers to the hit the automaker is taking from manufacturing some models in China. Earlier this year, the White House announced new levies on EVs made in China and batteries sourced from China. (Volvo’s parent company, Volvo Car AB, is majority-owned by China’s Geely Holding.) Forbes reported in May that the China-made EX30, expected to start at around $37,000, would be pushed to over $50,000 after tariffs.

The automaker adjusted its CO2 reduction targets alongside the tweaked timeline. It now aims for a 65 to 75 percent reduction in per-car emissions (compared to a 2018 baseline) by 2030; its previous goal was a flat 75 percent. It also softened its 2025 target from a 40 percent per-car reduction (also against 2018) to a 30 to 35 percent drop.

“We are resolute in our belief that our future is electric,” Jim Rowan, Volvo Cars CEO, wrote in a press release. “An electric car provides a superior driving experience and increases possibilities for using advanced technologies that improve the overall customer experience. However, it is clear that the transition to electrification will not be linear, and customers and markets are moving at different speeds of adoption. We are pragmatic and flexible, while retaining an industry-leading position on electrification and sustainability.”

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/volvo-scales-back-its-ev-goals-will-no-longer-be-fully-electric-by-2030-201059287.html?src=rss

Cheaper Copilot+ PCs are coming with Qualcomm’s 8-core Snapdragon X Plus chip

Qualcomm is moving to make AI PCs more affordable. Following the company’s 12-core Snapdragon X Elite and 10-core Snapdragon X Plus, it unveiled a toned-down eight-core version of the Snapdragon X Plus on Wednesday. The chip includes the same Hexagon neural processing unit (NPU) from the higher-end variants, capable of 45 trillion operations per second (TOPS) for powerful on-device AI.

The 4nm AI-focused chip has a custom Qualcomm Oryon CPU built for “mainstream” (i.e., cheaper) Copilot+ PCs. Its eight cores can reach speeds of up to 3.2GHz, with boost performance of up to 3.4GHz. Qualcomm says it enables days-long battery life in laptops.

The chip includes an integrated Adreno GPU, which supports up to three 4K 60Hz monitors or two 5K 60Hz displays. It supports an internal display of up to UHD 120Hz with HDR10.

The chart below shows how the Snapdragon X Plus 8-core’s specs compare to other AI chips in the line:

[Image: A chart comparing the different Snapdragon X AI chips (Qualcomm)]

“Copilot+ PCs, powered exclusively today by Snapdragon X Series platforms, launched the new generation in personal computing, made possible by our groundbreaking NPU,” Qualcomm President and CEO Cristiano Amon wrote in a press release. “We are now bringing these transformative AI experiences, along with best-in-class performance and unprecedented battery life, to more users worldwide with Snapdragon X Plus 8-core. We’re proud to be working with our global OEM partners to restore performance leadership to the Windows ecosystem.”

The first PCs with the 8-core Snapdragon X Plus include laptops from Acer, Dell, HP, Lenovo and others. They’ll be available starting today.

This article originally appeared on Engadget at https://www.engadget.com/computing/cheaper-copilot-pcs-are-coming-with-qualcomms-8-core-snapdragon-x-plus-chip-110013598.html?src=rss

Copilot+ features are coming in November to AI PCs powered by Intel and AMD’s latest chips

Qualcomm’s exclusivity period on Copilot+ PCs is winding down. Microsoft confirmed on Tuesday that Intel’s new 200V processors and AMD’s Ryzen AI 300 series chips will add Copilot+ AI capabilities beginning in November.

Copilot+ PCs include features like Live Captions (real-time subtitle generation, including translations), Cocreator in Paint (prompt-based image generation), Windows Studio Effects image editing (background blurring, eye contact adjustment and auto-framing) and AI tools in Photos. Of particular interest to gamers is Auto Super Resolution, an Nvidia DLSS competitor that upscales games in real time for higher resolutions and frame rates without tanking performance.

The AI PCs will also eventually include Recall, Microsoft’s searchable timeline of PC activity. This feature was delayed to enhance security after an initial blowback. (Who’d have thought a history of everything you do on your PC might need to be locked down as tightly as possible?) The company said the revised Recall would start rolling out to beta testers in October.


Intel’s 200V series processors, revealed today, include a powerful neural processing unit (NPU) that supports up to 48 TOPS (trillion operations per second) for locally processed AI models and tools. With up to 32GB of onboard memory, the 200V is “the most efficient x86 processor ever,” according to Intel, with 50 percent lower on-package power consumption.

Microsoft’s Windows and devices lead, Pavan Davuluri, confirmed that Intel’s new chips will support Copilot+. “All designs featuring Intel Core Ultra 200V series processors and running the latest version of Windows are eligible to receive Copilot+ PC features as a free update starting in November,” Davuluri said onstage at Intel’s IFA launch event in Germany.

Meanwhile, according to a Windows blog post, AMD’s Ryzen AI 300 series chips, revealed earlier this summer, will also receive Copilot+ features in November. The NPUs in AMD’s chips can reach up to 50 TOPS for AI performance and have 16 percent faster overall performance than their predecessors.

The first Copilot+ PCs arrived in June, powered by Qualcomm’s Snapdragon X Elite chip. The initial batch of Arm-based PCs include laptops and 2-in-1s from Microsoft, Acer, HP, Lenovo, Samsung, Asus and Dell.

This article originally appeared on Engadget at https://www.engadget.com/ai/copilot-features-are-coming-to-ai-pcs-powered-by-intel-and-amds-latest-chips-190707475.html?src=rss

MLB’s virtual ballpark returns for four regular-season games in September

Major League Baseball’s virtual ballpark is back. A metaverse-style experience for traditional (non-VR) devices, it lets you watch actual games as they're happening in real time, albeit recreated in a 3D environment. MLB will host interactive watch parties in the environment for select games each Wednesday in September.

Like a baseball-centric take on Second Life (for the old folks in the back), it includes 3D avatars corresponding to players’ movements. The plays and athletes’ precise positions are tracked using the same Sony Hawk-Eye cameras used for the league’s Statcast analytics system. The experience sounds tailor-made for headsets like the Vision Pro and Meta Quest, but it’s limited to traditional screens for now. Improbable, a London-based company known for metaverse experiences, created the tech.

There’s also an audio element, as you can hear the play-by-play and game sounds and chat with other fans in spatial audio. New for this season is a party system that lets you talk directly with your friends. The league is also bringing back a virtual scavenger hunt to keep you interested in case the game is a bore.

MLB’s virtual ballpark debuted during the 2023 season, first for an All-Star exhibition and then for a late regular-season game and a Postseason matchup. This season’s virtual lineup kicks off on Wednesday, with three more games following throughout September:

  • Wednesday, September 4, 6:50PM ET - Tampa Bay Rays vs. Minnesota Twins

  • Wednesday, September 11, 7:45PM ET - Reds vs. Cardinals

  • Wednesday, September 18, 6:35PM ET - Giants vs. Orioles

  • Wednesday, September 25, 6:40PM ET - Rays vs. Tigers

It isn’t clear who this experience was made for, but hey, at least it’s free. You can log into the virtual ballpark using any modern device with a web browser at MLB’s virtual ballpark website. You’ll need to create or log into an MLB account before they let you past the virtual turnstile.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/mlbs-virtual-ballpark-returns-for-four-regular-season-games-in-september-171533897.html?src=rss

Google is rolling out more election-related safeguards in YouTube, search and AI

As the US speeds toward one of the most consequential elections in its 248-year history, Google is rolling out safeguards to ensure users get reliable information. In addition to the measures it announced late last year, the company said on Friday that it’s adding election-related guardrails to YouTube, Search, Google Play and AI products.

YouTube will add information panels above the search results for at least some federal election candidates. The modules, likely similar to those you see when searching the web for prominent figures, will include the candidates’ basic details like their political party and a link to Google Search for more info. The company says the panels may also include a link to the person’s official website (or other channel). As Election Day (November 5) approaches, YouTube’s homepage will also show reminders on where and how to vote.

Google Search will include aggregated voter registration resources from state election offices for all users. Google is sourcing that data through a partnership with Democracy Works, a nonpartisan nonprofit that works with various companies and organizations “to help voters whenever and wherever they need it.”

Meanwhile, the Google Play Store will add a new badge that indicates an app is from an official government agency. The company outlines its requirements for apps that “communicate government information” in a developer help document. Approved applications that have submitted the required forms are eligible for the “official endorsement signified by a clear visual treatment on the Play Store.”

As for generative AI, which can be prone to hallucinations that would make Jerry Garcia blush, Google is expanding its election-related restrictions, which were announced late last year. They’ll include disclosures for ads created or generated using AI, content labels for generated content and embedded SynthID digital watermarking for AI-made text, audio, images and video. Initially described as being for Gemini (apps and on the web), the election guardrails will apply to Search AI Overviews, YouTube AI-generated summaries for Live Chat, Gems (custom chatbots with user-created instructions) and Gemini image generation.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/google-is-rolling-out-more-election-related-safeguards-in-youtube-search-and-ai-190422568.html?src=rss

OpenAI and Anthropic agree to share their models with the US AI Safety Institute

OpenAI and Anthropic have agreed to share AI models — before and after release — with the US AI Safety Institute. The agency, established through an executive order by President Biden in 2023, will offer safety feedback to the companies to improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month.

The US AI Safety Institute didn’t mention other companies tackling AI. But in a statement to Engadget, a Google spokesperson said the company is in discussions with the agency and will share more info when it’s available. This week, Google began rolling out updated chatbot and image generator models for Gemini.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” Elizabeth Kelly, director of the US AI Safety Institute, wrote in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The US AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, benchmark tests and best practices for testing and evaluating potentially dangerous AI systems. “Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions,” Vice President Kamala Harris said in late 2023 after the agency was established.

The first-of-its-kind agreement is through a (formal but non-binding) Memorandum of Understanding. The agency will receive access to each company’s “major new models” ahead of and following their public release. The agency describes the agreements as collaborative, risk-mitigating research that will evaluate capabilities and safety. The US AI Safety Institute will also collaborate with the UK AI Safety Institute.

It comes as federal and state regulators try to establish AI guardrails while the rapidly advancing technology is still nascent. On Wednesday, the California state assembly approved an AI safety bill (SB 1047) that mandates safety testing for AI models that cost more than $100 million to develop or that require a set amount of computing power to train. The bill requires AI companies to have kill switches that can shut down the models if they become “unwieldy or uncontrollable.”

Unlike the non-binding agreement with the federal government, the California bill would have some teeth for enforcement. It gives the state’s attorney general license to sue if AI developers don’t comply, especially during threat-level events. However, it still requires one more process vote — and the signature of Governor Gavin Newsom, who will have until September 30 to decide whether to give it the green light.

Update, August 29, 2024, 4:53 PM ET: This story has been updated to add a response from a Google spokesperson.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-and-anthropic-agree-to-share-their-models-with-the-us-ai-safety-institute-191440093.html?src=rss

These robots move through the magic of mushrooms

Researchers at Cornell University tapped into fungal mycelia to power a pair of proof-of-concept robots. Mycelia, the underground fungal network that can sprout mushrooms as its above-ground fruit, can sense light and chemical reactions and communicate through electrical signals. This makes it a novel component in hybrid robotics that could someday detect crop conditions otherwise invisible to humans.

The Cornell researchers created two robots: a soft, spider-like one and a four-wheeled buggy. The researchers used mycelia’s light-sensing abilities to control the machines using ultraviolet light. The project required experts in mycology (the study of fungi), neurobiology, mechanical engineering, electronics and signal processing.

“If you think about a synthetic system — let’s say, any passive sensor — we just use it for one purpose,” lead author Anand Mishra said. “But living systems respond to touch, they respond to light, they respond to heat, they respond to even some unknowns, like signals. That’s why we think, OK, if you wanted to build future robots, how can they work in an unexpected environment? We can leverage these living systems, and any unknown input comes in, the robot will respond to that.”

The fungal robot uses an electrical interface that (after blocking out interference from vibrations and electromagnetic signals) records and processes the mycelia’s electrophysical activity in real time. A controller, mimicking a portion of animals' central nervous systems, acts as “a kind of neural circuit.” The team designed the controller to read the fungi’s raw electrical signal, process it and translate it into digital controls, which are then sent to the machine’s actuators.
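The paper lays out the real pipeline; as a rough, hypothetical sketch of the idea (the actual filters, thresholds and signal values would differ), the flow from raw recording to actuator command might look like:

```python
# Hypothetical illustration of the controller pipeline described above:
# smooth the mycelia's raw electrical recording, count spikes, and map
# the spike count to a gait command. All numbers here are invented.

def moving_average(signal, window=3):
    """Crude low-pass filter standing in for interference rejection."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

def count_spikes(signal, threshold=0.5):
    """Count upward threshold crossings (one per spike)."""
    spikes, above = 0, False
    for v in signal:
        if v > threshold and not above:
            spikes, above = spikes + 1, True
        elif v <= threshold:
            above = False
    return spikes

def command_for(spike_count):
    """Translate fungal activity into a digital control for the actuators."""
    if spike_count >= 3:
        return "fast_gait"
    if spike_count >= 1:
        return "slow_gait"
    return "hold"

raw = [0.1, 0.9, 0.1, 0.8, 0.2, 1.0, 0.1, 0.1]  # simulated recording
print(command_for(count_spikes(moving_average(raw))))  # → slow_gait
```

The manual-override capability the researchers mention would simply bypass this mapping and feed operator commands straight to the actuators.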

[Image: Diagram showing various parts of the fungus-robot hybrid (Cornell University / Science Robotics)]

The pair of shroom-bots successfully completed three experiments, including walking and rolling in response to the mycelia’s signals and changing their gaits in response to UV light. The researchers also successfully overrode the mycelia’s signals to control the robots manually, a crucial component if later versions were to be deployed in the wild.

As for where this technology goes, it could spawn more advanced versions that tap into mycelia’s ability to sense chemical reactions. “In this case we used light as the input, but in the future it will be chemical,” according to Rob Shepherd, Cornell mechanical and aerospace engineering professor and the paper’s senior author. The researchers believe this could lead to future robots that sense soil chemistry in crops, deciding when to add more fertilizer, “perhaps mitigating downstream effects of agriculture like harmful algal blooms,” Shepherd said.

You can read the team’s research paper at Science Robotics and find out more about the project from the Cornell Chronicle.

This article originally appeared on Engadget at https://www.engadget.com/science/these-robots-move-through-the-magic-of-mushrooms-171612639.html?src=rss

Reddit is back up after a 30-minute outage

Reddit is back up after an outage took the site down for half an hour this afternoon. The site appears to have been down across the board, apart from a blank homepage that didn’t contain or point to any content. “We encountered an error,” the website read. Attempting to navigate to any specific subreddit brought up an error: “We were unable to load the content for this page.” However, as of 4:45PM ET, we began seeing subreddits and comments loading again as usual.

The Reddit status page lists the problem as “Degraded Performance for reddit.com”; the incident was initially flagged as “Investigating.” At 4:32PM ET, the status was updated to “Identified - The issue has been identified and a fix is being implemented.”

At 4:45PM ET, it was updated again to “Monitoring - A fix has been implemented and we are monitoring the results.” However, the site's performance was still labeled as “Degraded.”

Of course, the jokes on social media didn’t take long to start rolling in.

Reddit and Google signed a high-profile deal for the search giant to train its AI on Reddit user data, and search results have been increasingly Reddit-leaning over the past year. Reddit had another major outage earlier this year when it went down for nearly an hour. We’ll keep an eye on the status and update this story accordingly.

Update, August 28, 5PM ET: This story was published as a developing news article about Reddit being down. It was updated after publication with more details about the breadth and length of the outage, and the headline has been changed to reflect the site's current status.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/reddit-is-back-up-after-a-30-minute-outage-203615473.html?src=rss