Meta's AI-powered assistant has been accessible on the Ray-Ban smart glasses for quite some time, but the company will only start rolling it out to its Quest headsets next month. The assistant will still be in experimental mode, however, and its availability will be limited to users in the US and Canada. Meta revealed the update alongside its Llama 3.1 announcement and new Meta AI capabilities.
Users who get access to the assistant in August will be able to put its hands-free controls to the test. The company said Meta AI is replacing the current technology used for Voice Commands on Quest, so it will control the headset whenever people use voice for navigation and answer their questions when they ask for information. They can, for example, ask the assistant for restaurant recommendations for an upcoming trip, or for the weather forecast during their stay and suggestions on how to dress for it.
They will also be able to use the "Meta AI with Vision" feature, which will let them ask the assistant for information on what they're seeing, while using Passthrough on the Quest. Passthrough lets users see their environment through a video feed while watching or doing something else on their headsets. A user can, for instance, ask the assistant to look at what's inside the fridge and suggest what they can cook, or ask for tips on what kind of top would go with a skirt they're holding up, all while watching a YouTube video.
This article originally appeared on Engadget at https://www.engadget.com/metas-ai-assistant-is-coming-to-quest-headsets-in-the-us-and-canada-150033530.html?src=rss
General Motors is putting the autonomous Cruise Origin shuttle van on ice. The company said that the embattled Cruise, of which GM is the majority owner, will now focus on making the next-gen Chevy Bolt. The automaker discontinued the previous Bolt last year due to a shift away from an older battery system but did not reveal plans for a new model at the time.
According to a letter that GM CEO Mary Barra sent to shareholders, the indefinite delay of the shuttle van "addresses the regulatory uncertainty we faced with the Origin because of its unique design." Barra added that the per-unit costs of the next-gen Bolt will be much lower, "which will help Cruise optimize its resources."
GM and Cruise were working on the Origin with Honda. The Origin — which does not have a driver's seat, steering wheel or pedals — was supposed to debut in Japan in 2026.
In October, the California Department of Motor Vehicles suspended Cruise's driverless vehicle permits over safety issues. Earlier that month, a pedestrian in San Francisco was dragged 20 feet by a Cruise vehicle and pinned under it after a hit-and-run by another car pushed her into the robotaxi's path. Cruise later paused all driverless operations before temporarily halting production in November.
According to CNBC, former Cruise CEO Kyle Vogt at one point told staff that hundreds of pre-commercial Origin vehicles had been built. The company has resumed robotaxi operations in Phoenix, Houston and Dallas with human operators on board and is carrying out tests in Dubai. However, it hasn't recommenced operations in San Francisco. It's still under investigation for the October incident there.
Shelving the Origin is not a decision that GM and Cruise would have come to lightly. In GM's second quarter earnings report, the automaker noted that it incurred around $583 million of Cruise restructuring costs. It said these resulted "from Cruise voluntarily pausing its driverless, supervised and manual [autonomous vehicle] operations in the US and the indefinite delay of the Cruise Origin."
On the plus side, resuming work on the Bolt (which will presumably use GM's Ultium battery tech the next time around) could be a boon for GM's bottom line. As of 2023, the Bolt EV and EUV accounted for most of GM's electric vehicle sales. It planned to make around 70,000 of them last year before ceasing production.
This article originally appeared on Engadget at https://www.engadget.com/gm-shelves-the-autonomous-cruise-origin-shuttle-van-144256801.html?src=rss
You can't say Fujifilm is boring. It stuck to APS-C sensors instead of going full-frame like everyone else, while releasing cool and weird models like the X100 VI. That strategy has been refreshing in a conservative industry and undeniably successful.
It also went big by introducing its first medium-format camera seven years ago, the GFX50S. After eight models, the line has proven popular among pro portrait and scenic photographers, a market Fujifilm never really had before. Each has become increasingly sophisticated, with better image quality, faster shooting speeds and improved video.
Now that the company’s flagship $7,500 100-megapixel GFX 100 II has been out for a while and had several firmware updates, I was keen to test the new AF speeds and more. So I went to London to try it out alongside two pro photographer friends who are thinking of buying one.
Body
The original GFX 100 is a gigantic camera, weighing over three pounds with the viewfinder. The GFX 100 II is more manageable at 2.27 pounds, the same as Panasonic’s full-frame S1. Photographers are still likely to be carrying a heavy bag, though, as medium-format GFX lenses are generally bigger and heavier than full-frame glass.
The GFX 100 II also feels more like a full-frame camera than an old-school top-down viewfinder medium format model. It has an updated, modern control layout, with a pair of control dials, a mode dial, a joystick, 14 buttons and a movie/photo switch.
The rear display tilts up, down and to the side, but doesn’t flip out — not a huge deal, as this will never be a vlogging camera. It shines where it counts, though, with a high 2.36 million dot resolution and enough brightness to use in sunlight. The viewfinder, meanwhile, is one of the best on any camera, with an extremely sharp 9.36-million dot resolution and 100 percent magnification.
It’s easy to handle, thanks to the well-placed controls and large grip. The top display, which stays on even when the camera is switched off, shows all the main settings at a glance. I’m not a huge fan of Fujifilm’s overly complicated menu system, but it’s fine once you get used to it.
As with other recent high-end cameras, you get both an SD UHS II card slot and a much faster CFexpress B option. The latter is required for fast burst shooting, as I’ll discuss soon. Battery life is solid, with up to 540 shots on a charge, or about an hour of 8K or 4K 60p recording.
Performance
Steve Dent for Engadget
The GFX 100 II is the fastest medium-format camera to date. You can fire bursts at up to 8 fps with the mechanical shutter enabled and capture about 300 lossless RAW frames before the buffer fills. That’s about 36GB of data, so it requires a fast CFexpress card.
Autofocus wasn’t a strong point on the GFX, but it’s a big step up on this model. The majority of shots in our burst testing were in focus, though it becomes less accurate when the subject is close to the camera. This isn’t a sports camera, obviously, but it still has the best AF I’ve seen on any medium format camera.
Face and eye detection have also improved, usually locking onto the eye and not, say, the eyebrow as the older model did. Fujifilm also brought over the AI subject detection from its recent models, so it now has settings for animals, birds, automobiles, motorcycles, bikes, airplanes and trains.
Nathanael Charpentier for Engadget
The GFX 100 II has a new 5-axis stabilization system with up to eight stops of shake reduction, compared to 5.5 stops before. This is useful for portraits and scenics, letting you shoot down to a quarter second or slower and blur water or people, while keeping the background sharp.
Rolling shutter was pretty abysmal on the original model, and isn’t a lot better here. If you’re taking street photos and want to remain silent, it’s fine if the subject doesn’t move much. For anything else, use the mechanical shutter to avoid some bad skewing.
Image quality
Image quality is this camera’s forte. Naturally, photos are pin sharp thanks to the 102-megapixel sensor. And with 16 bits of color depth in RAW mode, dynamic range is outstanding, right up there with Sony and Nikon. All of that makes it ideal for portraits and landscapes, on top of tasks that benefit from high-resolution, like art preservation.
The GFX 100 II now goes down to ISO 80 instead of 100 to further boost dynamic range. All of that allows photographers to get creative with RAW photos, or tease detail out of highlights and shadows.
It’s not bad at high ISOs either, thanks to the sensor’s backside illumination and dual-gain design. There’s very little noise visible at ISO 6400, and photos are usable up to ISO 12800 if exposure is correct.
The medium format sensor offers incredibly shallow depth of field if you need that for portrait shooting. Combined with a fast lens like the 80 mm f/1.7, it allows for incredible bokeh and subject separation.
For those who prefer to use JPEGs straight out of the camera, it delivers color-accurate images with the perfect amount of in-camera sharpening. That’s ideal for previews or for folks who want to use Fujifilm’s impressive film simulation modes. For the GFX 100 II, Fujifilm introduced a new one called Reala Ace that’s based directly on one of its old negative films. With a punchy, saturated and slightly nostalgic feel, it has become one of my new favorites.
There is one quality issue — the GFX 100 II drops from 16-bit to effectively less than 14-bit color when shooting 8fps bursts in order to reduce throughput. That in itself isn’t a huge problem, but Fujifilm has been cagey about how it markets this, which has rubbed a lot of pro photographers the wrong way.
Video
Steve Dent for Engadget
I’m starting to sound like a broken record, but the GFX 100 II is also Fuji’s best medium format camera for video. It has a host of new modes, most notably 8K. It also offers 6K, 4K/60p and 1080p at 240fps. All those formats can be captured in 12-bit ProRes, along with 10-bit H.265 formats. You also get access to Fujifilm’s excellent F-Log2 capture that boosts dynamic range.
There are some considerable compromises, though. 8K is captured with a 1.53 times crop, reducing the effective sensor size to less than full frame — which negates one of the main medium-format advantages: shallow depth of field. Other resolutions use the full sensor width, but pixel binning reduces sharpness.
Rolling shutter is also an issue at 8K, so be sure not to move the camera much at that resolution. It’s less bothersome at 4K resolutions, likely due to the pixel binning.
All that aside, video from the GFX100 II has a different quality than I’ve seen from most mirrorless cameras. The larger sensor makes it cinematic, especially with some of Fujifilm’s prime lenses. And the 8K video is extremely sharp when downsampled to 4K in DaVinci Resolve.
Realistically though, video is more of a nice-to-have feature for occasional use, as the majority of buyers will certainly be using it for photography.
Wrap-up
Nathanael Charpentier for Engadget
The $7,500 GFX100 II is an impressive medium format camera with improvements in every area compared to the previous model. More importantly, what did my pro photographer friends think and will they buy one? “What’s most noticeable is the evolution of the autofocus compared to the GFX100,” said Nathanael Charpentier. “In our studio we usually work with Sony, and the GFX100 II autofocus is still far from Sony’s level, but it’s a big improvement.
“It’s not a sports camera, it doesn’t have super-fast burst speeds. It’s more for studio portrait work. For certain types of ‘reportage’ like candid wedding shoots, if we really need the extra dynamic range offered by a medium-format camera, I could see using it.” At this point, they’re not planning on buying one due to the high price (and the fact that they just laid down 6,000 euros for an A9 III), but it’s high on their list of future equipment purchases.
Its main competitor is the $8,200 Hasselblad X2D 100C, which has perhaps slightly better color science and image quality — while also bringing a certain prestige with the Hasselblad name. However, the GFX100 II is superior in most other ways, including speeds, autofocus and video. If you really need to nail autofocus in busy or difficult situations, though, full-frame is still best: Sony’s 45-megapixel $6,500 A1 or Nikon’s $3,800 Z8 or $5,500 Z9 (both 45MP as well) are better choices.
This article originally appeared on Engadget at https://www.engadget.com/fujifilm-gfx-100-ii-the-king-of-medium-format-mirrorless-cameras-143009929.html?src=rss
The new Samsung Galaxy devices drop tomorrow, which means today is your last chance to take advantage of pre-order promotions. One of the best deals we've seen comes from Amazon, which is offering a $300 gift card to anyone who pre-orders the Samsung Galaxy Z Fold 6. The bundle is available for $1,900 thanks to a six percent discount on the 512GB model (originally $2,020). You can pick it up in Silver, Navy or Pink.
Samsung announced the Galaxy Z Fold 6 earlier this month, and we've had the chance to test it out. We gave it an 86 in our review due to welcome features like native stylus support and an even lighter chassis. It also uses the Snapdragon 8 Gen 3 chip and has a larger vapor chamber, so there's basically no lag, and it's less likely to overheat. The screen is brighter, with a colorful display, and the device lasted over 20 hours during our video rundown test on the main screen and 25 hours and 19 minutes on the exterior screen.
If you're in the market for something cheaper, check out the Samsung Galaxy Z Flip 6 — a smaller device with some of the same perks. The smartphone is also available for pre-order, with the 512GB model and a $200 Amazon gift card on sale for $1,100. The 512GB Samsung Galaxy Z Flip 6 starts at $1,220 on its own (though both Amazon and Samsung are running pre-order sales on just the phone). Like the Galaxy Z Fold 6, it comes out tomorrow so today is the last day to snag a pre-order deal.
This article originally appeared on Engadget at https://www.engadget.com/its-your-last-chance-for-a-300-amazon-gift-card-when-you-pre-order-the-samsung-galaxy-z-fold-6-141053944.html?src=rss
The first reports of instability issues with the 13th-gen Intel desktop CPUs started popping up in late 2022, mere months after the models came out. Those issues persisted, and over time, users reported dealing with unexpected and sudden crashes on PCs equipped with the company's 14th-gen CPUs, as well. Now, Intel has announced that it finally found the reason why its 13th and 14th-gen desktop processors have been causing crashes and giving out on users, and it promises to roll out a fix by next month.
In its announcement, Intel said that based on extensive analysis of the processors that had been returned to the company, it has determined that elevated operating voltage was causing the instability issues. Apparently, a microcode algorithm — microcode being the low-level firmware instructions that govern how a processor operates — has been sending incorrect voltage requests to the processor.
Intel has now promised to release a microcode patch to address the "root cause of exposure to elevated voltages." The patch is still being validated to ensure that it can address all "scenarios of instability reported to Intel," but the company is aiming to roll it out by mid-August.
As wccftech notes, while Intel's CPUs have been causing issues for users for at least a year and a half, a post on X by Sebastian Castellanos in February put the problem in the spotlight. Castellanos wrote that there was a "worrying trend" of 13th and 14th-gen Intel CPUs having stability issues with Unreal Engine 4 and 5 games, such as Fortnite and Hogwarts Legacy. He also noticed that the issue seems to affect mostly higher-end models and linked to a discussion on Steam Community. The user who wrote the post on Steam wanted to warn those experiencing "out of video memory trying to allocate a rendering resource" errors that their CPU was at fault. They also linked to several Reddit threads with people experiencing the same problem who had determined that the issue lay with their Intel CPUs.
More recently, the indie studio Alderon Games published a post about "encountering significant problems with Intel CPU stability" while developing its multiplayer dinosaur survival game Path of Titans. Its founder, Matthew Cassells, said the studio found that the issue affected end customers, dedicated game servers, developers' computers, game server providers and even benchmarking tools that use Intel's 13th and 14th-gen CPUs. Cassells added that even the CPUs that initially work well deteriorate and eventually fail, based on the company's observations. "The failure rate we have observed from our own testing is nearly 100 percent," the studio's post reads, "indicating it's only a matter of time before affected CPUs fail."
This article originally appeared on Engadget at https://www.engadget.com/intel-has-finally-figured-out-its-long-standing-desktop-cpu-instability-issues-130042083.html?src=rss
Adobe has widely released a new and potentially contentious feature: text-to-image generation for Photoshop powered by Firefly, first teased in April. As with image generators like DALL-E and Midjourney, you can use it to create an image from scratch by typing a description into Photoshop's updated generative AI tool.
I tried it with the text "Dramatic low angle view of a steamship from the 1800s in a storm with large waves and lightning" in multiple styles (anime, watercolor, sketch, realistic) and got decent results. The usual AI art caveats apply, though, particularly with weird details if you look closely. But it certainly created usable results, and you have the benefit of already being inside Photoshop to fix any errors.
Adobe Firefly AI-generated image
Previously, Photoshop's Generative Fill feature only let you add, extend or remove specific parts of an image. Now, you can create images from scratch, then tweak them later. "This really speeds up time to creation," Adobe's Erin Boyce told Engadget in April. "The idea of getting something from your mind to the canvas has never been easier."
The feature is powered by the Firefly Image 3 model, something at the heart of a recent artist backlash against Adobe. Creators were incensed by language in Adobe's recent ToS (terms of service), interpreting it to mean that Adobe could freely use their work to train the company's generative AI models.
In its latest post, however, Adobe stated that it has a "commitment to creator friendly AI" which means "never training on customer content." It promised to take a creator-friendly approach as part of its AI ethics principles of accountability, responsibility and transparency.
Adobe
Along with image generation, Adobe introduced an "Enhance Detail" feature in Photoshop's Generative Fill. For Illustrator, it introduced Generative Shape Fill to add detailed vectors in a designer's unique style (above), Enhanced Text to Pattern (creating customized vector patterns in the artist's style) and Style Reference. It also added a Mockup tool to create "high-quality visual prototypes of art on objects like product packaging," enhanced selection capabilities and more.
This article originally appeared on Engadget at https://www.engadget.com/adobes-photoshop-can-now-generate-ai-images-via-prompts-like-dall-e-or-mid-journey-130018181.html?src=rss
For all its stacked selection of original content, like Fallout, The Boys and Rings of Power, Prime Video has historically offered a cluttered, confusing and less-than-intuitive layout — especially compared to rivals like Netflix. That changes today as Amazon begins rolling out a new Prime Video UI that, in the company’s words, brings “clarity and simplicity back to streaming.”
The Prime Video redesign starts with a streamlined navigation bar that should make it easier to find your way around. To the left, the bar includes the general categories Home, Movies, TV Shows, Sports and Live TV. Immediately to the right, the nav bar continues with a dedicated tab for content bundled with your Prime membership, followed by sections for add-on subscriptions like Max, Paramount+, Crunchyroll and others. There’s a separate section to add new subscriptions — from Amazon’s more than 100 options — straight from the bar.
Meanwhile, a new “hero rotator” below the bar drills down to highlight content available within each selected bar section. It looks similar to rival services, which doesn’t sound like a big deal on paper but should be a welcome change for anyone who’s ever futzed around with the confusing old Prime Video UI.
Amazon
Unsurprisingly, Amazon is adding personalized AI-generated recommendations (“Made for you”) when navigating the bar’s Movies and TV Shows sections. Using the company’s Bedrock AI model, the machine learning recommendations will offer content tips based on your watch history and preferences.
AI will also power new show and movie synopses. Amazon says the change will make browsing their blurbs faster, preventing you from having to scroll around to learn more about a given piece of content.
Finally, Amazon says the UI has new animations, snappier page transitions and zoom effects to make the experience more “frictionless.” On living room devices, video content will auto-play on the hero rotator as you browse around (much like Netflix and other competitors). If you head to the Live TV tab, recommended stations will also play on their own, continuing until you pick something to give your full attention.
The UI update begins rolling out on Tuesday. You can read more in Amazon’s announcement post.
This article originally appeared on Engadget at https://www.engadget.com/prime-video-gets-a-much-needed-ui-overhaul-with-a-new-content-bar-and-ai-recommendations-120019397.html?src=rss
Condé Nast, the media giant that owns The New Yorker, Vogue and Wired, has sent a cease-and-desist letter to AI-powered search startup Perplexity, according to The Information. The letter, sent on Monday, demanded Perplexity stop using content from Condé Nast publications in its AI-generated responses and accused the startup of plagiarism. It comes a month after Forbes took similar action.
Condé Nast CEO Roger Lynch has warned “many” media companies could face financial ruin in the time it would take for litigation against generative AI companies to conclude. Lynch has called upon Congress to take “immediate action.”
Right in the middle of BBQ season, ThermoWorks, maker of the Thermapen, is upgrading its wireless meat probe. The RFX Meat uses radio technology rather than Bluetooth to transmit data. The company explains its “patent-pending sub-GHz RFX wireless technology” provides a more reliable connection with up to 2,132 feet of direct line-of-sight range. When placed inside a grill or smoker, it should work at up to 659 feet of range, ThermoWorks says. The $159 RFX Meat starter kit is available for pre-order. Shipping starts September 10, so, arguably, not quite in time for BBQ season.
The advertising industry can heave a sigh of relief.
Google won’t kill third-party cookies in Chrome after all, the company said on Monday in a blog post. Instead, it’ll introduce a new experience in the browser that will allow users to make informed choices about their web browsing preferences. Killing cookies, Google said, would hurt online publishers and advertisers.
Over the past few years, multiple delays and regulatory hurdles have hit Google’s plans to eliminate third-party cookies. Initially, the company wanted to phase out these cookies by the end of 2022 but pushed the deadline to late 2024 and then to early 2025 because of various challenges and feedback from stakeholders, including advertisers, publishers and regulatory bodies, like the UK’s Competition and Markets Authority (CMA).
The company says it will now focus on giving users more control over their browsing data, including additional privacy controls, like IP Protection in Chrome’s Incognito mode, and ongoing improvements to Privacy Sandbox APIs.
Google’s Pixel 8a is the best Android phone for less than $500, and now it’s even cheaper than usual, making it the best Android phone for less than $450. Like past A-series devices (usually the best cheap Android phones in their time), it takes most of the headline features from last year’s flagship Pixel phone — the Pixel 8, in this case — and puts them in a slightly cheaper design. You still get a bright and vivid OLED display with a smooth 120Hz refresh rate and superb camera performance.
This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-conde-nast-is-the-latest-media-company-to-accuse-ai-search-engine-perplexity-of-plagiarism-111559877.html?src=rss
iRobot unveiled its most advanced and expensive robot vacuum yet on Tuesday. The (deep breath) Roomba Combo 10 Max Robot + AutoWash Dock automatically washes and dries the mopping pad, something you had to do manually on all its previous combo vacs. But at $1,399, many customers will want to wait several generations for the feature to trickle down to models that don’t cost nearly as much as a MacBook Pro.
Cleaning robots exist to automate tasks that are a pain for us, and the Roomba Combo 10 Max Robot expands on that. iRobot says the dock, which contains “premium antimicrobial materials,” can empty its dirt into an enclosed bag, refill the mopping solution tank and clean itself after each pad wash. You can manually run self-cleaning, and its companion app will remind you when it’s time for standard maintenance or a deeper cleaning.
The robot can store dirt and debris for up to 60 days before emptying, and the mopping pad and self-cleaning tank hold up to seven days of water. At least in theory, the Combo 10 Max leaves less work for the user than any other Roomba before it.
iRobot
iRobot says the new Roomba can seamlessly transition from vacuuming carpet to mopping floors, automatically boosting its suction power when it detects carpets. It can then move back and forth with consistent pressure and deeper scrubbing when it senses that it’s time to mop.
The combo vacuum is designed to retract its entire mopping system when it reaches carpet, “lifting its mop pad to the top of the robot to keep even high-pile carpets fresh and dry.” Meanwhile, it can vacuum and mop simultaneously on hard floors.
While other Roomba models have been able to sense particularly messy areas, the Combo 10 Max adds a camera to “visually pinpoint dirt on the floor.” The company claims this allows it to recognize the dirtiest spots up to eight times more frequently, making multiple passes on those areas more efficiently.
Like other models, the robot cleaner can map your home, but iRobot says it can do so seven times faster than other models while automatically labeling each room type. Its software can even use past cleaning information to predict each room’s cleanliness, proceeding accordingly.
iRobot
The robot works with Alexa, Siri and Google Assistant, and iRobot expects it to be Matter-enabled by the end of 2024. That should cover just about every type of smart home. Of course, it includes the company’s memorably branded Pet Owner Official Promise (P.O.O.P.). It provides a free device replacement if the robot accidentally plows through pet waste and ruins your day.
The Roomba Combo 10 Max is available for pre-order today on iRobot’s website in the US and Canada. (It’s also available to reserve in Europe and will launch there in “the coming months.”) However, as marvelous as the technological cleaning wonders sound, its $1,399 cost of admission prices it out of everything but the most well-heeled homes.
This article originally appeared on Engadget at https://www.engadget.com/irobots-newest-cleaning-machine-is-the-first-to-wash-and-dry-its-mopping-pad-for-you-110100150.html?src=rss
Google won’t kill third-party cookies in Chrome after all, the company said on Monday. Instead, it will introduce a new experience in the browser that will allow users to make informed choices about their web browsing preferences, Google announced in a blog post. Killing cookies, Google said, would adversely impact online publishers and advertisers. This announcement marks a significant shift from Google's previous plans to phase out third-party cookies by early 2025.
“[We] are proposing an updated approach that elevates user choice,” wrote Anthony Chavez, vice president of Google’s Privacy Sandbox initiative. “Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time. We're discussing this new path with regulators, and will engage with the industry as we roll this out.”
Google will now focus on giving users more control over their browsing data, Chavez wrote. This includes additional privacy controls like IP Protection in Chrome's Incognito mode and ongoing improvements to Privacy Sandbox APIs.
Google’s decision provides a reprieve for advertisers and publishers who rely on cookies to target ads and measure performance. Over the past few years, the company’s plans to eliminate third-party cookies have been riding on a rollercoaster of delays and regulatory hurdles. Initially, Google aimed to phase out these cookies by the end of 2022, but the deadline was pushed to late 2024 and then to early 2025 due to various challenges and feedback from stakeholders, including advertisers, publishers, and regulatory bodies like the UK's Competition and Markets Authority (CMA).
In January 2024, Google began rolling out a new feature called Tracking Protection, which restricts third-party cookies by default for 1% of Chrome users globally. This move was perceived as the first step towards killing cookies completely. However, concerns and criticism about the readiness and effectiveness of Google's Privacy Sandbox, a collection of APIs designed to replace third-party cookies, prompted further delays.
The CMA and other regulatory bodies have expressed concerns about Google's Privacy Sandbox, fearing it might limit competition and give Google an unfair advantage in the digital advertising market. These concerns have led to extended review periods and additional scrutiny, complicating Google's timeline for phasing out third-party cookies. Shortly after Google’s Monday announcement, the CMA said that it was “considering the impact” of Google’s change of direction.
This article originally appeared on Engadget at https://www.engadget.com/google-isnt-killing-third-party-cookies-in-chrome-after-all-202031863.html?src=rss