Bose launched a new wireless portable speaker on Tuesday. The SoundLink Home is a relatively small addition to the lineup with “premium sound” and around nine hours of battery life for $219.
The SoundLink Home is quite “mini” for a home-branded speaker: 8.5 inches high, 4.4 inches wide and 2.3 inches deep. It weighs 1.93 lbs (0.88 kg). It shouldn’t be hard to tote it from room to room or find an open spot on a desk or table.
A small size often means compromised audio, but Bose promises its dual passive radiators will produce “deep bass that fills any room.” The company also touts “premium sound” with “great acoustics,” and it has squeezed surprisingly powerful sound into small packages before, as evidenced by its SoundLink Flex lineup (more on that in a second).
Bose
The SoundLink Home forgoes Bose app access, so you’ll need to tweak your source audio if you want to adjust EQ levels. In addition to Bluetooth 5.3 (including multipoint!), it lets you attach a USB-C cable for wired input. The speaker also has a built-in mic to use for voice assistant access or as a speakerphone for calls.
You can use its bundled USB-C cable for charging, too, and Bose says it will go from empty to full in around four hours. You can link it wirelessly with a second unit for a stereo setup.
As its photos indicate, it’s a snazzy-looking little speaker. Its body is made from anodized aluminum, and it has a “high-quality” fabric grille and a built-in stand.
The Bose SoundLink Home is available now exclusively on the company website. You can buy it in gray and silver colorways. The speaker costs $219 and (at least for me) shows shipping available immediately.
Bose SoundLink Flex
Bose
The company recently updated its SoundLink Flex, a pill-shaped portable speaker that’s one of Engadget’s picks for the best Bluetooth speakers. Unlike the first version (and the SoundLink Home), this second-gen model now connects to the Bose app. There, you can make EQ adjustments and store stereo pairing connections with other compatible Bose speakers. The new version supports AAC and aptX audio codecs and comes in a new Alpine Sage colorway.
The new model also gains a shortcut button (similar to the one on the SoundLink Max). Like that model, the button on the Flex is customizable through the app.
The second-gen SoundLink Flex is available now for $149.
This article originally appeared on Engadget at https://www.engadget.com/audio/speakers/the-bose-soundlink-home-brings-premium-audio-to-a-small-and-portable-package-171006213.html?src=rss
Adobe’s updated consumer-focused Elements apps are here. Photoshop Elements 2025 adds new Magic Eraser-style object removal, depth of field adjustments and more. Meanwhile, Premiere Elements 2025 for video creators introduces dynamic titles, color correction tools and a simplified timeline.
The Elements apps, which Adobe debuted 23 years ago, take select features from the high-end professional suites and trickle them down to casual users. They’re like pared-down and easier-to-use versions of Photoshop and Premiere Pro for people who don’t want to learn pro graphic design or video-editing skills. The company also sells them as one-time purchases of $100 each rather than requiring a subscription. (You can also bundle both for $150.) With today’s AI features, the consumer-friendly apps let you do more than ever without much technical know-how.
Photoshop Elements 2025 adds an AI-powered Remove feature similar to the version in the pro Photoshop (along with Google’s Magic Eraser and Apple’s Clean-Up tool). Like those competing versions, Adobe’s tool lets you brush over an object, person or animal, and it removes it, filling in a replacement background.
Elements 2025 also adds a faux portrait mode feature (Depth Blur) for any image. Select a focal point, and Adobe’s AI will add blur to create a sense of depth to simulate a wide-aperture lens. From there, you can tweak the blur strength, focal distance and focal range.
Adobe
A new color correction feature lets you select an area of a photo, pick a new color from a pop-up dial and slide it over until it looks how you want it. Photoshop Elements also has a photo-combining tool that lets you blend a subject from one image and a background from another — creating something new. The app also adds an AI motion effect feature that simulates movement blur for the subject.
Premiere Elements, Adobe’s consumer-level video app, incorporates new AI features, too. A new white balance tool and footage color LUTs (lookup tables) give you user-friendly color curves and presets — making it easier to tweak the overall mood.
Adobe
The video app also adds a simpler timeline. “See video tracks grouped together and audio tracks grouped together for easier navigation, find the editing options you use most in the new Quick Tools menu, lock individual tracks to prevent accidental changes, and more,” Adobe wrote in its press release. In addition, Premiere Elements adds dynamic titles with more text controls, and you can use Adobe Stock title templates without paying extra.
Both Elements apps fully support Apple’s M3 chip “for faster performance on Mac computers.” (Here are the full Windows and macOS system requirements for Photoshop Elements and Premiere Elements.) The pair of apps will also have scaled-down web and mobile app counterparts for editing on the go.
Adobe’s MAX conference starts on October 14. That’s where the pro editor community can learn more about the new AI (and other) features coming to the company’s high-end subscription-based desktop apps.
This article originally appeared on Engadget at https://www.engadget.com/computing/adobe-photoshop-elements-and-premiere-elements-updated-with-new-ai-features-130029684.html?src=rss
On Monday, the National Highway Traffic Safety Administration (NHTSA) fined Cruise, GM’s self-driving vehicle division, $1.5 million. The penalty was imposed for omitting key details from an October 2023 accident in which one of the company’s autonomous vehicles struck and dragged a San Francisco pedestrian.
Cruise is being fined for initially submitting several incomplete reports. The NHTSA’s crash reports require pre-crash, crash and post-crash details, which the company supplied without a critical one: that the vehicle dragged the pedestrian for 20 feet at around 7 MPH, causing severe injuries. Eventually, the company released a 100-page report from a law firm detailing its failures surrounding the accident.
That report states that Cruise executives initially played a video of the accident during October 3 meetings with the San Francisco Mayor's Office, NHTSA, DMV and other officials. However, the video stream was “hampered by internet connectivity issues” that concealed the part where the vehicle dragged the victim. Executives, who the report stated knew about the dragging, also failed to verbally mention that crucial detail in the initial meetings because they wanted to let “the video speak for itself.”
Investigators finally found out about the dragging after the NHTSA asked the company to submit the full video. The government agency says Cruise also amended four other incomplete crash reports involving its vehicles to add additional details.
The NHTSA's new requirements for Cruise include submitting a corrective action plan, along with regular reports covering its total number of vehicles, their miles traveled and whether they operated without a driver. It also has to summarize software updates that affect operation, report citations and observed violations of traffic laws and let the agency know how it will improve safety. Finally, Cruise will have to meet with the NHTSA quarterly to discuss the state of its operations while reviewing its reports and compliance.
The order lasts at least two years, and the NHTSA can extend it to a third year. Reuters reported on Monday that, despite the fine, the NHTSA’s investigation into whether Cruise is taking proper safety precautions to protect pedestrians is still open. Cruise still faces probes by the Department of Justice and the Securities and Exchange Commission.
To say the incident sparked shakeups at Cruise would be an understatement. The company halted its self-driving operations after the accident. Then, last November, the dominoes began to fall: Its CEO resigned, and GM said it would cut its Cruise investment by “hundreds of millions of dollars” and restructure its leadership. Nine more executives were dismissed in December.
Nonetheless, Cruise is trying to rebound under its new leadership. Vehicles with drivers returned to Arizona and Houston this year, and GM said it’s pouring an additional $850 million into it. Earlier this month, it began operating in California again, also with drivers — which, it’s safe to say, is a good thing.
This article originally appeared on Engadget at https://www.engadget.com/transportation/gms-cruise-fined-15-million-for-omitting-details-about-its-gruesome-2023-crash-210559255.html?src=rss
The first of two Sea of Stars content updates for the next year has an official release date. The free Dawn of Equinox, which adds a co-op mode, new combat and other features, arrives on November 12 on all platforms.
Announced in March, Dawn of Equinox adds new game modes and mechanics for our favorite lunar-solar heroes, Valere and Zale. It includes a new local co-op mode that lets you and up to two friends play the entire game together. Each player has independent movement when traversing the world (as long as you stay within the screen’s confines), and there’s a new co-op Timed Hits feature that turns one of the core game’s mechanics into a group effort.
Sabotage Studio
The update also includes Combat 2.0, which adds some fun wrinkles to Sea of Stars’ battles. Mystery Locks adds a new challenge to unlock enemies’ spells the first time you face them. (A corresponding “Reveal” action will appear in some of your party’s special skills.) Combo points also remain after battles, which should open the door to some epic beat-downs on your opening moves in subsequent standoffs. In addition, developer Sabotage Studio says it’s put effort into rebalancing the entire game to reflect the new mechanics and incorporate player feedback.
Other changes include a more action-oriented prologue that ditches the old flashback structure, a bonus cinematic and a relic (game mode) designed for speedrunners. There will also be three difficulty options when starting the game. Finally, it enhances the game’s secret-tracking parrot and adds a French Canadian translation, “for Quebec’s finest Solstice Warriors.”
The new features in Dawn of Equinox will also apply in the upcoming Throes of the Watchmaker DLC. That content will add an all-new storyline next spring in what Sabotage Studio describes as “an encore to Sea of Stars’ original adventure” (perhaps before a full-fledged sequel?). The DLC will send Valere and Zale into a “magical miniature clockwork world threatened by a cursed carnival,” forcing the heroes to adapt their sun and moon magic to the mysterious environment.
Sabotage Studio
Sea of Stars was one of 2023’s biggest surprises, garnering grassroots praise and taking home the hardware for Best Indie Game at last year’s Game Awards. Engadget’s Lawrence Bonk praised the game’s Chrono Trigger vibes earlier this year, calling out its gorgeous pixel art and an overworld map that pays proper tribute to its ’90s RPG inspirations.
Sea of Stars is available now on all major platforms: PC, Switch, PS5/PS4, Xbox Series X/S and Xbox One (including on Game Pass). The full game costs $35, and both big upcoming content updates will be added for free.
This article originally appeared on Engadget at https://www.engadget.com/gaming/sea-of-stars-free-dawn-of-equinox-update-arrives-in-november-164023516.html?src=rss
With macOS Sequoia and iOS 18, Apple has a handy new way to hop between devices while on desktop. iPhone Mirroring shows your phone’s screen on your computer; you can even use your mouse and keyboard to interact with it. Here’s how to set up and get the most out of iPhone Mirroring.
Requirements
First, iPhone Mirroring has several conditions. It only works with Apple Silicon Macs (late 2020 and later) or Intel-based models with the Apple T2 Security Chip (2018 to 2020). Of course, you’ll need to install macOS Sequoia first to use the feature. Any iPhone running iOS 18 will do.
The feature only works when your iPhone is locked (it’s okay if it’s charging or using Standby). If you unlock your iPhone while using iPhone Mirroring, the feature will temporarily disconnect.
Both devices also need Wi-Fi and Bluetooth turned on, and you’ll have to sign in with your Apple Account on each. Your account needs two-factor authentication (using a trusted device or phone number) activated. The feature won’t work if your phone’s Personal Hotspot is active or you’re using AirPlay, Sidecar or internet sharing on your Mac.
How to set up iPhone Mirroring
Screenshot by Will Shanklin for Engadget
Open the iPhone Mirroring app on your Mac. It should already be in your Dock (see the screenshot above), but you can also find it in your Applications folder.
The app starts with a welcome screen. Click “Continue,” then follow the prompt to unlock your iPhone.
Next, approve iPhone notifications on your Mac. This feature shows your handset’s alerts in your Mac’s Notification Center. (When you click an iOS alert on your Mac, it will open the corresponding app in the iPhone Mirroring app.) iPhone notifications on your Mac work even when the iPhone Mirroring app is closed or inactive, or if your phone isn’t nearby.
After approving notifications, a final screen will confirm that iPhone Mirroring is ready. Click the “Get Started” button to start. Once it loads, you’ll see your iPhone’s screen.
Using iPhone Mirroring
First, you may want to resize the iPhone Mirroring app. Apple only gives you three options: actual size, smaller and larger. You can switch between them using keyboard shortcuts: larger (Cmd +), actual size (Cmd 0) and smaller (Cmd -). You can also resize the window from the View menu in your Mac’s menu bar. Dragging the edges of the window to resize it (like with other macOS apps) won’t work here.
In most cases, interacting with your virtual iPhone on your Mac is as simple as mimicking its usual touch gestures with your trackpad and typing in text fields using your Mac’s keyboard.
Screenshot by Will Shanklin for Engadget
Swipe-based gestures for Home, App Switcher and Control Center won’t work on Mac, but they have shortcuts. If you move your pointer to the top of the iPhone Mirroring window, a new area will appear, revealing buttons for the iOS Home Screen (left) and the App Switcher (right). (See the screenshot above.) This area also lets you click-hold and drag the app to reposition it.
You can also go to the Home Screen by clicking on the horizontal bar at the bottom of the app’s window or using the Cmd 1 keyboard shortcut. In addition, Cmd 2 activates the App Switcher, and Cmd 3 triggers a Spotlight search. Or, swipe down with two fingers on your Mac’s trackpad from the iPhone Home Screen (in the Mac app) for Spotlight.
There’s no way to activate the iOS Control Center from your Mac. You also can’t manually change the orientation of the virtual iPhone screen, but it will rotate automatically if you launch a game that starts by default in landscape mode:
Screenshot by Will Shanklin for Engadget
iPhone audio will play on your Mac while using the feature. Some iPhone videos will play in the iPhone Mirroring window, too. However, copyrighted content will be restricted in some cases, so some videos will only be viewable through corresponding macOS apps or desktop browser windows.
Apple’s Universal Clipboard can be useful while using iPhone Mirroring. Copy something on your virtual iPhone, and you can paste it on your Mac, and vice versa. You can also use AirDrop to transfer files between the two devices while using iPhone Mirroring.
iPhone Mirroring will time out if you don’t use the virtual phone for a while. Ditto for if you move your handset away from your computer. If it times out, just follow the app’s prompt to reconnect.
iPhone Mirroring login settings
Screenshot by Will Shanklin for Engadget
You can choose whether to require authentication every time you use iPhone Mirroring. In the Mac app, choose iPhone Mirroring > Settings in the menu bar (or press Cmd ,), and you’ll see a barebones settings screen.
You can choose “Ask Every Time” or “Authenticate Automatically.” The former requires your Mac login password, Touch ID or Apple Watch confirmation to use your virtual iPhone on your desktop. Meanwhile, the latter will log into your phone automatically without authenticating each time.
You can also reset iPhone access in this settings screen. This removes your entire setup, and you’ll need to start the process from scratch the next time you open the iPhone Mirroring app.
If you have more than one iPhone tied to your Apple Account, you can choose which one to use with iPhone Mirroring under Settings > Desktop & Dock on your Mac. If this applies to you, you’ll see the option under the “Use iPhone widgets” section. (If you only have one iPhone under your Apple Account, this option won’t appear.)
This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/how-to-mirror-your-iphone-on-macos-sequoia-130003743.html?src=rss
Square Enix’s terrific Final Fantasy Pixel Remaster series has finally made its way to Xbox. The 1980s and ’90s classics, which arrived on PC and mobile starting in 2021 and Switch and PS4 last year, are now available on Xbox Series X/S.
The Xbox Store sells the six-game series in a $75 bundle ($60 for a limited time). Alternatively, you can buy the individual installments for prices ranging from $12 to $18 (on sale now for $9.59 to $14.39).
The series remasters Final Fantasy I through Final Fantasy VI with updated pixel graphics designed to pop on modern HD displays. The games also have remastered soundtracks (and the option to switch back to the chiptune-tastic originals). You can also choose between updated fonts and the originals.
Square Enix
Gameplay customizations include options to boost experience gains (up to 4x) or turn off random encounters for a breezier play-through. For an even easier run, you can turn on auto-battle.
This article originally appeared on Engadget at https://www.engadget.com/gaming/xbox/the-final-fantasy-pixel-remaster-series-finally-arrives-on-xbox-200713407.html?src=rss
Researchers have spotted an apparent downside of smarter chatbots. Although AI models predictably become more accurate as they advance, they’re also more likely to (wrongly) answer questions beyond their capabilities rather than saying, “I don’t know.” And the humans prompting them are more likely to take their confident hallucinations at face value, creating a trickle-down effect of confident misinformation.
“They are answering almost everything these days,” José Hernández-Orallo, a professor at the Universitat Politècnica de València, Spain, told Nature. “And that means more correct, but also more incorrect.” Hernández-Orallo, the project lead, worked on the study with his colleagues at the Valencian Research Institute for Artificial Intelligence in Spain.
The team studied three LLM families, including OpenAI’s GPT series, Meta’s LLaMA and the open-source BLOOM. They tested early versions of each model and moved to larger, more advanced ones — but not today’s most advanced. For example, the team began with OpenAI’s relatively primitive GPT-3 ada model and tested iterations leading up to GPT-4, which arrived in March 2023. The four-month-old GPT-4o wasn’t included in the study, nor was the newer o1-preview. I’d be curious if the trend still holds with the latest models.
The researchers tested each model on thousands of questions about “arithmetic, anagrams, geography and science.” They also quizzed the AI models on their ability to transform information, such as alphabetizing a list. The team ranked their prompts by perceived difficulty.
The data showed that the chatbots’ portion of wrong answers (instead of avoiding questions altogether) rose as the models grew. So, the AI is a bit like a professor who, as he masters more subjects, increasingly believes he has the golden answers on all of them.
Further complicating things are the humans prompting the chatbots and reading their answers. The researchers tasked volunteers with rating the accuracy of the AI bots’ answers, and found that they “incorrectly classified inaccurate answers as being accurate surprisingly often.” The share of wrong answers that volunteers falsely perceived as right typically fell between 10 and 40 percent.
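To make that statistic concrete, here’s a minimal sketch (using invented ratings, not the study’s actual data) of how a human-misclassification rate like the one above could be computed from volunteer judgments:

```python
# Illustrative sketch only: these ratings are invented, not the study's data.
# Each record pairs the model's actual correctness with a volunteer's judgment.
records = [
    {"model_correct": False, "human_said_correct": True},   # hallucination believed
    {"model_correct": False, "human_said_correct": False},  # hallucination caught
    {"model_correct": True,  "human_said_correct": True},   # correct, recognized
    {"model_correct": False, "human_said_correct": True},   # hallucination believed
    {"model_correct": True,  "human_said_correct": True},   # correct, recognized
]

# Of the answers that were actually wrong, what fraction did humans accept?
wrong = [r for r in records if not r["model_correct"]]
misclassified = sum(r["human_said_correct"] for r in wrong)
rate = misclassified / len(wrong)
print(f"Wrong answers accepted as correct: {rate:.0%}")
```

With these made-up numbers, two of the three wrong answers are accepted, a 67 percent rate; the study’s reported range of 10 to 40 percent would come from the same kind of tally over thousands of real ratings.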
“Humans are not able to supervise these models,” concluded Hernández-Orallo.
The research team recommends AI developers begin boosting performance for easy questions and programming the chatbots to refuse to answer complex questions. “We need humans to understand: ‘I can use it in this area, and I shouldn’t use it in that area,’” Hernández-Orallo told Nature.
It’s a well-intended suggestion that could make sense in an ideal world. But fat chance AI companies oblige. Chatbots that more often say “I don’t know” would likely be perceived as less advanced or valuable, leading to less use — and less money for the companies making and selling them. So, instead, we get fine-print warnings that “ChatGPT can make mistakes” and “Gemini may display inaccurate info.”
That leaves it up to us to avoid believing and spreading hallucinated misinformation that could hurt ourselves or others. For accuracy, fact-check your damn chatbot’s answers, for crying out loud.
This article originally appeared on Engadget at https://www.engadget.com/ai/advanced-ai-chatbots-are-less-likely-to-admit-they-dont-have-all-the-answers-172012958.html?src=rss
Sony’s PlayStation 5 Pro is almost here. The upgraded console is available to pre-order today with a whopping $700 price tag, thanks to new features like a more powerful GPU, better ray tracing and a narrower gap between graphical fidelity and performance.
But if that isn’t far beyond your console budget, you can reserve the pricey powerhouse before its November 7 launch. And, as is to be expected, you'll probably want to do so immediately since the console already sold out in the UK. Here's everything you need to know about the new console and how to pre-order the PS5 Pro.
This article originally appeared on Engadget at https://www.engadget.com/gaming/playstation/sonys-ps5-pro-is-available-to-pre-order-today-115549437.html?src=rss
Although Meta Connect 2024 lacked a marquee high-end product for the holiday season, it still included a new budget VR headset and a tease of the “magic glasses” Meta’s XR gurus have been talking about for the better part of a decade. In addition, the company keeps plowing forward with new AI tools for its Ray-Ban glasses and social platforms. Here’s everything the company announced at Meta Connect 2024.
Orion AR glasses
Meta
Today’s best mixed reality gear — like Apple’s Vision Pro and the Meta Quest 3 — are headsets with passthrough video capabilities. But the tech industry eventually wants to squeeze that tech into something resembling a pair of prescription glasses. We’ll let you judge whether the Orion AR glasses pictured above pass that test, but they’re certainly closer than other full-fledged AR devices we’ve seen.
First, the bad news. These puppies won’t be available this year and don’t have an official release date. A leaked roadmap from last year suggested they’d arrive in 2027. However, Meta said on Wednesday that Orion would launch “in the near future,” so take what you will from that. For its part, Meta says the full-fledged product prototype is “truly representative of something that could ship to consumers” rather than a research device that’s decades away from shipping.
The glasses include tiny projectors to display holograms onto the lenses. Meta describes them as having a large field of view and immersive capabilities. Sensors can track voice, eye gaze, hand tracking and electromyography (EMG) wristband input.
The glasses combine that sensory input with AI capabilities. Meta gave the example of looking in a refrigerator and asking the onboard AI to spit out a recipe based on your ingredients. It will also support video calls, the ability to send messages on Meta’s platforms and spatial versions of Spotify, YouTube and Pinterest apps.
Meta Quest 3S
Meta
This year’s new VR headset focuses on the entry-level rather than early adopters wanting the latest cutting-edge tech. The Meta Quest 3S is a $300 baby sibling to last year’s Quest 3, shaving money off the higher-end model’s entry fee in exchange for cheaper lenses, a resolution dip and skimpier storage.
The headset includes Fresnel lenses, which are familiar to Quest 2 owners, instead of the higher-end pancake ones in Quest 3. It has a 1,832 x 1,920 resolution (20 pixels per degree), a drop from the 2,064 x 2,208 (25 PPD) in the Quest 3. Meta says the budget model’s field of view is also slightly lower.
The Quest 3S starts with a mere 128GB of storage, which could fill up quickly after installing a few of the platform’s biggest games. But if you’re willing to shell out $400, you can bump that up to a more respectable 256GB. (Alongside the announcement, Meta also dropped the 512GB Quest 3 price to $500 from $650.)
The headset may outlast the Quest 3 in one respect: battery life. Meta estimates the Quest 3S will last 2.5 hours, while the Quest 3 is rated for 2.2 hours.
Those ordering the headset will get a special Bat-bonus. Quest 3S (and Quest 3) orders between now and April 2025 will receive a free copy of Batman: Arkham Shadow, the VR action game coming next month.
The Quest 3S is now available for pre-order. It begins shipping on October 15.
Out with the old
To celebrate the arrival of the Meta Quest 3S, Meta is kicking two older models to the curb. The Quest 2 and Quest Pro will be discontinued by the end of the year. The company says sales will continue until inventory runs out or the end of the year, whichever comes first.
The company now views the Quest 3S, with its much better mixed reality capabilities, as the new budget model, so the $200 Quest 2 no longer has a place. The Quest Pro, which never gained much traction with consumers, has inferior cameras and passthrough video compared to the two Quest 3-tier models. The Pro launched two years ago as a Metaverse-centric device — back when the industry was pounding that word as hard as it’s pushing “AI” now. The headset launched at a whopping $1,500 and was later reduced to $1,000.
Ray-Ban smart glasses updates
The Ray-Ban smart glasses’ assistant will now let you set reminders based on objects you see. For example, you could say, “Hey Meta, remind me to buy that book next Monday” to set an alert for something you see in the library. The glasses can also scan QR codes and dial phone numbers from text it recognizes.
Meta’s assistant should also respond to more natural commands. You’ll need to worry less about remembering formal prompts to trigger it (“Hey Meta, look and tell me”). It will let you use more casual phrasing like “What am I looking at?” The AI can also handle complex follow-up questions for more fluid chats with the robot friend living in your sunglasses.
According to Meta, the glasses’ live translation is also getting better. While last year’s version struggled with longer text, the company says the software will now translate larger chunks more effectively. Live translations will arrive in English, French, Italian and Spanish by the end of 2024.
Meta AI updates
Meta
The company said Meta AI now supports voice chats. Although this capability existed before, it was limited to the Ray-Ban glasses.
Meta also partnered with celebrities to help draw customers into its chatbots. That’s right, folks: You can now hear Meta’s chatbot responses in the dulcet tones of the one and only John Cena! Other celebrity voices include Dame Judi Dench, Awkwafina, Keegan Michael Key and Kristen Bell.
Meta’s AI can now edit photos with text prompts, performing tasks like adding or removing objects or changing details like backgrounds or clothes. AI photo editing will be available on Meta’s social apps, including Instagram, Messenger, and WhatsApp.
Meanwhile, Meta’s Llama 3.2 AI model introduces vision capabilities. It can analyze and describe images, competing with similar features in ChatGPT and Anthropic’s Claude.