When the earliest users of Apple's Vision Pro get their headsets in February, they'll find a few of the most popular entertainment apps missing from the system's app store. According to Bloomberg, Google's YouTube and Spotify currently have no plans to develop applications for visionOS, the device's platform. A YouTube representative also told the publication that it's not going to make its iPad app available for download on the headset for now. "YouTube users will be able to use YouTube in Safari on the Vision Pro at launch," the spokesperson said. As for Spotify, a source told the publication that it doesn't intend to make its iPad app downloadable on the Vision Pro, either.
As MacStories noted in a report listing popular apps that will be compatible with the headset at launch, apps for the iPhone and iPad will automatically show up on the device's store by default. Developers have to opt out of making their apps downloadable on the Vision Pro. It's unclear why YouTube and Spotify have chosen not to make their apps available on the headset, but they're not the only ones. Bloomberg previously reported that Netflix won't be releasing a dedicated app for the Vision Pro either. In addition, Netflix told the publication that subscribers will have to access its service from a browser on the device, which means its iPad app won't be downloadable. Based on MacStories' report, Meta's Instagram and Facebook might also be missing from the Vision Pro's app store.
These companies may have chosen to wait and see whether it's worth committing resources to a dedicated app for the $3,500 headset. They may also be worried about having to deal with potential issues that Vision Pro users could encounter if they use the iPad versions of the apps on a device from a totally different category. That said, the first Vision Pro users will still have plenty of entertainment apps to choose from, including Disney+, which is giving users access to special immersive environments that can serve as backdrops for its shows.
This article originally appeared on Engadget at https://www.engadget.com/apples-vision-pro-wont-have-access-to-youtube-and-spotify-apps-at-launch-083434306.html?src=rss
The Rabbit R1 launch at CES left many questions unanswered, but earlier today, the brand finally shed light on which LLM (large language model) will be powering the device's interactions with us mere mortals. The AI provider in question is none other than Perplexity, a San Francisco-based startup with ambitions to overtake Google in the AI space, so it's no wonder it has already received investments from the likes of NVIDIA and Jeff Bezos.
Perplexity will provide up-to-date search results via Rabbit's $199 orange brick — without the need for a subscription. That said, the first 100,000 R1 buyers will receive a one-year Perplexity Pro subscription — normally costing $200 — for free. This advanced service adds file upload support, a daily quota of over 300 complex queries and the ability to switch to other AI models (GPT-4, Claude 2.1 or Gemini), though these perks don't necessarily apply to the R1's use case.
In case you were wondering:
Today we announced that we will use Perplexity as one of our key LLM services for r1 – and r1 still does not require any subscription to benefit from this partnership.
The $200 credit for Perplexity Pro is a standalone bonus kindly offered by… https://t.co/qYMM7TKFyZ
The Rabbit R1, designed by Teenage Engineering, features a 2.88-inch touchscreen, a scroll wheel, two mics, a speaker, a rotational camera and a "Push-to-Talk" button. By leveraging its Large Action Model (LAM), this dedicated gadget can perform tasks like booking rides, finding recipes based on the ingredients you have, identifying people and objects (including items in, say, your fridge), or just fact checking — which we now know will rely on Perplexity's real-time search engine. The R1 is available for pre-order now ahead of shipment in March or April.
This article originally appeared on Engadget at https://www.engadget.com/the-rabbit-r1-will-offer-up-to-date-answers-powered-by-perplexitys-ai-031313883.html?src=rss
Microsoft is rolling out Reading Coach as a standalone app, expanding its tools for educators in Microsoft Teams. The new app will be part of its Reading Progress suite designed to help students improve literacy in the classroom and at home. The tool will use artificial intelligence to provide users with personalized feedback on how to improve reading scores, as well as specific suggestions for things like pronunciation. It will be free to anyone with a Microsoft account.
With prolonged use, the AI tool will flag specific words that a reader frequently mispronounces or misunderstands during reading sessions. To keep students engaged, the program will also ask a reader to choose prompts that can change a storyline as they progress.
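Microsoft hasn't published how Reading Coach tracks this internally, but the core idea of flagging words a reader repeatedly misses can be illustrated with a minimal, purely hypothetical sketch (class name, threshold, and the assumption that a speech model has already detected each mispronunciation are all ours):

```python
from collections import Counter

# Hypothetical sketch only: Reading Coach's internals aren't public.
# Assumes an upstream speech model already reports which words were
# mispronounced in each session; we just accumulate and flag them.

class MispronunciationTracker:
    def __init__(self, threshold=3):
        self.threshold = threshold  # misses before a word is flagged
        self.misses = Counter()     # word -> cumulative miss count

    def record_session(self, mispronounced_words):
        """Add one reading session's mispronounced words to the tally."""
        self.misses.update(w.lower() for w in mispronounced_words)

    def flagged_words(self):
        """Words the reader has missed at least `threshold` times."""
        return sorted(w for w, n in self.misses.items() if n >= self.threshold)

tracker = MispronunciationTracker(threshold=2)
tracker.record_session(["thorough", "colonel"])
tracker.record_session(["colonel", "receipt"])
print(tracker.flagged_words())  # -> ['colonel']
```

Carrying the counter across sessions is what distinguishes "frequently mispronounces" from a one-off stumble, which is the behavior the article describes.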
Microsoft says teachers can integrate the program in classrooms through learning platforms starting in the spring, but the tool is available to educators in preview this month. Teachers will be able to track how students feel about assignments using the Reflect tool within the program. This kind of feedback might help an educator determine which assignments students feel most excited about and which lessons might not be working. Beyond tracking student performance, the new features for Microsoft’s Teams for Education suite will help teachers generate content for lessons, such as passages and assignments for a student to engage with.
Microsoft also introduced new features for its Teams for Education app, which is designed to help educators tailor content for digital learning platforms. The Classwork tool will use AI to emphasize particular messages in an assignment’s instructions, according to an educator's particular goals for that lesson. The Assignments tool will use AI to streamline the rubric generating process. Outlines can be tailored by a teacher based on grade level, evaluation scale or other factors.
This article originally appeared on Engadget at https://www.engadget.com/microsofts-tool-for-ai-reading-lessons-is-now-a-standalone-app-230520756.html?src=rss
Instagram has revealed its latest mindfulness feature targeted at teens. When a younger user scrolls for more than 10 minutes in the likes of Reels or their direct messages, the app will suggest that they close the app and get to bed.
These "Nighttime Nudges" will automatically appear on teens' accounts and it won't be possible to switch them off. Instagram didn't specify whether the feature will be enabled for all teenagers or only under-18s.
The idea, according to Instagram, is to nudge teens who aren't already using features such as Take a Break to close the app for the night. "We want teens to leave Instagram feeling like the time they spend on the app is meaningful and intentional, and we know sleep is particularly important for young people," Instagram said.
The new tool follows other features Instagram has rolled out to help teens and their parents manage time spent on the app. Along with Take a Break and parental supervision features, this includes the likes of Quiet Mode. The latter enables teens to mute notifications, automatically reply to messages and let their friends and followers know that they're unavailable and doing something else, such as studying or sleeping.
This article originally appeared on Engadget at https://www.engadget.com/instagram-will-start-telling-night-owl-teens-to-close-the-app-and-go-to-sleep-152600078.html?src=rss
Samsung’s big Unpacked event yesterday unabashedly focused on the company’s annual flagship phone refresh. No smart speakers, no tablets, no wearables (pretty much…) just three more phones, each with entirely different unique features. Just kidding: It’s mostly just changes to cameras and screen size. Same as it’s been since the Galaxy S20.
While introducing the Galaxy S24, S24+ and S24 Ultra, the company wheeled out streamer and YouTuber Pokimane to cheerlead the even brighter screens, while MrBeast — who Samsung couldn’t afford to have there in person? — showcased some of the camera tricks and specs of the flagship S24 Ultra.
However, beyond the predictable spec bumps, Samsung went to town on AI features this year. And they’re intriguing, inching beyond what Google’s been doing on its Pixel series for years.
Sure, there are photography-augmenting features, with the S24 sniffing out unwanted reflections and shadows, but now generative AI will power auto-fill features, extending the background of shots to help recompose wonky photos. With video, a new feature will use AI to generate more frames to create slow-mo clips not actually captured in slow motion.
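Samsung hasn't detailed the model behind this slow-mo feature; learned video interpolation networks synthesize sharp in-between frames, but the basic idea of stretching a clip by generating intermediate frames can be shown with a naive, non-AI baseline that simply blends adjacent frames (function name and blending scheme are our illustration, not Samsung's method):

```python
import numpy as np

# Naive baseline only: a real "Instant Slow-mo" feature would use a learned
# interpolation model. Linear blending between neighboring frames still
# demonstrates the principle: insert synthetic frames to slow the clip down.

def interpolate_frames(frames, factor=2):
    """Insert (factor - 1) blended frames between each pair of real frames."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor  # fractional position between frame a and frame b
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(frames[-1])
    return out

# Two tiny 2x2 "frames" with constant brightness 0 and 100
clip = [np.full((2, 2), v, dtype=np.float32) for v in (0.0, 100.0)]
slowmo = interpolate_frames(clip, factor=4)
print(len(slowmo))             # 2 real frames -> 5 frames total
print(float(slowmo[2][0, 0]))  # halfway blend -> 50.0
```

The learned models replace the linear blend with motion-aware synthesis, which is why AI slow-mo avoids the ghosting this naive version would produce on moving subjects.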
Samsung’s added AI smarts beyond the camera too, with new features for search, translations, note creation and message composition. New transcription tricks, when you record meetings and other conversations, mean S24 will split audio recordings into separate people talking and reformat it on the fly. You can even share selected parts or get the smartphone to summarize meetings and notes for you. I’m intrigued to see what my smartphone thinks is important during my weekly catchups with the Engadget team.
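The pipeline Samsung uses isn't public, but the "split audio into separate people talking and reformat it" step amounts to diarization followed by transcript formatting. Assuming an on-device model has already produced diarized segments (speaker label, start time, text — our assumed data shape), the reformatting step could look like:

```python
# Hypothetical sketch: assumes an upstream on-device model has already
# diarized the recording into (speaker, start_seconds, text) segments.
# This only shows the reformatting step the article describes.

def format_transcript(segments):
    """Render diarized segments as a timestamped, per-speaker transcript."""
    lines = []
    for speaker, start, text in sorted(segments, key=lambda s: s[1]):
        mins, secs = divmod(int(start), 60)
        lines.append(f"[{mins:02d}:{secs:02d}] {speaker}: {text}")
    return "\n".join(lines)

segments = [
    ("Speaker 2", 75, "Sounds good to me."),
    ("Speaker 1", 62, "Shall we ship on Friday?"),
]
print(format_transcript(segments))
```

A summarization model would then consume exactly this kind of structured transcript, which is what makes per-speaker splitting useful beyond readability.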
I’ll dig into the specs for the new flagship S24 below (it’s a Samsung-heavy TMA), but this year, it’s really about the software. And the good news is that many of these features will make their way to selected older Galaxy devices later this year.
— Mat Smith
You can get these reports delivered daily direct to your inbox. Subscribe right here!
The $1,300 Galaxy S24 Ultra is Samsung’s biggest AI bet yet. Sure, the hardware design doesn’t appear to have changed much, but there’s now a titanium frame (available in colors beyond monochrome shades, Apple), ensuring the biggest flagship should feel lighter and easier to wield than previous iterations. The S24 Ultra’s telephoto camera is now based on a 50-megapixel sensor (up from 10MP on the S23 Ultra) with a 5x optical zoom. If you’re obsessed with specs, you might recall the S23 Ultra packed a 10x optical zoom. The company apparently chose this tweak based on customer feedback and use patterns, which saw 5x as the most frequently used zoom mode. We’ve got first impressions right here.
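A back-of-the-envelope calculation (our assumption of a simple center crop, not Samsung's published math) suggests why the 50MP/5x combination can stand in for the old 10x lens: doubling the zoom by cropping halves the usable width and height, so resolution falls with the square of the crop factor.

```python
# Assumption: lossless center crop, ignoring pixel binning and upscaling.
# Doubling zoom via crop keeps 1/2 the width and 1/2 the height,
# so megapixels drop by the crop factor squared.

def cropped_megapixels(sensor_mp, optical_zoom, target_zoom):
    crop = target_zoom / optical_zoom  # linear crop factor
    return sensor_mp / (crop ** 2)     # sensor area shrinks quadratically

print(cropped_megapixels(50, 5, 10))  # -> 12.5 MP left at a 10x crop
```

By this rough math, a 10x crop of the new 50MP sensor still yields about 12.5MP, comfortably more than the S23 Ultra's dedicated 10MP 10x module captured natively.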
Near the end of its Unpacked event, Samsung started talking about its health-focused software, Samsung Health, and those watching the show fought to maintain concentration. Then, Samsung teased a new tinier piece of health-focused hardware, the Galaxy Ring. It’ll have lots of sensors and hooks into the Health software suite. But that’s all we know.
But if Samsung’s getting involved with smart rings, all we can say is: Watch out, Oura.
The company updated its disclaimer after settling a lawsuit.
When you open an Incognito browser on Chrome, you’ll see a notification warning that other people using your device won’t be able to see your activity, but your downloads, bookmarks and reading items will still be saved. Now, Google has updated that disclaimer in Chrome’s experimental Canary channel, shortly after agreeing to settle a $5 billion lawsuit accusing it of tracking Incognito users. The plaintiffs of the 2020 lawsuit argued that by tracking users on Incognito, Google was giving people the false belief that they could control the information they were willing to share. The new disclaimer in Canary says Incognito mode won’t change how websites collect people’s data.
She spent 14 years as COO and 12 as a board member.
Sheryl Sandberg is leaving Meta’s board of directors after 12 years, her last official role with the company. Sandberg spent 14 years as Meta’s COO and Mark Zuckerberg’s top lieutenant and 12 years on the company’s board. Her role as a board member will officially end in May. In a post on Facebook, she said, “This feels like the right time to step away,” and she would continue to advise the company. Hey, at least she posted it on Facebook.
When Apple announced the Vision Pro headset, it namedropped a number of streaming services with dedicated apps for the device, including Disney+, Max, Amazon Prime Video and Paramount+. It put a lot of focus on the headset's entertainment features and is most likely hoping that they could help convince tentative buyers to take the plunge. But one name was clearly missing from the list of streaming apps arriving on the platform, and it's the biggest one of them all: Netflix. Now, Bloomberg is reporting that Netflix currently has no plans to release a special application for the Vision Pro.
"Our members will be able to enjoy Netflix on the web browser on the Vision Pro, similar to how our members can enjoy Netflix on Macs," the company told Bloomberg's Mark Gurman in a statement. As Gurman notes, Vision Pro will be able to run iPad apps on the headset's visionOS in addition to applications especially designed for the platform. That means Netflix isn't even modifying its iPad app to run on the Vision Pro, and users will not be able to enjoy the features they use on mobile devices, such as offline viewing.
In comparison, Disney+ has gone all in and is even giving users access to immersive environments, including one based on the Avengers Tower, that can serve as backdrops for its shows. Based on another Bloomberg report from 2023, Netflix really didn't have a plan to develop an application for the headset. It's unclear why that's the case, but the company may have chosen to wait and see whether the Vision Pro achieves a certain level of popularity before dedicating resources to developing an app for it. The device could get a dedicated Netflix application in the future if that's the case, but early adopters will have to make do with watching the service's shows in a browser.
This article originally appeared on Engadget at https://www.engadget.com/netflix-wont-launch-an-app-for-the-apple-vision-pro-at-least-right-now-120520406.html?src=rss
Samsung Galaxy Unpacked 2024 has come and gone, leaving behind a series of new Galaxy devices. If you missed the event, we've got you covered: You can watch the Samsung Galaxy S24 Unpacked event in less than 10 minutes right now. From new smartphones to a dive into AI, here's what you can expect to see.
The event revealed three new smartphones that make up the Samsung Galaxy S24 series. There's the S24, starting at $799 for the 128GB model — plus, order it by January 25, and Samsung will throw in a free Watch 6. The Galaxy S24+ and Galaxy S24 Ultra start at $1,000 and $1,300, respectively, for their 256GB options. The entire S24 series comes equipped with the new Snapdragon 8 Gen 3 processor in the United States, providing the necessary power for the smartphones' AI features.
The Galaxy S24 series uses Samsung's new Gauss Generative AI model. Galaxy AI, as the company refers to the overall system, allows for quite a few fresh features, including live two-way translations for phone calls. The system works right on the phone and doesn't require Wi-Fi or cellular connections. The same applies to Interpreter, an in-person translator, and Samsung Keyboard, which can translate messages across 13 languages. Speaking of messages, Android Auto can summarize any messages you receive while driving and suggest responses for you to approve with voice commands.
Galaxy AI will also come into play for any photos you take using the S24 series. According to Samsung, it can help with image stabilization, digital zoom and content captured in low-light. Galaxy AI can also suggest photo edits and offers Generative Fill to change the background. However, the latter requires a network connection and will give the photo a watermark.
This article originally appeared on Engadget at https://www.engadget.com/watch-the-samsung-galaxy-unpacked-2024-event-in-under-10-minutes-110059576.html?src=rss
Social media apps tend to offer the convenience of their very own camera tools, but on the flip side, these are limited to just a few shooting options. Recognizing this modern-day pain point, Samsung has teamed up with Instagram and Snap to integrate some of its handy native camera features into their apps, in order to up our social media game via the brand-new Galaxy S24 series smartphones — namely the titanium-framed S24 Ultra with its 200-megapixel main shooter. Specifically, you'll be able to leverage Samsung's "Super HDR" option, upgraded "Nightography" power and video stabilization within Instagram's and Snapchat's in-app cameras. The only caveat: for video stabilization, you'll need to enable it in the native camera settings first.
Samsung's collaboration with Instagram goes deeper, offering upgraded editing, uploading and viewing experiences tailored to its devices, including the ability to create Instagram Stories directly from motion photos. With their Super HDR capabilities, the Galaxy S24 devices are also the first to receive HDR photo support on Instagram — likely the first of many apps to support this vibrant display format in the near future.
With its new AI capabilities playing a big role in the Galaxy S24 lineup's camera systems, it's no wonder that Samsung is pushing its camera integration into the two popular social media platforms. Still, you'll probably want to stick to the native camera app and editing tools for maximum versatility — especially when it comes to the more AI-specific tools like "Edit Suggestion," "Generative Edit" (network connection required) and "Instant Slow-mo."
This article originally appeared on Engadget at https://www.engadget.com/instagram-and-snapchat-can-use-samsung-galaxy-s24s-native-camera-features-070121743.html?src=rss