Netflix is becoming an ad-tech company

There was a time when streamers wooed potential customers with the promise of an ad-free experience. In recent years, however, companies such as Netflix, Amazon and Disney have hiked their prices and made an ad-supported tier the most affordable option. Now, Netflix is taking the next step toward becoming a de facto ad-tech company by moving its ad technology development in-house, according to The Hollywood Reporter.

Netflix announced the shift during its upfront preview, in which the company also shared that its $7 per month ad-supported tier has 40 million monthly active users. The ad-supported plan is reportedly attracting 40 percent of new signups, up sharply from the 15 million users it had just six months ago, in November. 

The streaming company has relied heavily on Microsoft to reach this point, partnering with the tech giant on advertising technology and sales in 2022. But the training wheels are coming off with Netflix's decision to move things in-house, a choice that "will allow us to power the ads plan with the same level of excellence that’s made Netflix the leader in streaming technology today," Netflix ads chief Amy Reinhard said. Microsoft will also no longer be Netflix's sole ad tech partner, as the streamer will start working with platforms like Google’s Display & Video 360 and The Trade Desk later this summer. 

This article originally appeared on Engadget at https://www.engadget.com/netflix-is-becoming-an-ad-tech-company-130004240.html?src=rss

Intel’s Thunderbolt Share makes it easier to move large files between PCs

Intel has launched a new software application called Thunderbolt Share that makes controlling two or more PCs a more seamless experience. It lets you sync files between PCs through its interface, or browse multiple computers' folders so you can drag and drop specific documents, images and other file types. That makes collaboration easier if you're transferring particularly hefty files, say raw photos or unedited videos, to a colleague. You can also use the app to transfer data from an old PC to a new one, so you don't have to use an external drive to facilitate the move. 

When it comes to screen sharing, Intel says the software can retain the resolution of the source PC without compression, so long as the output doesn't exceed Full HD at up to 60 frames per second. The mouse cursor and keyboard also remain smooth and responsive between PCs, thanks to Thunderbolt's high bandwidth and low latency. 
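
For a back-of-the-envelope sense of why Thunderbolt has the headroom to push uncompressed Full HD at 60 frames per second, here's a quick sketch. The pixel format and the assumption that protocol overhead can be ignored are ours for illustration, not Intel's published math:

```kotlin
// Back-of-the-envelope check: can a Thunderbolt 4 link carry uncompressed 1080p60?
// Assumes 24 bits per pixel (RGB) and ignores protocol overhead -- illustrative only.
fun main() {
    val width = 1920
    val height = 1080
    val bitsPerPixel = 24          // assumed RGB888, no chroma subsampling
    val framesPerSecond = 60

    val bitsPerSecond = width.toLong() * height * bitsPerPixel * framesPerSecond
    val gbps = bitsPerSecond / 1e9

    println("Uncompressed 1080p60 needs roughly %.2f Gbps".format(gbps))   // ~2.99 Gbps
    println("Thunderbolt 4 offers up to 40 Gbps; Thunderbolt 5 up to 80 Gbps")
}
```

Even with real-world overhead, that's a small fraction of what the ports can carry, which is presumably why Intel can afford to skip compression for this use case.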

The company says it's licensing Thunderbolt Share to OEMs as a value-added feature for their upcoming PCs and accessories. You'll need Windows computers with Thunderbolt 4 or 5 ports to use it, and they have to be connected directly with a Thunderbolt cable, or connected to the same Thunderbolt dock or monitor. The first devices that support the application will arrive in the second half of 2024 from manufacturers including Lenovo, Acer, MSI, Razer, Kensington and Belkin.

This article originally appeared on Engadget at https://www.engadget.com/intels-thunderbolt-share-makes-it-easier-to-move-large-files-between-pcs-123011505.html?src=rss

Google I/O 2024: Everything revealed including Gemini AI, Android 15 and more

At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates from Google's big event, along with some additional announcements that came after the keynote.

Gemini Pro

Google announced a brand new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company’s smallest model, which runs locally on device. Google said it created Flash because developers wanted a lighter and less expensive model than Gemini Pro for building AI-powered apps and services, while keeping features like the one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, which means it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
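
To put the two-million-token figure in perspective, here's a rough conversion sketch. The per-word and per-second token costs below are assumed averages backed out from the figures Google quoted, not numbers the company has published:

```kotlin
// Rough illustration of what a 2,000,000-token context window could hold.
// The token costs below are assumed averages, chosen to be consistent with
// Google's quoted figures (~1.4 million words, ~22 hours of audio).
fun main() {
    val contextWindow = 2_000_000L

    val tokensPerWord = 1.4            // assumed average for English text
    val words = (contextWindow / tokensPerWord).toLong()

    val audioTokensPerSecond = 25.0    // assumed cost of encoding one second of audio
    val audioHours = contextWindow / audioTokensPerSecond / 3600

    println("~%,d words of text".format(words))            // ~1.4 million words
    println("~%.0f hours of audio".format(audioHours))     // ~22 hours
}
```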

Project Astra

Google showed off Project Astra, an early version of a universal AI-powered assistant that Google DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”

In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, the view out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, the assistant correctly tells the user where she left her glasses, without the user ever having brought them up.

The video ends with a twist — when the user finds and puts on the missing glasses, we learn that they have an onboard camera system and can use Project Astra to seamlessly carry on a conversation with the user, perhaps indicating that Google is working on a competitor to Meta’s Ray-Ban smart glasses.

Ask Photos

Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you’ll be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited" when the feature rolls out over the next few months. Google Photos will use GPS information, as well as its own judgment of what is “best,” to present you with options. You can also ask Google Photos to generate captions for posting the photos to social media.

Veo

Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that can last “beyond a minute,” Google said, and it can understand cinematic concepts like a timelapse.

Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its predecessor, Imagen 2. The result is the company’s “highest quality” text-to-image model, with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.

Google Search

Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and using Search to plan meals and vacations, won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.

But a big new feature that Google calls AI Overviews, which the company has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says it will bring the feature to more than a billion users around the world by the end of the year.

Gemini on Android

Google is integrating Gemini directly into Android. When Android 15 is released later this year, Gemini will be aware of the app, image or video you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.

Google isn't quite ready to roll out the latest version of its smartwatch OS, but it is promising some major battery life improvements when it arrives. The company said that Wear OS 5 will consume 20 percent less power than Wear OS 4 if a user runs a marathon. Wear OS 4 already brought battery life improvements to smartwatches that support it, but it could still be a lot better at managing a device's power. Google also provided developers with a new guide on how to conserve power and battery, so they can create more efficient apps.

Android 15's developer preview may have been rolling out for months, but there are still features to come. Theft Detection Lock is a new Android 15 feature that will use AI (there it is again) to detect phone thefts and lock things up accordingly. Google says its algorithms can recognize motions associated with theft, like someone grabbing the phone and bolting, biking or driving away. If an Android 15 handset pinpoints one of these situations, the phone’s screen will quickly lock, making it much harder for the phone snatcher to access your data.
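
Google hasn't published the model behind Theft Detection Lock, but the general idea — watch the accelerometer for a sudden snatch-and-run signature — can be sketched with Android's standard sensor APIs. The thresholds and decision logic below are invented for illustration; the real feature relies on an on-device machine learning model, not a simple cutoff:

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.sqrt

// Illustrative snatch-detection heuristic, NOT Google's implementation.
// It flags a violent jerk followed shortly by sustained fast motion
// (someone running, biking or driving away), then invokes a callback
// that could lock the device (e.g. via DevicePolicyManager.lockNow()).
class SnatchDetector(context: Context, private val onSuspectedTheft: () -> Unit) :
    SensorEventListener {

    private val sensorManager = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)

    private val jerkThreshold = 35f   // m/s^2, invented cutoff for a sudden grab
    private var jerkSeenAtMs = 0L

    fun start() = sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME)
    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val (x, y, z) = event.values
        val magnitude = sqrt(x * x + y * y + z * z)

        if (magnitude > jerkThreshold) {
            jerkSeenAtMs = System.currentTimeMillis()
        } else if (jerkSeenAtMs != 0L &&
            System.currentTimeMillis() - jerkSeenAtMs in 500..3000 &&
            magnitude > 15f   // still moving fast after the grab
        ) {
            onSuspectedTheft()
            jerkSeenAtMs = 0L
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```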

There were a bunch of other updates too. Google said it would add digital watermarks to AI-generated video and text, make Gemini accessible in the side panel in Gmail and Docs, power a virtual AI teammate in Workspace, listen in on phone calls and detect if you’re being scammed in real time, and a lot more.


Catch up on all the news from Google I/O 2024 right here!

Update May 15, 2:45PM ET: This story was updated after being published to include details on new Android 15 and Wear OS 5 announcements made following the I/O 2024 keynote.

This article originally appeared on Engadget at https://www.engadget.com/google-io-2024-everything-revealed-including-gemini-ai-android-15-and-more-210414423.html?src=rss

Google’s Wear OS 5 promises better battery life

Google has unveiled Wear OS 5 at its I/O developer conference today, giving us a glimpse of new features and other improvements coming with the platform. The company isn't quite ready to roll out the final version of the wearable OS, but its developer preview already features enhanced battery life. As an example, Google said Wear OS 5 will consume 20 percent less power than Wear OS 4 if the user runs a marathon. Wear OS 4 already brought battery life improvements to smartwatches that support it, but it could still be a lot better at managing a device's power. Google also provided developers with a new guide on how to conserve power and battery, so that they can create more efficient apps.

In addition, Google has launched new features in Watch Face Format, allowing developers to make more types of watch faces that show different kinds of information. This update enables the creation of watch faces that can show current weather information at a glance, including the temperature and chance of rain. The company is also adding support for new complication types. They include "goal progress," which suits data where the user has a target they can exceed, and "weighted elements," which can be used to represent discrete subsets of data.
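
To make the two new complication types concrete, here's a hypothetical data-model sketch of what each one conveys. The class and field names are invented for illustration and are not the actual Wear OS or androidx APIs:

```kotlin
// Hypothetical models for the two new complication types described above.
// Names are invented for illustration; the real Wear OS APIs differ.

// "Goal progress": a value measured against a target the user can exceed,
// e.g. 12,345 steps against a 10,000-step goal renders as 123% of goal.
data class GoalProgressComplication(
    val value: Float,        // e.g. 12_345f steps
    val targetValue: Float,  // e.g. 10_000f steps
    val label: String        // e.g. "Steps"
) {
    val fractionOfGoal: Float get() = value / targetValue   // may exceed 1.0
}

// "Weighted elements": discrete subsets of a whole, e.g. minutes of light,
// moderate and vigorous activity, each drawn as a slice of a ring.
data class WeightedElementsComplication(
    val elements: List<Element>,
    val label: String
) {
    data class Element(val weight: Float, val caption: String)
}

fun main() {
    val steps = GoalProgressComplication(value = 12_345f, targetValue = 10_000f, label = "Steps")
    println("Steps at %.0f%% of goal".format(steps.fractionOfGoal * 100))   // 123% of goal

    val activity = WeightedElementsComplication(
        elements = listOf(
            WeightedElementsComplication.Element(30f, "Light"),
            WeightedElementsComplication.Element(20f, "Moderate"),
            WeightedElementsComplication.Element(10f, "Vigorous")
        ),
        label = "Active minutes"
    )
    println("${activity.label}: ${activity.elements.sumOf { it.weight.toDouble() }} min total")
}
```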

Wear OS 5 could give rise to new apps and new functionality in old ones, as well. Google's Health Connect API for the platform will allow apps to access user data even while they're running in the background. It will also enable them to access health information from the past 30 days, though users will have to give explicit permission before apps can take advantage of either feature. Finally, Wear OS 5's Health Services API supports new data types for running, such as ground contact time and stride length.
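
As a rough sketch of what 30-day historical access could look like for a developer, here's a read using the Android Health Connect client library. Whether the Wear OS 5 surface mirrors this exact API is an assumption on our part, and the permission setup is omitted:

```kotlin
import android.content.Context
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

// Sketch: read the last 30 days of step records via Health Connect.
// Assumes the user has already granted the relevant read permission, which
// Wear OS 5 requires before any background or historical access works.
suspend fun readLastThirtyDaysOfSteps(context: Context): List<StepsRecord> {
    val client = HealthConnectClient.getOrCreate(context)
    val now = Instant.now()
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(now.minus(30, ChronoUnit.DAYS), now)
        )
    )
    return response.records
}
```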

Google didn't announce when Wear OS 5 will be available, but its predecessor, Wear OS 4, launched with the Samsung Galaxy Watch 6 in August 2023. Based on that timeline and the devices that support the current platform, Wear OS 5 could launch with the Samsung Galaxy Watch 7 or the Pixel Watch 3 later this year.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-wear-os-5-promises-better-battery-life-182834300.html?src=rss

Xbox Cloud Gaming finally supports keyboard and mouse inputs on web browsers

Microsoft just released a new update for Xbox Cloud Gaming that finally brings mouse and keyboard support, after teasing the feature for years. The tool is currently in beta and works with both the Edge and Chrome web browsers. It looks pretty simple to use: just select a game that supports a mouse and keyboard and have at it.

You can also instantly switch between a mouse/keyboard combination and a standard controller by pressing the Xbox button on the controller or pressing a key on the keyboard. The company says it’ll be rolling out badges later this month to flag which games support mouse and keyboard inputs.

For now, there’s support for 26 games. These include blockbusters like ARK: Survival Evolved, Halo Infinite and, of course, Fortnite. Smaller games like High on Life and Pentiment can also be controlled via mouse and keyboard. Check Microsoft's announcement for the full list.

Microsoft hasn’t said what took it so long to get this going. The feature was originally expected to launch back in June 2022, but we didn’t get a progress update until two months ago. No matter the reason, keyboard-and-mouse setups are practically a requirement for first-person shooters and, well, better late than never.

This article originally appeared on Engadget at https://www.engadget.com/xbox-cloud-gaming-finally-supports-keyboard-and-mouse-inputs-on-web-browsers-165215925.html?src=rss

Apple brings eye-tracking to recent iPhones and iPads

Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities, but they have broader applications as well. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhone and iPad, along with customizable vocal shortcuts, music haptics, vehicle motion cues and more. 

The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. 

That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem, such as in the Mac's accessibility settings. The setup and calibration process should only take a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps at launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with connected eye-detection devices, the news today is the ability to do so without extra hardware.
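
The dwell-to-select behavior itself is simple to reason about: track which element the gaze is resting on and trigger a selection once it has stayed there past a threshold. Here's a toy, platform-agnostic sketch; the timing and names are invented for illustration, and Apple hasn't published its implementation:

```kotlin
// Toy dwell-control state machine: select an element once gaze has rested
// on it for `dwellMillis`. Purely illustrative -- not Apple's implementation.
class DwellSelector(
    private val dwellMillis: Long = 800,
    private val onSelect: (String) -> Unit
) {
    private var currentTarget: String? = null
    private var gazeStartedAtMs = 0L
    private var fired = false

    // Call on every gaze sample with the id of the element under the gaze point.
    fun onGazeSample(targetId: String?, nowMs: Long) {
        if (targetId != currentTarget) {
            // Gaze moved to a new element (or off-screen): restart the dwell timer.
            currentTarget = targetId
            gazeStartedAtMs = nowMs
            fired = false
            return
        }
        if (targetId != null && !fired && nowMs - gazeStartedAtMs >= dwellMillis) {
            fired = true
            onSelect(targetId)
        }
    }
}

fun main() {
    val selector = DwellSelector(onSelect = { println("Selected $it") })
    selector.onGazeSample("settings_icon", nowMs = 0)
    selector.onGazeSample("settings_icon", nowMs = 900)   // past 800 ms -> "Selected settings_icon"
}
```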

Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once set up, you won't have to first ask Siri to be ready. 

Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.

To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.

For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too. 

Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.

[Image: A graphic demonstrating Vehicle Motion Cues on an iPhone, with a drawing of a car and arrows indicating its motion. Credit: Apple]

For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion it detects. If the car moves forward or accelerates, the dots will sway backwards as if in reaction to the increase in speed in that direction.
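
The mechanism described is essentially to sample the device's motion sensors and displace the onscreen dots opposite to the sensed acceleration, so what the eyes see agrees with what the inner ear feels. Here's a toy version of that mapping; the gain and clamp values are invented for illustration, not Apple's tuning:

```kotlin
// Toy mapping from sensed acceleration to a dot offset, as described above:
// accelerate forward and the dots drift backward. The gain and clamp are
// invented for illustration; Apple hasn't published how the feature is tuned.
data class DotOffset(val x: Float, val y: Float)

fun dotOffsetFor(
    accelX: Float,                    // lateral acceleration, m/s^2
    accelY: Float,                    // forward acceleration, m/s^2
    gainPointsPerMs2: Float = 6f,     // invented scaling factor
    maxOffset: Float = 40f            // invented clamp so dots stay near the screen edges
): DotOffset {
    val x = (-accelX * gainPointsPerMs2).coerceIn(-maxOffset, maxOffset)
    val y = (-accelY * gainPointsPerMs2).coerceIn(-maxOffset, maxOffset)
    return DotOffset(x, y)
}

fun main() {
    // Car accelerates forward at 2 m/s^2 -> dots sway about 12 points backward.
    println(dotOffsetFor(accelX = 0f, accelY = 2f))
}
```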

There are plenty more features coming to the company's suite of products, including Live Captions in visionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically introduced such features in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely many of today's tools will be officially released with the next iOS.

This article originally appeared on Engadget at https://www.engadget.com/apple-brings-eye-tracking-to-recent-iphones-and-ipads-140012990.html?src=rss
