The Morning After: Does your car need a rear windshield?

You know those folks who say they’d donate a major organ to own a fancy car? Ask them if they’d feel as comfortable sacrificing a rear window instead. Polestar’s newest ride has made its North American debut at the NY Auto Show and notably lacks a rear windshield. The rationale is that rear passengers get better headroom and a more comfortable ride than in other cars. Drivers, meanwhile, get a high-res display where the rear-view mirror used to be, linked to a live feed from a rear-mounted camera. Given how often people’s heads or luggage obscure the view out the back, it’s a trade I’m readily prepared to accept.

— Dan Cooper

The biggest stories you might have missed

You can now use ChatGPT without an account

How to watch (and record) the 2024 solar eclipse on April 8

Open Roads review: Take it slow and savor the drama

Microsoft unbundles Teams and Office 365 for customers worldwide

You can get these reports delivered daily direct to your inbox. Subscribe right here!

From its start, Gmail conditioned us to trade privacy for free services

Two decades of the customer being the product.

Gmail wasn’t the first service that turned its users into the product, but it’s probably the one we’re the most comfortable with. After all, while Facebook and its kin have been perpetually slammed for privacy issues, who really gets mad at Gmail? Our anniversary package has a deep dive into the last 20 years of Google’s flagship mail product.

Continue Reading.

Google says it will destroy browsing data collected from Chrome’s Incognito mode

… Oh, and speaking of Google and privacy.

The search giant has settled a recent class-action lawsuit relating to Chrome’s tracking of Incognito users. It has pledged to wipe out “billions” of data points it improperly collected and take steps to block any further tracking for five years. (Always a good sign when a company pledges to stop doing something it’s been told off for doing, but only for a short period.)

Continue Reading.

Tekken director apparently keeps getting requests to add a Waffle House stage

They could call it the Waffle Rough-House.

For the uninitiated, Waffle House is a waffle-centric chain of 24/7 American diners with a reputation for random outbursts of violence. It’s apparently so well known that Tekken players have been petitioning the game’s director to add a Waffle House level. Sadly, it probably won’t happen because Waffle House stands accused of underpaying its workers and, given the above context, exposing them to an unsafe working environment.

Continue Reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-does-your-car-need-a-rear-windshield-111523159.html?src=rss

Telegram takes on WhatsApp with business-focused features

Telegram isn't quite as widely used as WhatsApp, but businesses can now add it as a communication option for their customers if they want to. Anybody on the messaging app can now convert their account into a business account to get access to features designed to make it easier for customers to find and contact them. They'll be able to display their hours of operation on their profile and pin their location on a map. With their operating hours in place, customers can see at a glance whether they're still open and what time they're closing for the day. 

[Image: A screenshot showing a business profile on Telegram. Credit: Telegram]

Businesses can also customize their start page and display information about their products and services on empty chats, giving customers a glimpse of what's on offer even before they get in touch. To make it easier to respond to multiple inquiries, Telegram Business accounts will also be able to craft and save preset messages that they can send as quick replies. Of course, they can also pre-write greeting and away messages that get automatically sent to customers who contact them. They can even use a Telegram Bot to chat with their customers, though we all know how frustrating it can be to talk with a robot when we need a human customer service rep. All these features come at no extra cost, but they're only available to users with a Telegram Premium subscription, which costs $5 a month.

In addition to introducing its new business-focused features, Telegram has also revealed that it's giving channel owners 50 percent of the revenue earned from ads displayed on their channels, as long as they have at least 1,000 subscribers. Based on information previously shared by company founder Pavel Durov, Telegram seems to be doing well financially and can afford to be that generous. Durov told The Financial Times that he expects the messaging app to be profitable by next year and that it's currently exploring a future initial public offering.

This article originally appeared on Engadget at https://www.engadget.com/telegram-takes-on-whatsapp-with-business-focused-features-101843987.html?src=rss

Jon Stewart says Apple asked him not to host FTC Chair Lina Khan

Jon Stewart hosted Federal Trade Commission (FTC) Chair Lina Khan on his weekly Daily Show segment yesterday, but Stewart's own revelations were just as interesting as Khan's. During the sit-down, Stewart admitted that Apple had asked him not to host Khan on his podcast, which at the time was an extension of his Apple TV+ show, The Problem with Jon Stewart.

"I wanted to have you on a podcast and Apple asked us not to do it," Stewart told Khan. "They literally said, 'Please don’t talk to her.'"

In fact, the entire episode appeared to have a "things Apple wouldn't let us do" theme. Ahead of the Khan interview, Stewart did a segment on artificial intelligence he called "the false promise of AI," effectively debunking the altruistic claims of AI leaders and positing that the technology was designed strictly to replace human employees.

"They wouldn’t let us do even that dumb thing we just did in the first act on AI," he told Khan. "Like, what is that sensitivity? Why are they so afraid to even have these conversations out in the public sphere?"

"I think it just shows the danger of what happens when you concentrate so much power and so much decision making in a small number of companies," Khan replied.

The Problem With Jon Stewart was abruptly cancelled ahead of its third season, reportedly following clashes over potential AI and China segments. That prompted US lawmakers to question Apple, seeking to know if the decision had anything to do with possible criticism of China. 

While stating that Apple has the right to stream any content it wants, the bipartisan committee wrote that "the coercive tactics of a foreign power should not be directly or indirectly influencing these determinations." (Apple's response to this, if any, has yet to be released.)

Stewart didn't say that the AI and Khan interview issues were the reason his show was cancelled, but they do indicate that Apple asserted editorial influence over issues that directly involved it.

Elsewhere in the segment, Khan discussed the FTC's lawsuit against Amazon, stating that the FTC alleges the company is a monopoly maintained via illegal practices (exorbitant seller fees, shady ads). They also touched on the FTC's lawsuit against Facebook, tech company collusion via AI, corporate consolidation, exorbitant drug prices and more.

This article originally appeared on Engadget at https://www.engadget.com/jon-stewart-says-apple-asked-him-not-to-host-ftc-chair-lina-khan-090249490.html?src=rss

The US and UK are teaming up to test the safety of AI models

OpenAI, Google, Anthropic and other companies developing generative AI are continuing to improve their technologies and release better and better large language models. To create a common approach for independently evaluating the safety of those models as they come out, the UK and US governments have signed a Memorandum of Understanding. Together, the UK's AI Safety Institute and its counterpart in the US, which was announced by Vice President Kamala Harris but has yet to begin operations, will develop suites of tests to assess the risks and ensure the safety of "the most advanced AI models."

They're planning to share technical knowledge, information and even personnel as part of the partnership, and one of their initial goals seems to be performing a joint testing exercise on a publicly accessible model. The UK's science minister, Michelle Donelan, who signed the agreement, told The Financial Times that they've "really got to act quickly" because they're expecting a new generation of AI models to come out over the next year. They believe those models could be "complete game-changers," and they still don't know what they could be capable of.

According to The Times, this partnership is the first bilateral arrangement on AI safety in the world, though both the US and the UK intend to team up with other countries in the future. "AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society," US Secretary of Commerce Gina Raimondo said. "Our partnership makes clear that we aren't running away from these concerns — we're running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance."

While this particular partnership is focused on testing and evaluation, governments around the world are also conjuring regulations to keep AI tools in check. Back in March, the White House signed an executive order aiming to ensure that federal agencies are only using AI tools that "do not endanger the rights and safety of the American people." A couple of weeks before that, the European Parliament approved sweeping legislation to regulate artificial intelligence. It will ban "AI that manipulates human behavior or exploits people’s vulnerabilities," "biometric categorization systems based on sensitive characteristics," as well as the "untargeted scraping" of faces from CCTV footage and the web to create facial recognition databases. In addition, deepfakes and other AI-generated images, videos and audio will need to be clearly labeled as such under its rules. 

This article originally appeared on Engadget at https://www.engadget.com/the-us-and-uk-are-teaming-up-to-test-the-safety-of-ai-models-063002266.html?src=rss

You can now use ChatGPT without an account

On Monday, OpenAI began opening up ChatGPT to users without an account. It described the move as part of its mission to “make tools like ChatGPT broadly available so that people can experience the benefits of AI.” It also gives the company more training data (for those who don’t opt out) and perhaps nudges more users into creating accounts and subscribing for superior GPT-4 access instead of the older GPT-3.5 model free users get.

I tested the instant access, which — as advertised — allowed me to start a new GPT-3.5 thread without any login info. The chatbot’s standard “How can I help you today?” screen appears, with optional buttons to sign up or log in. Although I saw it today, OpenAI says it’s gradually rolling out access, so check back later if you don’t see the option yet.

OpenAI says it added extra safeguards for accountless users, including blocking prompts and image generations in more categories than it does for logged-in users. When asked for more info on what new categories it’s blocking, an OpenAI spokesperson told me that, while developing the feature, it considered how logged-out GPT-3.5 users could potentially introduce new threats.

The spokesperson added that the teams in charge of detecting and stopping abuse of its AI models have been involved in creating the new feature and will adjust accordingly if unexpected threats emerge. Of course, it still blocks everything it does for signed-in users, as detailed in its moderation API.

You can opt out of data training for your prompts when not signed in. To do so, click on the little question mark to the right of the text box, then select Settings and turn off the toggle for “Improve the model for everyone.”

OpenAI says more than 100 million people across 185 countries use ChatGPT weekly. Those are staggering numbers for an 18-month-old service from a company many people still hadn’t heard of two years ago. Today’s move gives those hesitant to create an account an incentive to take the world-changing chatbot for a spin, boosting those numbers even more.

This article originally appeared on Engadget at https://www.engadget.com/you-can-now-use-chatgpt-without-an-account-184417749.html?src=rss
