The US and UK are teaming up to test the safety of AI models

OpenAI, Google, Anthropic and other companies developing generative AI continue to improve their technologies, releasing increasingly capable large language models. To create a common approach to independently evaluating the safety of those models as they come out, the UK and US governments have signed a Memorandum of Understanding. Together, the UK's AI Safety Institute and its counterpart in the US, which was announced by Vice President Kamala Harris but has yet to begin operations, will develop suites of tests to assess the risks and ensure the safety of "the most advanced AI models."

They're planning to share technical knowledge, information and even personnel as part of the partnership, and one of their initial goals seems to be performing a joint testing exercise on a publicly accessible model. The UK's science minister, Michelle Donelan, who signed the agreement, told The Financial Times that they've "really got to act quickly" because they're expecting a new generation of AI models to come out over the next year. They believe those models could be "complete game-changers," and they still don't know what they could be capable of.

According to The Times, this partnership is the first bilateral arrangement on AI safety in the world, though both the US and the UK intend to team up with other countries in the future. "AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society," US Secretary of Commerce Gina Raimondo said. "Our partnership makes clear that we aren't running away from these concerns — we're running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance."

While this particular partnership is focused on testing and evaluation, governments around the world are also drafting regulations to keep AI tools in check. Back in March, the White House signed an executive order aiming to ensure that federal agencies are only using AI tools that "do not endanger the rights and safety of the American people." A couple of weeks before that, the European Parliament approved sweeping legislation to regulate artificial intelligence. It will ban "AI that manipulates human behavior or exploits people’s vulnerabilities," "biometric categorization systems based on sensitive characteristics," as well as the "untargeted scraping" of faces from CCTV footage and the web to create facial recognition databases. In addition, deepfakes and other AI-generated images, videos and audio will need to be clearly labeled as such under its rules.

This article originally appeared on Engadget at https://www.engadget.com/the-us-and-uk-are-teaming-up-to-test-the-safety-of-ai-models-063002266.html?src=rss

From its start, Gmail conditioned us to trade privacy for free services

Long before Gmail became smart enough to finish your sentences, Google’s now-ubiquitous email service was buttering up the public for a fate that defined the internet age: if you’re not paying for the product, you are the product.

When Gmail was announced on April 1, 2004, its lofty promises and the timing of its release reportedly had people assuming it was a joke. It wasn’t the first web-based email provider — Hotmail and Yahoo! Mail had already been around for years — but Gmail was offering faster service, automatic conversation grouping for messages, integrated search functions and 1GB of storage, which was at the time a huge leap forward in personal cloud storage. Google in its press release boasted that a gigabyte was “more than 100 times” what its competitors offered. All of that, for free.

Except, as Gmail and countless tech companies in its wake have taught us, there’s no such thing as free. Using Gmail came with a tradeoff that’s now commonplace: You get access to its service, and in exchange, Google gets your data. Specifically, its software could scan the contents of account holders’ emails and use that information to serve them personalized ads on the site’s sidebar. For better or worse, it was a groundbreaking approach.

“Depending on your take, Gmail is either too good to be true, or it’s the height of corporate arrogance, especially coming from a company whose house motto is ‘Don’t Be Evil,’” tech journalist Paul Boutin wrote for Slate when Gmail launched. (Boutin, one of its early media testers, wrote favorably about Google’s email scanning but suggested the company implement a way for users to opt out lest they reject it entirely.)

There was immediate backlash from those who considered Gmail to be a privacy nightmare, yet it grew — and generated a lot of hype, thanks to its invite-only status in the first few years, which spurred a reselling market for Gmail invitations at upwards of $150 a pop, according to TIME. Google continued its ad-related email scanning practices for over a decade, despite the heat, carrying on through Gmail’s public rollout in 2007 and well into the 2010s, when it really started gaining traction.

And why not? If Gmail proved anything, it was that people would, for the most part, accept such terms. Or at least not care enough to read the fine print closely. In 2012, Gmail became the world’s largest email service, with 425 million active users.

Other sites followed Google’s lead, baking similar deals into their terms of service, so people’s use of the product would automatically mean consent to data collection and specified forms of sharing. Facebook started integrating targeted ads based on its users’ online activities in 2007, and the practice has since become a pillar of social media’s success.

Things have changed a lot in recent years, though, with the rise of a more tech-savvy public and increased scrutiny from regulators. Gmail users on multiple occasions attempted to bring class-action lawsuits over the scanning issue, and in 2017, Google finally caved. That year, the company announced that regular Gmail users’ emails would no longer be scanned for ad personalization (paid enterprise Gmail accounts were already exempt).

Google, of course, still collects users’ data in other ways and uses the information to serve hyper-relevant ads. It still scans emails too, both for security purposes and to power some of its smart features. And the company came under fire again in 2018 after The Wall Street Journal revealed it was allowing third-party developers to trawl users’ Gmail inboxes, to which Google responded by reminding users it was within their power to grant and revoke those permissions. As CNET reporters Laura Hautala and Richard Nieva wrote then, Google’s response more or less boiled down to: “This is what you signed up for.”

Really, what users signed up for was a cutting-edge email platform that ran laps around the other services at the time, and in many ways still does. It made the privacy concerns, for some, easier to swallow. From its inception, Gmail set the bar pretty high with its suite of free features. Users could suddenly send files of up to 25MB and check their email from anywhere as long as they had access to an internet connection and a browser, since it wasn’t locked to a desktop app.

It popularized the cloud as well as the JavaScript technique AJAX, Wired noted in a piece for Gmail’s 10-year anniversary. This made Gmail dynamic, allowing the inbox to automatically refresh and surface new messages without the user clicking buttons. And it more or less did away with spam, filtering out junk messages.
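The auto-refresh pattern Wired describes can be sketched in a few lines. This is purely illustrative, not Gmail's actual code: the hypothetical `mergeInbox` function shows the core AJAX idea of fetching data in the background and merging it into the page's state without a full reload.

```typescript
interface Message { id: number; subject: string }

// Merge newly fetched messages into the inbox, newest first, skipping
// any IDs we already have (a background poll may return overlapping results).
function mergeInbox(current: Message[], fetched: Message[]): Message[] {
  const seen = new Set(current.map((m) => m.id));
  const fresh = fetched.filter((m) => !seen.has(m.id));
  return [...fresh, ...current];
}

// In a browser, a background request (classically XMLHttpRequest, fetch
// today) would drive the merge and re-render only what changed, roughly:
//
//   setInterval(async () => {
//     const fetched = await (await fetch("/inbox?since=latest")).json();
//     inbox = mergeInbox(inbox, fetched); // no page reload needed
//   }, 30_000);
```

The endpoint and polling interval above are invented for illustration; the point is that the page updates in place rather than reloading, which is what made Gmail feel dynamic compared to its contemporaries.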

Still, when Gmail first launched, it was considered by many to be a huge gamble for Google — which had already established itself with its search engine. “A lot of people thought it was a very bad idea, from both a product and a strategic standpoint,” Gmail creator Paul Buchheit told TIME in 2014. “The concern was this didn’t have anything to do with web search.”

Things obviously worked out alright, and Gmail’s dominion has only strengthened. Gmail crossed the one billion user mark in 2016, and its numbers have since doubled. It’s still leading the way in email innovation, 20 years after it first went online, integrating increasingly advanced features to make the process of receiving and responding to emails (which, let’s be honest, is a dreaded daily chore for a lot of us) much easier. Gmail may eventually have changed its approach to data collection, but the precedent it set is now deeply enmeshed in the exchange of services on the internet; companies take what data they can from consumers while they can and ask for forgiveness later.

This article originally appeared on Engadget at https://www.engadget.com/from-its-start-gmail-conditioned-us-to-trade-privacy-for-free-services-120009741.html?src=rss

Microsoft unbundles Teams and Office 365 for customers worldwide

In October, Microsoft unbundled Teams from Microsoft 365 and Office 365 suites in the European Union and Switzerland to avoid potential fines. Now, the company is expanding this offering, selling Microsoft Teams separately from Microsoft 365 and Office 365 worldwide, Reuters reports. "Doing so also addresses feedback from the European Commission by providing multinational companies more flexibility when they want to standardise their purchasing across geographies," a Microsoft spokesperson told the publication.

Current users can now choose to keep their current deal or switch to one of the separate offerings — especially helpful for anyone who uses the Office suite but prefers another communication service like Zoom or Google Meet. Commercial customers new to Microsoft's offerings can pick up Teams on its own for $5.25, while Office sans Teams is going for anywhere from $7.75 to $54.75.

Microsoft's journey to unbundling Teams and Office started in 2020, when Slack filed an antitrust complaint with the EU. The now Salesforce-owned company alleged that it was illegal to include Teams in the Office suite and that Microsoft was blocking customers from removing the chat platform. The European Commission has since been investigating the matter, and in April 2023 Microsoft announced that it would separate Teams from Microsoft 365 and Office 365. Though the move went into effect last fall, Microsoft still risks a hefty EU fine if it's found to have broken antitrust laws.

This article originally appeared on Engadget at https://www.engadget.com/microsoft-unbundles-teams-and-office-365-for-customers-worldwide-111031996.html?src=rss

NYC’s business chatbot is reportedly doling out ‘dangerously inaccurate’ information

An AI chatbot released by the New York City government to help business owners access pertinent information has been spouting falsehoods, at times even misinforming users about actions that are against the law, according to a report from The Markup. The report, which was co-published with the local nonprofit newsrooms Documented and The City, includes numerous examples of inaccuracies in the chatbot’s responses to questions relating to housing policies, workers’ rights and other topics.

Mayor Adams’ administration introduced the chatbot in October as an addition to the MyCity portal, which launched in March 2023 as “a one-stop shop for city services and benefits.” The chatbot, powered by Microsoft’s Azure AI, is aimed at current and aspiring business owners, and was billed as a source of “actionable and trusted information” that comes directly from the city government’s sites. But it is a pilot program, and a disclaimer on the website notes that it “may occasionally produce incorrect, harmful or biased content.”

In The Markup’s tests, the chatbot repeatedly provided incorrect information. In response to the question, “Can I make my store cashless?”, for example, it replied, “Yes, you can make your store cashless in New York City” — despite the fact that New York City banned cashless stores in 2020. The report shows the chatbot also responded incorrectly about whether employers can take their workers’ tips, whether landlords have to accept Section 8 vouchers or tenants on rental assistance, and whether businesses have to inform staff of scheduling changes. A housing policy expert who spoke to The Markup called the chatbot “dangerously inaccurate” at its worst.

The city has indicated that the chatbot is still a work in progress. In a statement to Engadget, Leslie Brown, a spokesperson for the NYC Office of Technology and Innovation, said, “In line with the city’s key principles of reliability and transparency around AI, the site informs users the clearly marked pilot beta product should only be used for business-related content, tells users there are potential risks, and encourages them via disclaimer to both double-check its responses with the provided links and not use them as a substitute for professional advice.”

“The site has already provided thousands of people with timely, accurate answers and offers a feedback option to help drive continuous improvements of the beta tool,” Brown said. “We will continue to focus on upgrading this tool so that we can better support small businesses across the city.”

Update, March 31 2024, 9:23AM ET: This story has been updated to include a statement from the NYC Office of Technology and Innovation.

This article originally appeared on Engadget at https://www.engadget.com/nycs-business-chatbot-is-reportedly-doling-out-dangerously-inaccurate-information-203926922.html?src=rss

AT&T resets millions of customers’ passcodes after account info was leaked on the dark web

AT&T says 7.6 million current customers were affected by a recent leak in which sensitive data was released on the dark web, along with 65.4 million former account holders. TechCrunch first reported on Saturday morning that the company has reset the passcodes of all affected active accounts, and AT&T confirmed the move in an update published on its support page. The data set, which AT&T says “appears to be from 2019 or earlier,” includes names, home addresses, phone numbers, dates of birth and Social Security numbers, according to TechCrunch.

TechCrunch reports that it alerted AT&T on Monday about the potential for the leaked data to be used to access customers’ accounts, after a security researcher discovered that the records included easily decipherable encrypted passcodes. AT&T said today that it has “launched a robust investigation supported by internal and external cybersecurity experts.” The data appeared on the dark web about two weeks ago, according to AT&T.

The leak comes three years after a hacker known as ShinyHunters claimed in 2021 that they’d obtained the account data of 73 million AT&T customers. AT&T at the time told BleepingComputer that it had not suffered a breach and that samples of information shared by the hacker online did “not appear to have come from our systems.” The company now says that “it is not yet known whether the data in those fields originated from AT&T or one of its vendors.” So far, it “does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set.”

AT&T says it will reach out to both current and former account holders who have been affected by the leak. The company also says it will offer credit monitoring to those customers “where applicable.”

This article originally appeared on Engadget at https://www.engadget.com/att-resets-millions-of-customers-passcodes-after-account-info-was-leaked-on-the-dark-web-160842651.html?src=rss

Google will start showing AI-powered search results to users who didn’t opt in

If you're in the US, you might see a new shaded section at the top of your Google Search results with a summary answering your inquiry, along with links for more information. That section, generated by Google's generative AI technology, previously appeared only if you had opted into the Search Generative Experience (SGE) in the Search Labs platform. Now, according to Search Engine Land, Google has started adding the experience to a "subset of queries, on a small percentage of search traffic in the US." That's why you could see Google's experimental AI-generated section even if you haven't switched it on.

The company introduced SGE at its I/O developer conference in May last year, shortly after it opened up access to its ChatGPT rival Bard, now called Gemini. By November, it had rolled out the feature to 120 countries and added support for languages other than English, but it remained opt-in. Search Engine Land says Google will now show you the experience even if you haven't opted in, if you look up complex queries or if it thinks you could benefit from getting information from multiple websites. "How do I get marks off painted walls" is apparently one example.

Google told the publication that for these tests, it will only show AI overviews when it's confident they'll surface better information than standard Search results would. The company is reportedly running these tests because it wants feedback from more people, specifically those who didn't choose to activate the feature. That way, it can get a better idea of how generative AI can serve people who may not be tech-savvy, or who couldn't care less about generative AI.

This article originally appeared on Engadget at https://www.engadget.com/google-will-start-showing-ai-powered-search-results-to-users-who-didnt-opt-in-093036257.html?src=rss