The US and UK are teaming up to test the safety of AI models

OpenAI, Google, Anthropic and other companies developing generative AI continue to improve their technologies and release ever more capable large language models. To create a common approach for independently evaluating the safety of those models as they come out, the UK and US governments have signed a Memorandum of Understanding. Together, the UK's AI Safety Institute and its US counterpart, which was announced by Vice President Kamala Harris but has yet to begin operations, will develop suites of tests to assess the risks and ensure the safety of "the most advanced AI models."

They're planning to share technical knowledge, information and even personnel as part of the partnership, and one of their initial goals appears to be performing a joint testing exercise on a publicly accessible model. The UK's science minister, Michelle Donelan, who signed the agreement, told the Financial Times that they've "really got to act quickly" because a new generation of AI models is expected to arrive over the next year. They believe those models could be "complete game-changers," and nobody yet knows what they could be capable of. 

According to the Financial Times, this partnership is the first bilateral arrangement on AI safety in the world, though both the US and the UK intend to team up with other countries in the future. "AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society," US Secretary of Commerce Gina Raimondo said. "Our partnership makes clear that we aren't running away from these concerns — we're running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance."

While this particular partnership is focused on testing and evaluation, governments around the world are also crafting regulations to keep AI tools in check. Back in March, the White House issued a policy aiming to ensure that federal agencies only use AI tools that "do not endanger the rights and safety of the American people." A couple of weeks before that, the European Parliament approved sweeping legislation to regulate artificial intelligence. It will ban "AI that manipulates human behavior or exploits people’s vulnerabilities" and "biometric categorization systems based on sensitive characteristics," as well as the "untargeted scraping" of faces from CCTV footage and the web to create facial recognition databases. In addition, deepfakes and other AI-generated images, videos and audio will need to be clearly labeled as such under its rules. 

This article originally appeared on Engadget at https://www.engadget.com/the-us-and-uk-are-teaming-up-to-test-the-safety-of-ai-models-063002266.html?src=rss

X is funding a lawsuit against Jack Dorsey’s Block to support the ‘right to freedom of speech’

X is funding a lawsuit filed by Chloe Happe against her former employer Block, which was founded by Jack Dorsey, the same person who founded the website formerly known as Twitter. In her lawsuit, Happe said Block had wrongfully fired her in retaliation for two posts she made on what she called her "pseudonymous, satirical account" on X while on her personal time. One of the posts, made after the October 7 Hamas attacks on Israel, referenced refugees fleeing Gaza and coming to the region of Kurdistan. In another, she used ableist language and a slur against transgender people while referencing the use of a "gender neutral restroom in the office."

Happe repeatedly stressed that she "expressed her political views, opinions, or beliefs in the form of satire." She said she did not mention Block in any post on her anonymous account and that she did not make those posts during her work hours. Happe also said that she "voluntarily deleted" the post on refugees within days of posting. She deleted the post with the slurs on the same day she made it upon seeing that X had limited its visibility. 

But Block still obtained copies of the posts and wouldn't tell her whether another employee had complained about them, she argued in her lawsuit, admitting that she initially denied making them out of fear that she could get in trouble. She accused Block of terminating her, without severance, solely because she expressed views the company disagreed with. Happe argued that Block's policies expressly allowed its employees to engage in speech like her posts, so it was the company that violated its own rules. Dorsey, who founded both Block (a financial services company) and Twitter, had publicly endorsed Elon Musk before the latter took over ownership of the social media platform. Last year, though, he changed his tune and criticized Musk, saying "it all went south" after he took over and that he "should have walked away" from the acquisition.

On his own account, Elon Musk reposted X's announcement that it's supporting Happe's lawsuit, adding the caption: "Supporting your right to freedom of speech." The company has previously funded other lawsuits in the name of "free speech." One of those cases is Gina Carano's lawsuit against Lucasfilm and Disney, which she accused of removing her from The Mandalorian for expressing views that were "not in line with the acceptable narrative of the time." Carano notably questioned the effectiveness of COVID-19 vaccines and added "boop/bop/beep" as her pronouns. She also shared a post on Instagram that compared the treatment of conservatives in America to the treatment of Jews in Nazi-era Germany. 

Happe is asking the court to order her reinstatement as a Block employee. She is also asking for compensatory and punitive damages, including for loss of pay from the time she was terminated. 

This article originally appeared on Engadget at https://www.engadget.com/x-is-funding-a-lawsuit-against-jack-dorseys-block-to-support-the-right-to-freedom-of-speech-073059007.html?src=rss

Microsoft Copilot has reportedly been blocked on all Congress-owned devices

US Congressional staff members can no longer use Microsoft's Copilot on their government-issued devices, according to Axios. The publication said it obtained a memo from House Chief Administrative Officer Catherine Szpindor telling congressional personnel that the AI chatbot is now officially prohibited. Apparently, the Office of Cybersecurity has deemed Copilot a risk "due to the threat of leaking House data to non-House approved cloud services." While there's nothing stopping staffers from using Copilot on their own phones and laptops, it will now be blocked on all Windows devices owned by Congress. 

Almost a year ago, Congress also set strict limits on the use of ChatGPT, which is powered by OpenAI's large language models, just like Copilot. It banned staffers from using the chatbot's free version on House computers but allowed them to continue using the paid ChatGPT Plus version for research and evaluation because of its tighter privacy controls. More recently, the White House revealed the rules federal agencies have to follow when it comes to generative AI, which are meant to ensure that any tools they use "do not endanger the rights and safety" of Americans. 

Microsoft told Axios that it recognizes government users' need for higher security requirements. Last year, it announced a roadmap of tools and services meant for government use, including an Azure OpenAI service for classified workloads and a new version of Microsoft 365's Copilot assistant. The company said those tools and services will feature higher levels of security, making them more suitable for handling sensitive data. Szpindor's office, according to Axios, will evaluate the government version of Copilot when it becomes available before deciding whether it can be used on House devices. 

This article originally appeared on Engadget at https://www.engadget.com/microsoft-copilot-has-reportedly-been-blocked-on-all-congress-owned-devices-034946166.html?src=rss

Activision is reportedly looking into the malware stealing its users’ login credentials

Activision is reportedly investigating a hacking campaign that's stealing login credentials from people playing its games. According to TechCrunch, bad actors have been installing malware onto victims' computers and using that access to steal logins for their gaming accounts and even their crypto wallets. Citing an unnamed source, the publication reported that the video game publisher has been helping victims remove the malware and regain control of their accounts, but that there isn't enough information yet to say how the malware is spreading.

A spokesperson for Activision, however, denied that the company is helping to remove the malware, stating that the issue lies with third-party software vendors and not with Activision software or platforms. TechCrunch's source said the malware "could be only affecting folks who have third-party tools installed," suggesting that people are getting it from non-Activision-developed software typically used alongside its games.

Delaney Simmons, Activision's spokesperson, told the publication that the company is aware of "claims that some player credentials across the broader industry could be compromised from malware from downloading or using unauthorized software." He added that the company's servers "remain secure and uncompromised."

A third-party origin is certainly a plausible theory, seeing as the hacking scheme appears to have been uncovered by someone known as Zeebler, who develops cheating software for Call of Duty. Zeebler told TechCrunch that he discovered the campaign when one of his software's customers had their account stolen. Upon looking into it, he reportedly discovered a database containing stolen credentials. He also said that the malware is disguised to look like real software but is actually designed to steal the usernames and passwords victims type in. Zeebler is presumably talking about third-party tools like cheating software being cloned to harvest people's logins, but phishing schemes that mimic Activision's official login design exist as well. The bottom line: people should be careful about what they download and always double-check that the login page they're typing into is the real deal. 
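
For readers who want to act on that advice, here is a minimal, hypothetical Python sketch (not anything Activision or Zeebler ships) of two quick sanity checks: comparing a downloaded tool against the SHA-256 checksum its developer publishes, and confirming that a login URL actually belongs to a domain you trust. The checksum value and the trusted-domain list below are placeholders.

```python
# Hypothetical sanity checks -- not Activision tooling. The checksum and the
# trusted-domain set below are placeholders you'd replace with real values.
import hashlib
from urllib.parse import urlparse

EXPECTED_SHA256 = "0" * 64  # the checksum the tool's developer publishes
TRUSTED_LOGIN_DOMAINS = {"activision.com", "battle.net"}  # example domains


def file_matches_checksum(path: str, expected_sha256: str) -> bool:
    """Hash the downloaded file and compare it to the published checksum."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest().lower() == expected_sha256.lower()


def looks_like_trusted_login(url: str) -> bool:
    """Accept only HTTPS pages hosted on (a subdomain of) a trusted domain."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    host = parsed.hostname.lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_LOGIN_DOMAINS)


if __name__ == "__main__":
    # A lookalike phishing domain fails even though it mentions the brand name.
    print(looks_like_trusted_login("https://activision.login-example.net"))  # False
    print(looks_like_trusted_login("https://profile.activision.com/login"))  # True
    # file_matches_checksum("downloaded_tool.exe", EXPECTED_SHA256) works the same way.
```

Neither check catches everything, but together they screen out the most common tricks: repackaged installers and lookalike login pages.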

Update, March 30 2024, 5:20PM ET: This story has been updated to include new information from Activision.

This article originally appeared on Engadget at https://www.engadget.com/activision-is-reportedly-looking-into-the-malware-stealing-its-users-login-credentials-092210468.html?src=rss

Twitch bans streams overlaid on boobs and butts

No more Fortnite Twitch streams on butts. I said what I said. If you're out of the loop on all things Twitch, there's a trend going around of streamers projecting their gameplay onto green-screened body parts, usually intimate ones like butts and breasts. Because normal picture-in-picture is now apparently too boring. Twitch is putting a stop to its streamers' shenanigans, though, and will officially prohibit "content that focuses on clothed intimate body parts such as the buttocks, groin, or breasts for extended periods of time" starting on March 29.

In a writeup on the trend, Kotaku explained that it all started when controversial streamer Morgpie projected her Fortnite gaming session onto a closeup of her behind. After that, other streamers followed suit, overlaying their games on body parts both real and fictional, like anime thighs or anime boobs breasting boobily on screen while they played. Now, boobs-and-butts streaming is out. And don't get hung up on the "clothed intimate body parts" wording, either: unclothed versions are, of course, also prohibited under Twitch's policy that doesn't allow users to broadcast or upload "content that contains depictions of real or fictional nudity, regardless of the medium used to create it."

Twitch had previously revised its guidelines due to Morgpie's activities on its platform. The streamer went live with a well-positioned camera that suggested she was gaming topless, shortly after Twitch relaxed its rules for sexual content on the platform. It gave rise to a meta of streamers pretending to be unclothed, prompting the platform to rescind those policy changes and ultimately to bar users from pretending to be fully or partially nude in their streams. 

This article originally appeared on Engadget at https://www.engadget.com/twitch-bans-streams-overlaid-on-boobs-and-butts-100542551.html?src=rss

Oregon’s Right to Repair bill is now a law

Oregon Governor Tina Kotek has signed the state's Right to Repair bill into law, and it even comes with a provision that potentially makes it stronger than California's and Minnesota's versions. It's the first to prohibit a practice called "parts pairing," which ties repairs to certain proprietary components. Parts pairing prevents third-party repair services from replacing a broken component with one that didn't come from the brand itself, because the replacement won't work with the company's software. People usually get error messages if they try to install an unauthorized part, forcing them to buy from the company itself. 

Under the new rules, preventing an independent provider from installing off-brand parts is prohibited, as is reducing the performance of a device that has been fixed with an unauthorized component. Even those error messages and warnings are not allowed. The ban on parts pairing doesn't cover devices that are already out, though; it will only apply to anything manufactured after January 1, 2025.
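
To make the practice concrete, here is a minimal, hypothetical sketch of the logic behind parts pairing; it's an illustration, not any manufacturer's actual firmware. The idea is that the device only fully enables a replacement whose serial number appears on a factory-provisioned allow-list, and anything else triggers exactly the warnings or reduced functionality the new law prohibits.

```python
# Hypothetical illustration of "parts pairing" -- not any vendor's real firmware.
from dataclasses import dataclass


@dataclass
class Part:
    kind: str    # e.g. "battery", "display", "fingerprint_sensor"
    serial: str  # serial number reported by the replacement part


# Serials the factory "paired" to this specific device at assembly time.
FACTORY_PAIRED_SERIALS = {
    "battery": {"BAT-0001-A"},
    "display": {"DSP-0042-Z"},
}


def check_replacement(part: Part) -> str:
    """Return how the device treats a part installed during a repair."""
    allowed = FACTORY_PAIRED_SERIALS.get(part.kind, set())
    if part.serial in allowed:
        return "fully enabled"
    # Unrecognized part: show a warning and/or limit the feature -- the exact
    # behavior Oregon's law bars for devices made after January 1, 2025.
    return "warning shown, functionality limited"


if __name__ == "__main__":
    print(check_replacement(Part("battery", "BAT-0001-A")))     # genuine, paired part
    print(check_replacement(Part("battery", "THIRDPARTY-99")))  # unauthorized part
```

In practice the check is enforced in firmware and often backed by cryptographic attestation rather than a simple serial lookup, but the effect on repairability is the same: only parts the manufacturer has blessed work without penalty.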

While manufacturers like Apple seem to have changed their tune in recent years and now generally support the Right to Repair movement, Oregon's parts pairing provision was still a point of contention. Apple senior manager John Perry told lawmakers in testimony that his company "agrees with the vast majority of Senate Bill 1596." However, the company is worried about the security implications of allowing unauthorized replacement parts, such as biometric sensors. 

Regardless, the parts pairing ban is now part of Oregon law, along with requirements that companies make compatible parts available to device owners through the company or authorized service providers at favorable prices and without any "substantial" conditions. Companies are also required to make documentation on how to fix their devices, as well as any special tools needed to repair them, available to repair shops. These rules will apply to phones sold after July 1, 2021 and to other consumer electronic devices sold after July 1, 2015. 

This article originally appeared on Engadget at https://www.engadget.com/oregons-right-to-repair-bill-is-now-a-law-064955635.html?src=rss