George Carlin’s estate settles lawsuit against podcasters’ AI comedy special

There will be no follow-up to that AI-generated George Carlin comedy special released by the podcast Dudesy. In January, Carlin's estate filed a lawsuit against the podcast and its creators Will Sasso and Chad Kultgen, accusing them of violating the performer's right of publicity and infringing on a copyright. Now, the two sides have reached a settlement agreement, which includes the permanent removal of the comedy special from Dudesy's archive. Sasso and Kultgen have also agreed never to repost it on any platform and never to use Carlin's image, voice or likeness again without approval from the estate, according to The New York Times.

The AI algorithm that Dudesy used for the special was trained on thousands of hours of Carlin's routines that spanned decades of his career. It generated enough material for an hour-long special, but it did a pretty poor impression of the late comedian with basic punchlines and very little of what characterized Carlin's humor. In a statement, Carlin's daughter Kelly called it a "poorly-executed facsimile cobbled together by unscrupulous individuals."

Josh Schiller, who represented the Carlin estate in court, told The Times that "[t]he world has begun to appreciate the power and potential dangers inherent in AI tools, which can mimic voices, generate fake photographs and alter video." He added that it's "not a problem that will go away by itself" and that it "must be confronted with swift, forceful action in the courts." The companies making AI software "must also bear some measure of accountability," the lawyer said. 

This lawsuit is just one of many filed by creatives against AI companies and the people who use the technology to train algorithms on someone else's work. Several non-fiction authors and novelists, including George R.R. Martin, John Grisham and Jodi Picoult, sued OpenAI for using their work to train its large language models. The New York Times and a handful of other news organizations also sued the company for using their articles for training and for allegedly reproducing their content word-for-word without attribution. 

This article originally appeared on Engadget at https://www.engadget.com/george-carlins-estate-settles-lawsuit-against-podcasters-ai-comedy-special-075224304.html?src=rss

Telegram takes on WhatsApp with business-focused features

Telegram isn't quite as widely used as WhatsApp, but businesses can now add it as a communication option for their customers if they want to. Anybody on the messaging app can now convert their account into a business account to get access to features designed to make it easier for customers to find and contact them. They'll be able to display their hours of operation on their profile and pin their location on a map. With those operating hours in place, customers can see at a glance whether a business is still open and what time it closes for the day. 

A screenshot showing a business profile on Telegram.
Telegram

Businesses can also customize their start page and display information about their products and services on empty chats, giving customers a glimpse of what's on offer even before they get in touch. To make it easier to respond to multiple inquiries, Telegram Business accounts will also be able to craft and save preset messages that they can send as quick replies. Of course, they can also pre-write greeting and away messages that get automatically sent to customers who contact them. They can use a Telegram Bot to chat with their customers, as well, though we all know how frustrating it can be to talk with a robot when we need to reach a human customer service rep. All these features come at no extra cost, but they're only available to those with a Telegram Premium account, which costs $5 a month.

In addition to introducing its new business-focused features, Telegram has also revealed that it's giving channel owners 50 percent of the revenue earned from ads displayed on their channels, as long as they have at least 1,000 subscribers. Based on information previously shared by company founder Pavel Durov, Telegram seems to be doing well financially and can afford to be that generous. Durov told The Financial Times that he expects the messaging app to be profitable by next year and that it's currently exploring a future initial public offering.

This article originally appeared on Engadget at https://www.engadget.com/telegram-takes-on-whatsapp-with-business-focused-features-101843987.html?src=rss

The US and UK are teaming up to test the safety of AI models

OpenAI, Google, Anthropic and other companies developing generative AI are continuing to improve their technologies and to release better and better large language models. In order to create a common approach for independently evaluating the safety of those models as they come out, the UK and the US governments have signed a Memorandum of Understanding. Together, the UK's AI Safety Institute and its counterpart in the US, which was announced by Vice President Kamala Harris but has yet to begin operations, will develop suites of tests to assess the risks and ensure the safety of "the most advanced AI models."

They're planning to share technical knowledge, information and even personnel as part of the partnership, and one of their initial goals seems to be performing a joint testing exercise on a publicly accessible model. The UK's science minister Michelle Donelan, who signed the agreement, told The Financial Times that they've "really got to act quickly" because they're expecting a new generation of AI models to come out over the next year. They believe those models could be "complete game-changers," and they still don't know what they could be capable of. 

According to The Times, this partnership is the first bilateral arrangement on AI safety in the world, though both the US and the UK intend to team up with other countries in the future. "AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society," US Secretary of Commerce Gina Raimondo said. "Our partnership makes clear that we aren't running away from these concerns — we're running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance."

While this particular partnership is focused on testing and evaluation, governments around the world are also drafting regulations to keep AI tools in check. Back in March, the White House signed an executive order aiming to ensure that federal agencies are only using AI tools that "do not endanger the rights and safety of the American people." A couple of weeks before that, the European Parliament approved sweeping legislation to regulate artificial intelligence. It will ban "AI that manipulates human behavior or exploits people’s vulnerabilities" and "biometric categorization systems based on sensitive characteristics," as well as the "untargeted scraping" of faces from CCTV footage and the web to create facial recognition databases. In addition, deepfakes and other AI-generated images, videos and audio will need to be clearly labeled as such under its rules. 

This article originally appeared on Engadget at https://www.engadget.com/the-us-and-uk-are-teaming-up-to-test-the-safety-of-ai-models-063002266.html?src=rss
X is funding a lawsuit against Jack Dorsey’s Block to support the ‘right to freedom of speech’

X is funding a lawsuit filed by Chloe Happe against her former employer Block, which was founded by Jack Dorsey, the same person who founded the website formerly known as Twitter. In her lawsuit, Happe said Block had wrongfully fired her in retaliation for two posts she made on what she called her "pseudonymous, satirical account" on X while on her personal time. One of the posts, made after the October 7 Hamas attacks on Israel, referenced refugees fleeing Gaza and coming to the region of Kurdistan. In another, she used ableist language and a slur against transgender people while referencing the use of a "gender neutral restroom in the office."

Happe repeatedly stressed that she "expressed her political views, opinions, or beliefs in the form of satire." She said she did not mention Block in any post on her anonymous account and that she did not make those posts during her work hours. Happe also said that she "voluntarily deleted" the post on refugees within days of posting. She deleted the post with the slurs on the same day she made it upon seeing that X had limited its visibility. 

But Block still obtained copies of the posts and wouldn't tell her whether another employee had complained about them, she argued in her lawsuit, admitting that she initially denied making them out of fear that she could get in trouble. She accused Block of terminating her, without severance, solely because she expressed views the company disagreed with. Happe argued that Block's policies expressly allowed its employees to engage in speech like her posts, so it was the company that violated its own rules. Jack Dorsey, the founder of both Block (a financial services company) and Twitter, had publicly endorsed Elon Musk before the latter took over ownership of the social media platform. Last year, though, he changed his tune and criticized Musk, saying "it all went south" after he took over and that he "should have walked away" from the acquisition.

On his account, Elon Musk retweeted X's announcement that it's supporting Happe's lawsuit with the caption: "Supporting your right to freedom of speech." The company had previously funded other lawsuits in the name of "free speech." One of those cases is Gina Carano's lawsuit against Lucasfilm and Disney, which she accused of removing her from The Mandalorian for expressing views that were "not in line with the acceptable narrative of the time." Carano notably questioned the effectiveness of COVID-19 vaccines and added "boop/bop/beep" as her pronouns. She also shared a post on Instagram that compared the treatment of conservatives in America to the treatment of Jews in Nazi-era Germany. 

Happe is asking the court to order her reinstatement as a Block employee. She is also asking for compensatory and punitive damages, including for loss of pay from the time she was terminated. 

This article originally appeared on Engadget at https://www.engadget.com/x-is-funding-a-lawsuit-against-jack-dorseys-block-to-support-the-right-to-freedom-of-speech-073059007.html?src=rss
Microsoft Copilot has reportedly been blocked on all Congress-owned devices

US Congressional staff members can no longer use Microsoft's Copilot on their government-issued devices, according to Axios. The publication said it obtained a memo from House Chief Administrative Officer Catherine Szpindor telling Congress personnel that the AI chatbot is now officially prohibited. Apparently, the Office of Cybersecurity has deemed Copilot a risk "due to the threat of leaking House data to non-House approved cloud services." While there's nothing stopping staffers from using Copilot on their own phones and laptops, it will now be blocked on all Windows devices owned by Congress. 

Almost a year ago, Congress also set a strict limit on the use of ChatGPT, which is powered by OpenAI's large language models, just like Copilot. It banned staffers from using the chatbot's free version on House computers, but it allowed them to continue using the paid (ChatGPT Plus) version for research and evaluation due to its tighter privacy controls. More recently, the White House revealed rules federal agencies have to follow when it comes to generative AI, which would ensure that any tools they use "do not endanger the rights and safety" of Americans. 

Microsoft told Axios that it does recognize government users' need for higher security requirements. Last year, it announced a roadmap of tools and services meant for government use, including an Azure OpenAI service for classified workloads and a new version of Microsoft 365's Copilot assistant. The company said that all those tools and services will feature higher levels of security that would make them more suitable for handling sensitive data. Szpindor's office, according to Axios, will evaluate the government version of Copilot when it becomes available before deciding whether it can be used on House devices. 

This article originally appeared on Engadget at https://www.engadget.com/microsoft-copilot-has-reportedly-been-blocked-on-all-congress-owned-devices-034946166.html?src=rss
Activision is reportedly looking into the malware stealing its users’ login credentials

Activision is reportedly investigating a hacking campaign that's stealing login credentials from people playing its games. According to TechCrunch, bad actors have been successfully installing malware onto victims' computers and using that access to steal logins for their gaming accounts and even their crypto wallets. Citing an unnamed source, the publication reported that the video game publisher has been helping victims remove the malware and regain control of their accounts, but that there isn't enough information yet to say how the malware is spreading.

A spokesperson for Activision, however, denied that the company is helping to remove the malware, stating that the issue is with third-party software vendors and not with Activision software or platforms. TechCrunch's source said the malware "could be only affecting folks who have third-party tools installed," insinuating that people are getting it from non-Activision-developed software typically used with its games.

Delaney Simmons, Activision's spokesperson, told the publication that the company is aware of "claims that some player credentials across the broader industry could be compromised from malware from downloading or using unauthorized software." He added that the company's servers "remain secure and uncompromised."

A third-party origin is certainly a plausible theory, seeing as the hacking scheme appears to have been uncovered by someone known as Zeebler, who develops cheating software for Call of Duty. Zeebler told TechCrunch that he discovered the campaign when one of his customers had their account for his software stolen. Upon looking into it, he reportedly discovered a database containing stolen credentials. He also said that the malware is disguised to look like real software but was actually designed to steal the usernames and passwords victims type in. Zeebler is presumably talking about third-party tools like cheating software being cloned to harvest people's logins, but phishing schemes that mimic Activision's official login design exist as well. The bottom line is that people should be careful about what they download and always double-check that the login page they're typing into is the real deal. 

Update, March 30 2024, 5:20PM ET: This story has been updated to include new information from Activision.

This article originally appeared on Engadget at https://www.engadget.com/activision-is-reportedly-looking-into-the-malware-stealing-its-users-login-credentials-092210468.html?src=rss