A hacking group reportedly leaked confidential data from thousands of Disney Slack channels.

A hacking group leaked over a terabyte of confidential data from more than 10,000 Slack channels belonging to Disney, the Wall Street Journal reported on Monday. The leaked information includes discussions about ad campaigns, computer code, details about unreleased projects and discussions about interview candidates, among other things. “Disney is investigating this matter,” a company spokesperson told the Journal.

Nullbulge calls itself a hacktivist group advocating for the rights of artists. A spokesperson for the group told the Journal that it targeted Disney over concerns about the company's handling of artist contracts and its approach to generative AI. For weeks, the group teased its access to Disney’s Slack, posting snippets of confidential information, such as attendance figures for Disneyland parks, on X. Nullbulge told the Journal that it accessed Disney’s confidential information by compromising an employee’s computer twice, including through malicious software that it buried in a video game add-on.

For more than a year, generative AI has sparked tensions between the companies that make and use the tech and members of the creative community who have accused corporations of using their work to train AI models without consent or compensation.

This article originally appeared on Engadget at https://www.engadget.com/a-hacking-group-reportedly-leaked-confidential-data-from-thousands-of-disney-slack-channels-001124844.html?src=rss

The EU will start enforcing its new AI regulations on August 1

The European Union has published the full and final text of the EU AI Act in its Official Journal, as reported by TechCrunch. Since the new law comes into force 20 days after its publication, it will be enforceable starting on August 1. All of its provisions will be fully applicable in two years' time, but some will take effect much earlier than that.

Six months from now, the bloc will start enforcing bans on prohibited applications of AI, such as the use of social credit ranking systems, the collection and compilation of facial recognition information for databases, and the use of real-time emotion recognition systems in schools and workplaces.

In nine months, the EU will start applying codes of practice to AI developers. The EU AI Office established by the European Commission will work with consultancy firms to draft those codes. It also plans to work with companies that provide general-purpose models deemed to carry systemic risks. As TechCrunch notes, though, that raises concerns that the industry's biggest players will be able to shape the rules that are supposed to keep them in check.

After a year, makers of general-purpose AI models, such as the one behind ChatGPT, will have to comply with new transparency requirements and be able to demonstrate that their systems are safe and easily explainable to users. In addition, the EU AI Act includes rules that apply to generative AI and manipulated media, such as a requirement that deepfakes and other AI-generated images, videos and audio be clearly labeled.

Companies training their AI models will have to respect copyright laws, as well, unless their model is created purely for research and development. "Rightsholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research," the AI Act's text reads. "Where the rights to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorization from rightsholders if they want to carry out text and data mining over such works."

This article originally appeared on Engadget at https://www.engadget.com/the-eu-will-start-enforcing-its-new-ai-regulations-on-august-1-140037756.html?src=rss

Biden administration awards car factories $1.7 billion so they can build EVs

The US Energy Department has revealed that it's awarding car and auto parts factories in eight states a total of $1.7 billion in funding, so that they can be retooled to build electric vehicles and their components. According to The New York Times and The Washington Post, the money will come from President Biden's Inflation Reduction Act, which provides subsidies to EV and battery plants, as well as the $7,500 tax credit consumers can get if they buy an electric vehicle.

One of the 11 recipients is a Jeep factory in Belvidere, Illinois, that closed last year. The $334.8 million it will get from the initiative will allow it to reopen to produce electric vehicles and restore 1,450 jobs. GM, which will be awarded $500 million, will convert a plant in Lansing, Michigan, to produce EVs instead of gasoline cars. The US subsidiary of Korean auto parts maker Hyundai Mobis will also get $32.6 million to refit a plant in Toledo, Ohio, for the production of plug-in vehicle components.

Government officials said they chose communities that are disproportionately affected by pollution or lack of investment. In addition, employees in all of the selected companies are represented by unions. The grants aren't set in stone — the companies still have to negotiate terms with the Department of Energy. They have to commit to retaining their current workers despite the shift to EVs, and they have to meet employment targets. The companies also have to promise to provide their workers with certain benefits, such as child care, pensions and training to further their careers. 

As The Times notes, several factories selected for the initiative are located in "battleground states" for the upcoming presidential elections. "This investment will create thousands of good-paying, union manufacturing jobs and retain even more — from Lansing, Michigan to Fort Valley, Georgia — by helping auto companies retool, reboot and rehire in the same factories and communities," Biden said in a statement. "This delivers on my commitment to never give up on the manufacturing communities and workers that were left behind by my predecessor."

Jennifer Granholm, the US Energy secretary, believes the funding will retain 15,000 jobs and create 3,000 new ones. Granholm also said that it will help the US "compete with other countries who were subsidizing their auto industries." While the secretary didn't mention China specifically, the country is known for subsidizing its EV manufacturers. Earlier this year, the US government quadrupled import tariffs on Chinese EVs, while the European Union announced that it was going to impose additional tariffs of up to 38 percent on Chinese-made electric vehicles to protect local manufacturers.

This article originally appeared on Engadget at https://www.engadget.com/biden-administration-awards-car-factories-17-billion-so-they-can-build-evs-133008903.html?src=rss

Elon Musk escapes paying $500 million to former Twitter employees

The social media platform formerly known as Twitter has been at the center of multiple legal battles from the very beginning of Elon Musk's takeover. One such suit relates to the more than 6,000 employees laid off by Musk following his acquisition of the company – and his alleged failure to pay them their full severance. Yesterday, Musk notched a win over his former employees.

The case in question is a class-action lawsuit filed by former Twitter employee Courtney McMillian. The complaint argued that under the federal Employee Retirement Income Security Act (ERISA), the Twitter Severance Plan owed laid-off workers three months of pay. The workers said they received less than that and sought $500 million in unpaid severance. However, on Tuesday, US District Judge Trina Thompson in the Northern District of California granted Musk's motion to dismiss the class-action complaint.

Judge Thompson found that the Twitter severance plan did not fall under ERISA because employees had received notice of a separate payout scheme prior to the layoffs. She dismissed the case, ruling that the severance program adopted after Musk's takeover, rather than the 2019 plan the plaintiffs were expecting, was the one that applied to these former employees.

This ruling is a setback for the thousands of dismissed Twitter staffers, but they may yet have chances to win larger payments. Thompson's order noted that the plaintiffs could amend their complaint to bring non-ERISA claims. If they do, Thompson said "this Court will consider issuing an Order finding this case related to one of the cases currently pending" against X Corp/Twitter. There are still lawsuits underway on behalf of some past top brass at Twitter, one of which seeks $128 million in unpaid severance while another attempts to recoup about $1 million in unpaid legal fees.

This article originally appeared on Engadget at https://www.engadget.com/elon-musk-escapes-paying-500-million-to-former-twitter-employees-203813996.html?src=rss

Microsoft and Apple give up their OpenAI board seats

Microsoft has withdrawn from OpenAI's board of directors a couple of weeks after the European Commission revealed that it's taking another look at the terms of their partnership, according to the Financial Times. The company has reportedly sent OpenAI a letter, announcing that it was giving up its seat "effective immediately." Microsoft took on an observer, non-voting role within OpenAI's board following an internal upheaval that led to the firing (and eventual reinstatement) of the latter's CEO, Sam Altman. 

According to previous reports, Apple was also supposed to get an observer seat on the board following its announcement that it will integrate ChatGPT into its devices. The Financial Times says that will no longer be the case. Instead, OpenAI will take a new approach and hold regular meetings with key partners, including the two Big Tech companies. In the letter, Microsoft reportedly told OpenAI that it's confident in the direction the company is taking, so its seat on the board is no longer necessary.

The company also wrote that its seat "provided insights into the board's activities without compromising its independence," but the European Commission wants to take a closer look at their relationship before deciding if it agrees. "We’re grateful to Microsoft for voicing confidence in the board and the direction of the company, and we look forward to continuing our successful partnership," an OpenAI spokesperson told The Times.

Microsoft initially invested $1 billion in OpenAI in 2019 and has since poured in more money, bringing its total investment to $13 billion. The European Commission started investigating their partnership last year to figure out if it breaks the bloc's merger rules, but it ultimately concluded that Microsoft didn't gain control of OpenAI. It didn't drop the probe altogether, however — Margrethe Vestager, the commission's executive vice-president for competition policy, revealed in June that European authorities asked Microsoft for additional information regarding their agreement "to understand whether certain exclusivity clauses could have a negative effect on competitors."

The commission is looking into the Microsoft-OpenAI agreement as part of a bigger antitrust investigation. It also sent information requests to other big players in the industry that are also working on artificial intelligence technologies, including Meta, Google and TikTok. The commission intends to ensure fairness in consumer choices and to examine acqui-hires to "make sure these practices don’t slip through [its] merger control rules if they basically lead to a concentration."

This article originally appeared on Engadget at https://www.engadget.com/microsoft-and-apple-give-up-their-openai-board-seats-120022867.html?src=rss

Texas court blocks the FTC’s ban on noncompete agreements

The Federal Trade Commission's (FTC) ban on noncompete agreements was supposed to take effect on September 4, but a Texas court has postponed its implementation by siding with the plaintiffs in a lawsuit that seeks to block the rule. Back in April, the FTC banned noncompetes, which have been widely used in the tech industry for years, in a bid to drive innovation and protect workers' rights and wages. A lot of companies are unsurprisingly unhappy with the agency's rule — as NPR notes, Dallas tax services firm Ryan LLC sued the FTC hours after its announcement. The US Chamber of Commerce and other groups of American businesses eventually joined the lawsuit.

"Noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism," FTC Chair Lina M. Khan said when the rule was announced. They prevent employees from moving to another company or from building businesses of their own in the same industry, so they may be stuck working in a job with lower pay or in an environment they don't like. But the Chamber of Commerce’s chief counsel Daryl Joseffer called the ban an attempt by the government to micromanage business decisions in a statement sent to Bloomberg

"The FTC’s blanket ban on noncompetes is an unlawful power grab that defies the agency’s constitutional and statutory authority and sets a dangerous precedent where the government knows better than the markets," Joseffer said. The FTC disagrees and told NPR that its "authority is supported by both statute and precedent."

US District Judge Ada Brown, an appointee of former President Donald Trump, wrote in her decision that "the text, structure, and history of the FTC Act reveal that the FTC lacks substantive rulemaking authority with respect to unfair methods of competition." Brown also said that the plaintiffs are "likely to succeed" in getting the rule struck down and that it's in the public's best interest to grant the plaintiffs' motion for a preliminary injunction. The judge added that the court will make a decision "on the ultimate merits of this action on or before August 30."

This article originally appeared on Engadget at https://www.engadget.com/texas-court-blocks-the-ftcs-ban-on-noncompete-agreements-150020601.html?src=rss

Microsoft agrees to $14 million California pay discrimination settlement

Microsoft is set to pay $14.4 million to resolve a case alleging retaliatory and discriminatory practices against California workers who took protected leave, such as family care, parental, disability and pregnancy leave. California's Civil Rights Department (CRD) launched an investigation into Microsoft in 2020, looking into whether the tech giant violated laws such as California's Fair Employment and Housing Act and the Americans with Disabilities Act. The proposed agreement is subject to court approval.

CRD claimed that workers who took protected leave "received lower bonuses and unfavorable performance reviews that, in turn, harmed their eligibility for merit increases, stock awards, and promotions." The California Department also alleged that Microsoft "failed to take sufficient action to prevent discrimination from occurring, altering the career trajectory of women, people with disabilities, and other employees who worked at the company, ultimately leaving them behind."

Microsoft's payment will go toward workers impacted from May 2017 until the date of the court's approval. The company must also retain an independent consultant for policy and practice recommendations, ensuring that managers don't use protected leave as a determinant when deciding rewards and promotions — managers and HR will need to undergo specific discrimination training. The independent consultant will also work with Microsoft to confirm that employees have a straightforward way to raise complaints if they feel taking protected leave has influenced their standing in the company. Furthermore, the independent consultant must provide an annual report on Microsoft's compliance with the agreement.

"The settlement announced today will provide direct relief to impacted workers and safeguard against future discrimination at the company," Kevin Kirsh, CRD's director, stated. "We applaud Microsoft for coming to the table and agreeing to make the changes necessary to protect workers in California."

This article originally appeared on Engadget at https://www.engadget.com/microsoft-agrees-to-14-million-california-pay-discrimination-settlement-140016567.html?src=rss

Artists criticize Apple’s lack of transparency around Apple Intelligence data

Later this year, millions of Apple devices will begin running Apple Intelligence, Cupertino's take on generative AI that, among other things, lets people create images from text prompts. But some members of the creative community are unhappy about what they say is the company’s lack of transparency around the raw information powering the AI model that makes this possible.

“I wish Apple would have explained to the public in a more transparent way how they collected their training data,” Jon Lam, a video games artist and a creators’ rights activist based in Vancouver, told Engadget. “I think their announcement could not have come at a worse time.”

Creatives have historically been some of the most loyal customers of Apple, a company whose founder famously positioned it at the “intersection of technology and liberal arts.” But photographers, concept artists and sculptors who spoke to Engadget said that they were frustrated about Apple’s relative silence around how it gathers data for its AI models.

Generative AI is only as good as the data its models are trained on. To that end, most companies have ingested just about anything they could find on the internet, consent or compensation be damned. Nearly 6 billion images used to train multiple AI models came from LAION-5B, a dataset of images scraped off the internet. In an interview with Forbes, David Holz, the CEO of Midjourney, said that the company’s models were trained on “just a big scrape of the internet” and that “there isn’t really a way to get a hundred million images and know where they’re coming from.”

Artists, authors and musicians have accused generative AI companies of sucking up their work for free and profiting off of it, leading to more than a dozen lawsuits in 2023 alone. Last month, major music labels including Universal and Sony sued AI music generators Suno and Udio, startups valued at hundreds of millions of dollars, for copyright infringement. Tech companies have – ironically – both defended their actions and also struck licensing deals with content providers, including news publishers.

Some creatives thought that Apple might do better. “That’s why I wanted to give them a slight benefit of the doubt,” said Lam. “I thought they would approach the ethics conversation differently.”

Instead, Apple has revealed very little about the source of training data for Apple Intelligence. In a post published on the company’s machine learning research blog, the company wrote that, just like other generative AI companies, it grabs public data from the open web using AppleBot, its purpose-built web crawler, something that its executives have also said on stage. Apple’s AI and machine learning head John Giannandrea also reportedly said that “a large amount of training data was actually created by Apple” but did not go into specifics. And Apple has also reportedly signed deals with Shutterstock and Photobucket to license training images, but hasn’t publicly confirmed those relationships. While Apple Intelligence tries to win kudos for a supposedly more privacy-focused approach using on-device processing and bespoke cloud computing, the fundamentals undergirding its AI models appear little different from those of its competitors.

Apple did not respond to specific questions from Engadget.

In May, Andrew Leung, a Los Angeles-based artist who has worked on films like Black Panther, The Lion King and Mulan, called generative AI “the greatest heist in the history of human intellect” in his testimony before the California State Assembly about the effects of AI on the entertainment industry. “I want to point out that when they use the term ‘publicly available’ it just doesn’t pass muster,” Leung said in an interview. “It doesn’t automatically translate to fair use.”

It’s also problematic for companies like Apple, said Leung, to only offer an option for people to opt out once they’ve already trained AI models on data that they did not consent to. “We never asked to be a part of it.” Apple does allow websites to opt out of being scraped by AppleBot for Apple Intelligence training data – the company says it respects robots.txt, a text file that any website can host to tell crawlers to stay away – but this would be triage at best. It's not clear when AppleBot began scraping the web or how anyone could have opted out before then. And, technologically, it's an open question how or if requests to remove information from generative models can even be honored.
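To make the opt-out mechanics concrete: robots.txt is purely advisory, and honoring it happens entirely in the crawler's own code. Below is a minimal sketch of such a check using Python's standard urllib.robotparser module; the "Applebot" user-agent string follows Apple's published crawler documentation, while the site and URLs are hypothetical.

```python
# Minimal sketch of a crawler-side robots.txt check, using only Python's
# standard library. A site wishing to opt out of Apple's crawler would
# serve rules like the following at https://example.com/robots.txt:
#
#   User-agent: Applebot
#   Disallow: /
#
# "Applebot" follows Apple's public crawler documentation; example.com and
# the URLs below are hypothetical stand-ins.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

for url in ("https://example.com/", "https://example.com/articles/1"):
    allowed = parser.can_fetch("Applebot", url)
    print(f"{url} -> {'crawl allowed' if allowed else 'crawl disallowed'}")
```

Because the check lives in the crawler, nothing in the protocol forces compliance; a site's rules only matter to crawlers that choose to consult them, which is one reason critics describe the opt-out as triage at best.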

This is a sentiment that even blogs aimed at Apple fanatics are echoing. “It’s disappointing to see Apple muddy an otherwise compelling set of features (some of which I really want to try) with practices that are no better than the rest of the industry,” wrote Federico Viticci, founder and editor-in-chief of Apple enthusiast blog MacStories.

Adam Beane, a Los Angeles-based sculptor who created a likeness of Steve Jobs for Esquire in 2011, has used Apple products exclusively for 25 years. But he said that the company’s unwillingness to be forthright with the source of Apple Intelligence training data has disillusioned him.

"I'm increasingly angry with Apple," he told Engadget. "You have to be informed enough and savvy enough to know how to opt out of training Apple's AI, and then you have to trust a corporation to honor your wishes. Plus, all I can see being offered as an option to opt out is further training their AI with your data."

Karla Ortiz, a San Francisco-based illustrator, is one of the plaintiffs in a 2023 lawsuit against Stability AI and DeviantArt, the companies behind image generation models Stable Diffusion and DreamUp respectively, and Midjourney. “The bottom line is, we know [that] for generative AI to function as is, [it] relies on massive overreach and violations of rights, private and intellectual,” she wrote on a viral X thread about Apple Intelligence. “This is true for all [generative] AI companies, and as Apple pushes this tech down our throats, it’s important to remember they are not an exception.”

The outrage against Apple is also part of a larger sense of betrayal among creative professionals against tech companies whose tools they depend on to do their jobs. In April, a Bloomberg report revealed that Adobe, which makes Photoshop and multiple other apps used by artists, designers and photographers, used questionably sourced images to train Firefly, its own image-generation model that Adobe claimed was “ethically” trained. And earlier this month, the company was forced to update its terms of service to clarify that it wouldn’t use the content of its customers to train generative AI models after customer outrage. “The entire creative community has been betrayed by every single software company we ever trusted,” said Lam. While it isn’t feasible for him to switch away from Apple products entirely, he’s trying to cut back: he’s planning to give up his iPhone for a Light Phone III.

“I think there is a growing feeling that Apple is becoming just like the rest of them,” said Beane. “A giant corporation that is prioritizing their bottom line over the lives of the people who use their product.”

This article originally appeared on Engadget at https://www.engadget.com/artists-criticize-apples-lack-of-transparency-around-apple-intelligence-data-131250021.html?src=rss

The nation’s oldest nonprofit newsroom is suing OpenAI and Microsoft

The Center for Investigative Reporting, the nation’s oldest nonprofit newsroom and the producer of Mother Jones and Reveal, sued OpenAI and Microsoft in federal court on Thursday for allegedly using its content to train AI models without consent or compensation. This is the latest in a long line of lawsuits filed by publishers and creators accusing generative AI companies of violating copyright.

“OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material,” said Monika Bauerlein, CEO of the Center for Investigative Reporting, in a statement. “This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it.” Bauerlein said that OpenAI and Microsoft treat the work of nonprofit and independent publishers “as free raw material for their products," and added that such moves by generative AI companies hurt the public’s access to truthful information in a “disappearing news landscape.”

OpenAI and Microsoft did not respond to a request for comment by Engadget.

The CIR’s lawsuit, which was filed in Manhattan’s federal court, accuses OpenAI and Microsoft, which owns nearly half of the company, of violating the Copyright Act and the Digital Millennium Copyright Act multiple times.

News organizations find themselves at an inflection point with generative AI. While the CIR is joining publishers like The New York Times, New York Daily News, The Intercept, AlterNet and the Chicago Tribune in suing OpenAI, other publishers have chosen to strike licensing deals with the company. These deals allow OpenAI to train its models on the archives and ongoing content of these publishers and to cite information from them in responses offered by ChatGPT.

On the same day the CIR sued OpenAI, for instance, TIME magazine announced a deal with the company that grants it access to 101 years of the magazine's archives. Last month, OpenAI signed a $250 million multi-year deal with News Corp, the owner of The Wall Street Journal, to train its models on more than a dozen brands owned by the publisher. The Financial Times, Axel Springer (the owner of Politico and Business Insider), The Associated Press and Dotdash Meredith have also signed deals with OpenAI.

This article originally appeared on Engadget at https://www.engadget.com/the-nations-oldest-nonprofit-newsroom-is-suing-openai-and-microsoft-174748454.html?src=rss