Artists criticize Apple’s lack of transparency around Apple Intelligence data

Later this year, millions of Apple devices will begin running Apple Intelligence, Cupertino's take on generative AI that, among other things, lets people create images from text prompts. But some members of the creative community are unhappy about what they say is the company’s lack of transparency around the raw information powering the AI model that makes this possible.

“I wish Apple would have explained to the public in a more transparent way how they collected their training data,” Jon Lam, a video games artist and a creators’ rights activist based in Vancouver, told Engadget. “I think their announcement could not have come at a worse time.”

Creatives have historically been some of the most loyal customers of Apple, a company whose founder famously positioned it at the “intersection of technology and liberal arts.” But photographers, concept artists and sculptors who spoke to Engadget said that they were frustrated about Apple’s relative silence around how it gathers data for its AI models.

Generative AI is only as good as the data its models are trained on. To that end, most companies have ingested just about anything they could find on the internet, consent or compensation be damned. Nearly 6 billion images used to train multiple AI models came from LAION-5B, a dataset of images scraped off the internet. In an interview with Forbes, David Holz, the CEO of Midjourney, said that the company’s models were trained on “just a big scrape of the internet” and that “there isn’t really a way to get a hundred million images and know where they’re coming from.”

Artists, authors and musicians have accused generative AI companies of sucking up their work for free and profiting off of it, leading to more than a dozen lawsuits in 2023 alone. Last month, major music labels including Universal and Sony sued AI music generators Suno and Udio, startups valued at hundreds of millions of dollars, for copyright infringement. Tech companies have – ironically – both defended these practices and struck licensing deals with content providers, including news publishers.

Some creatives thought that Apple might do better. “That’s why I wanted to give them a slight benefit of the doubt,” said Lam. “I thought they would approach the ethics conversation differently.”

Instead, Apple has revealed very little about the sources of Apple Intelligence’s training data. In a post published on the company’s machine learning research blog, the company wrote that, just like other generative AI companies, it grabs public data from the open web using Applebot, its purpose-built web crawler, something its executives have also said on stage. Apple’s AI and machine learning head John Giannandrea also reportedly said that “a large amount of training data was actually created by Apple,” but did not go into specifics. Apple has also reportedly signed deals with Shutterstock and Photobucket to license training images, though it hasn’t publicly confirmed those relationships. And while Apple Intelligence courts kudos for a supposedly more privacy-focused approach built on on-device processing and bespoke cloud computing, the fundamentals underpinning its AI models appear little different from those of its competitors.

Apple did not respond to specific questions from Engadget.

In May, Andrew Leung, a Los Angeles-based artist who has worked on films like Black Panther, The Lion King and Mulan, called generative AI “the greatest heist in the history of human intellect” in his testimony before the California State Assembly about the effects of AI on the entertainment industry. “I want to point out that when they use the term ‘publicly available’ it just doesn’t pass muster,” Leung said in an interview. “It doesn’t automatically translate to fair use.”

It’s also problematic, said Leung, for companies like Apple to offer people an opt-out only after their AI models have already been trained on data those people never consented to sharing. “We never asked to be a part of it.” Apple does allow websites to opt out of being scraped by Applebot for Apple Intelligence training data – the company says it respects robots.txt, a text file that any website can host to tell crawlers to stay away – but this would be triage at best. It’s not clear when Applebot began scraping the web or how anyone could have opted out before then. And, technologically, it’s an open question whether requests to remove information from trained generative models can even be honored.
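
For site owners, the opt-out itself comes down to a couple of lines in that robots.txt file. Here is a minimal sketch, assuming the crawler tokens Apple has documented (Apple has described a separate "Applebot-Extended" token governing the use of crawled content for AI training, distinct from regular Applebot search crawling; the exact token is worth verifying against Apple's current documentation):

    # Opt this site out of Apple's AI-training use while leaving
    # ordinary Applebot search crawling untouched.
    User-agent: Applebot-Extended
    Disallow: /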

This is a sentiment that even blogs aimed at Apple fanatics are echoing. “It’s disappointing to see Apple muddy an otherwise compelling set of features (some of which I really want to try) with practices that are no better than the rest of the industry,” wrote Federico Viticci, founder and editor-in-chief of Apple enthusiast blog MacStories.

Adam Beane, a Los Angeles-based sculptor who created a likeness of Steve Jobs for Esquire in 2011, has used Apple products exclusively for 25 years. But he said that the company’s unwillingness to be forthright with the source of Apple Intelligence training data has disillusioned him.

"I'm increasingly angry with Apple," he told Engadget. "You have to be informed enough and savvy enough to know how to opt out of training Apple's AI, and then you have to trust a corporation to honor your wishes. Plus, all I can see being offered as an option to opt out is further training their AI with your data."

Karla Ortiz, a San Francisco-based illustrator, is one of the plaintiffs in a 2023 lawsuit against Stability AI and DeviantArt, the companies behind image generation models Stable Diffusion and DreamUp respectively, and Midjourney. “The bottom line is, we know [that] for generative AI to function as is, [it] relies on massive overreach and violations of rights, private and intellectual,” she wrote on a viral X thread about Apple Intelligence. “This is true for all [generative] AI companies, and as Apple pushes this tech down our throats, it’s important to remember they are not an exception.”

The outrage against Apple is also part of a larger sense of betrayal felt by creative professionals toward the tech companies whose tools they depend on to do their jobs. In April, a Bloomberg report revealed that Adobe, which makes Photoshop and multiple other apps used by artists, designers and photographers, used questionably sourced images to train Firefly, its own image-generation model that Adobe claimed was “ethically” trained. And earlier this month, after customer outrage, the company was forced to update its terms of service to clarify that it wouldn’t use customers’ content to train generative AI models. “The entire creative community has been betrayed by every single software company we ever trusted,” said Lam. While it isn’t feasible for him to switch away from Apple products entirely, he’s trying to cut back; he’s planning to give up his iPhone for a Light Phone III.

“I think there is a growing feeling that Apple is becoming just like the rest of them,” said Beane. “A giant corporation that is prioritizing their bottom line over the lives of the people who use their product.”

This article originally appeared on Engadget at https://www.engadget.com/artists-criticize-apples-lack-of-transparency-around-apple-intelligence-data-131250021.html?src=rss

The nation’s oldest nonprofit newsroom is suing OpenAI and Microsoft

The Center for Investigative Reporting (CIR), the nation’s oldest nonprofit newsroom and the producer of Mother Jones and Reveal, sued OpenAI and Microsoft in federal court on Thursday for allegedly using its content to train AI models without consent or compensation. It is the latest in a long line of lawsuits filed by publishers and creators accusing generative AI companies of violating copyright.

“OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material,” said Monika Bauerlein, CEO of the Center for Investigative Reporting, in a statement. “This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it.” Bauerlein said that OpenAI and Microsoft treat the work of nonprofit and independent publishers “as free raw material for their products," and added that such moves by generative AI companies hurt the public’s access to truthful information in a “disappearing news landscape.”

OpenAI and Microsoft did not respond to a request for comment by Engadget.

The CIR’s lawsuit, filed in Manhattan’s federal court, accuses OpenAI and Microsoft, the startup’s largest investor, of multiple violations of the Copyright Act and the Digital Millennium Copyright Act.

News organizations find themselves at an inflection point with generative AI. While the CIR is joining publishers like The New York Times, New York Daily News, The Intercept, AlterNet and the Chicago Tribune in suing OpenAI, other publishers have chosen to strike licensing deals with the company. Those deals allow OpenAI to train its models on the publishers’ archives and ongoing coverage, and to cite their reporting in ChatGPT’s responses.

On the same day the CIR sued OpenAI, for instance, TIME magazine announced a deal granting the AI company access to 101 years of its archives. Last month, OpenAI signed a $250 million multi-year deal with News Corp, the owner of The Wall Street Journal, to train its models on content from more than a dozen of the publisher’s brands. The Financial Times, Axel Springer (the owner of Politico and Business Insider), The Associated Press and Dotdash Meredith have also signed deals with OpenAI.

This article originally appeared on Engadget at https://www.engadget.com/the-nations-oldest-nonprofit-newsroom-is-suing-openai-and-microsoft-174748454.html?src=rss

Uber is locking New York drivers out of its apps and blaming a city pay rule

For the last month, Uber has been locking New York City drivers out of its apps during low-demand periods, and Lyft has threatened to do so, too. Bloomberg reports that the ride-hailing companies blame a New York City Taxi and Limousine Commission (TLC) rule for their behavior. At least one drivers’ union says it may consider striking if the lockouts continue.

The mid-shift lockouts stem from a six-year-old NYC pay rule that requires ride-hailing companies to pay drivers for idle time between fares. By locking drivers out during slow periods, Uber caps the idle time it has to pay for – but drivers take home much less money for the same amount of time on the clock, and they can’t predict when they’ll lose access to the app.

Drivers are understandably angry. “I used to work 10 hours and make $300 to $350,” Nikoloz Tsulukidze, a full-time Uber driver, told Bloomberg. “Now, I just worked 10 hours and barely made $170. I was so disappointed. I’m paying for my gas and cannot make money.”

Uber and Lyft are deploying the “Look what you made me do!” strategy, pointing fingers at the TLC’s pay rule (and each other) while trying to turn drivers into lobbyists against the regulation. An Uber email to its drivers from last month, viewed by Bloomberg, encouraged drivers to “let the TLC know the effect their rules have had” on their wages.

The way the rule affects the two companies differently also fuels the blame game. Uber’s drivers have been busier this year, meaning its numbers carry more weight in the citywide averages that determine the minimum-pay calculations. “The city’s rule bizarrely holds Uber responsible for Lyft’s failures,” Uber spokesperson Freddi Goldstein told Bloomberg. “With Lyft struggling to keep drivers busy, we don’t have other options.”

Meanwhile, Lyft (naturally) views the situation in reverse. “Uber wants to change the rules so that Lyft is penalized,” the company wrote in a June email to drivers. “The current NYC pay formula is broken,” Lyft spokesperson CJ Macklin told Bloomberg. “It forces rideshare companies to limit when drivers can earn, and therefore how much they can earn.”

A drivers’ union says Uber’s over-hiring is the root cause of the ordeal. Bhairavi Desai, president of the New York Taxi Workers Alliance, told Bloomberg that the company “mismanaged” hiring by allowing too many drivers to join its ranks — and the workers are now left to foot the bill. She accused Uber of “gaming the system” by using the TLC’s rule to withhold “time that should be paid under the law and making it unpaid.” Desai says the union will consider striking if necessary.

Although Lyft hasn’t yet begun locking out drivers, it might. A June email to the company’s drivers warned that it would soon “have to” adopt a similar practice.

The current mess in NYC follows a long trail of ugly fights across the country between ride-hailing companies and city regulators. Uber and Lyft staged similar lockouts in 2019 – which continued into the following spring – in response to a flat minimum wage requirement for drivers. Earlier this year, the two companies threatened to pull out of Minneapolis after the city tried to force a driver pay raise that would push rates up to the equivalent of minimum wage.

This article originally appeared on Engadget at https://www.engadget.com/uber-is-locking-new-york-drivers-out-of-its-apps-and-blaming-a-city-pay-rule-204737818.html?src=rss

Record labels sue AI music generators for ‘massive infringement of recorded music’

Major music labels are taking on AI startups that they believe trained on their songs without paying. Universal Music Group, Warner Music Group and Sony Music Group sued the music generators Suno and Udio for allegedly infringing on copyrighted works on a “massive scale.”

The Recording Industry Association of America (RIAA) initiated the lawsuits and wants to establish that there is “nothing that exempts AI technology from copyright law or that excuses AI companies from playing by the rules.”

The music labels’ lawsuits in US federal court accuse Suno and Udio of scraping their copyrighted tracks from the internet. The filings against the AI companies reportedly demand injunctions against future use and damages of up to $150,000 per infringed work. (That sounds like it could add up to a monumental sum if the court finds them liable.) The suits appear aimed at establishing licensed training as the only acceptable industry framework for AI moving forward — while instilling fear in companies that train their models without consent.

Suno AI and Udio AI (Uncharted Labs runs the latter) are startups whose software generates music from text prompts. The former is a Microsoft partner for its Copilot music-generation tool. The RIAA claims the services’ output is so uncannily similar to existing works that the models must have been trained on copyrighted songs. It also claims the companies didn’t deny training on copyrighted works, instead shielding themselves by calling their training data “confidential business information” and pointing to standard industry practices.

According to The Wall Street Journal, the lawsuits accuse the AI generators of creating songs that sound remarkably similar to The Temptations’ “My Girl,” Green Day’s “American Idiot” and Mariah Carey’s “All I Want for Christmas Is You,” among others. They also claim the AI services produced vocals indistinguishable from those of artists like Lin-Manuel Miranda, Bruce Springsteen, Michael Jackson and ABBA.

Wired reports that one example cited in the lawsuit details how one of the AI tools reproduced a song nearly identical to Chuck Berry’s pioneering classic “Johnny B. Goode” using the prompt “1950s rock and roll, rhythm & blues, 12 bar blues, rockabilly, energetic male vocalist, singer guitarist,” along with some of Berry’s lyrics. The suit claims the generator reproduced the original track’s “Go, Johnny, go, go” chorus almost perfectly.

To be clear, the RIAA isn’t advocating based on the principle that all AI training on copyrighted works is wrong. Instead, it’s saying it’s illegal to do so without licensing and consent, i.e., when the labels (and, likely to a lesser degree, the artists) don’t make any money off of it.

The recording industry is working on AI deals of its own that license music in ways it believes are fair for its bottom line. These include an agreement between Universal and SoundLabs that lets the latter create vocal models for artists while the singers retain control of ownership and output. The label has also partnered with YouTube on an AI licensing and royalties deal. And Universal represents Drake, whose diss track against Kendrick Lamar earlier this year used AI-generated copies of Tupac Shakur and Snoop Dogg’s voices.

“There is room for AI and human creators to forge a sustainable, complementary relationship,” the filing against Suno reads. “This can and should be achieved through the well-established mechanism of free-market licensing that ensures proper respect for copyright owners.”

According to Bloomberg, Suno co-founder Mikey Shulman said in April that the company’s practices are “legal” and “fairly in line with what other people are doing.” The AI industry at large appears to be racing toward a threshold where its tools are considered too ubiquitous to be held accountable before anyone can do anything about how their models were trained.

“We work very closely with lawyers to make sure that what we’re doing is legal and industry standard,” Suno’s founder said in April. “If the law changes, obviously we would change our business one way or the other.”

This article originally appeared on Engadget at https://www.engadget.com/record-labels-sue-ai-music-generators-for-massive-infringement-of-recorded-music-172915925.html?src=rss

How small claims court became Meta’s customer service hotline

Last month, Ray Palena boarded a plane from New Jersey to California to appear in court. He found himself engaged in a legal dispute against one of the largest corporations in the world, and improbably, the venue for their David-versus-Goliath showdown would be San Mateo's small claims court.

Over the course of eight months and an estimated $700 (mostly in travel expenses), he was able to claw back what every other method had failed to recover: his personal Facebook account.

Those may be extraordinary lengths to regain a digital profile with no relation to its owner's livelihood, but Palena is one of a growing number of frustrated users of Meta's services who, unable to get help from an actual human through normal channels of recourse, are using the court system instead. And in many cases, it's working.

Engadget spoke with five individuals who have sued Meta in small claims court over the last two years in four different states. In three cases, the plaintiffs were able to restore access to at least one lost account. One person was also able to win financial damages and another reached a cash settlement. Two cases were dismissed. In every case, the plaintiffs were at least able to get the attention of Meta’s legal team, which appears to have something of a playbook for handling these claims.

At the heart of these cases is the fact that Meta lacks the volume of human customer service workers needed to assist those who lose their accounts. The company’s official help pages steer users who have been hacked toward confusing automated tools that often lead to dead-end links or recovery emails that don’t work once account information has been changed. (The company recently launched a $14.99-per-month program, Meta Verified, which grants access to human customer support. Its track record as a means of recovering hacked accounts after the fact has been spotty at best, according to anecdotal descriptions.)

Hundreds of thousands of people also turn to their state attorney general’s office, as some state AGs have made requests on users’ behalf — on Reddit, this is known as the “AG method.” But attorneys general across the country have been so inundated with these requests that they formally asked Meta to fix its customer service, too. “We refuse to operate as the customer service representatives of your company,” a coalition of 41 state AGs wrote in a letter to the company earlier this year.

Facebook and Instagram users have long sought creative and sometimes extreme measures to get hacked accounts back, given Meta’s lack of customer support options. Some have resorted to hiring their own hackers or buying an Oculus headset, since Meta has dedicated support staff for the device (users on Reddit report this “method” no longer works). The small claims approach has become a popular topic on Reddit forums where frustrated Meta users trade advice on various “methods” for getting an account back. People Clerk, a site that helps people write demand letters and other paperwork required for small claims court, published a help article called “How to Sue Facebook” in March.

It’s difficult to estimate just how many small claims cases are being brought by Facebook and Instagram users, but they may be on the rise. Patrick Forrest, the chief legal officer for Justice Direct, the legal services startup that owns People Clerk, says the company has seen a “significant increase” in cases against Meta over the last couple years.

One of the advantages of small claims court is that it’s much more accessible to people without deep pockets and legal training. Filing fees are typically under $100 and many courthouses have resources to help people complete the necessary paperwork for a case. “There's no discovery, there are no depositions, there's no pre-trial,” says Bruce Zucker, a law professor at California State University, Northridge. “You get a court date and it's going to be about a five or 10 minute hearing, and you have a judge who's probably also tried to call customer service and gotten nowhere.”

“Facebook and Instagram and WhatsApp [have] become crucial marketplaces where people conduct their business, where people are earning a living," Forrest said. “And if you are locked out of that account, business or personal, it can lead to severe financial damages, and it can disrupt your ability to sustain your livelihood.”

One such person whose finances were enmeshed with Meta’s products is Valerie Garza, the owner of a massage business. She successfully sued the company in a San Diego small claims court in 2022 after a hack that cost her access to her personal Facebook and Instagram accounts, as well as those associated with her business. She was able to document thousands of dollars in resulting losses.

A Meta legal representative contacted Garza a few weeks before her small claims court hearing, requesting she drop the case. She declined, and when Meta didn’t show up to her hearing, she won by default. "When we went through all of the loss of revenues," Garza told Engadget, "[the judge] kind of had to give it to me.”

But that wasn’t the end of Garza’s legal dispute with Meta. After the first hearing, the company filed a motion asking the judge to set aside the verdict, citing its own failure to appear at the hearing. Meta also tried to argue that its terms of service capped its liability at $100. Another hearing was scheduled, and a lawyer again contacted Garza offering to help get her account back.

“He seemed to actually kind of just want to get things turned back on, and that was still my goal, at this point,” Garza said. It was then she discovered that her business’ Instagram was being used to advertise sex work.

She began collecting screenshots of the activity on the account, which violated Instagram’s terms of service, as well as fraudulent charges for Facebook ads bought by whoever hacked her account. Once again, Meta didn’t show up to the hearing and a judge ordered the company to pay her the $7,268.65 in damages she had requested.

“I thought they were going to show up this time because they sent their exhibits, they didn't ask for a postponement or anything,” she says. “My guess is they didn't want to go on record and have a transcript showing how completely grossly negligent they are in their business and how very little they care about the safety or financial security of their paying advertisers.”

In July of 2023, Garza indicated in court documents that Meta had paid in full. In all, the process took more than a year, three court appearances and countless hours of work. But Garza says it was worth it. “I just can't stand letting somebody take advantage and walking away,” she says.

Even for individuals whose work doesn't depend on Meta's platforms, a hacked account can result in real harm.

Palena, who flew cross-country to challenge Meta in court, had no financial stake in his Facebook account, which he created nearly 20 years ago when the social network was still limited to college students. But whoever hacked him had changed the associated email address and phone number and begun using his page to run scam listings on Facebook Marketplace.

“I was more concerned about the damage it could do to me and my name if something did happen, if someone actually was scammed,” he tells Engadget. In his court filing, he asked for $10,000 in damages, the maximum allowed in California small claims court. He wrote that Meta had violated its own terms of service by allowing a hacked account to stay up, damaging his reputation. “I didn’t really care that much about financial compensation,” Palena says. “I really just wanted the account back because the person who hacked the account was still using it. They were using my profile with my name and my profile image.”

A couple weeks later, a legal rep from Meta reached out to him and asked him for information about his account. They exchanged a few emails over several weeks, but his account was still inaccessible. The same day he boarded a plane to San Mateo, the Meta representative emailed him again and asked if he would be willing to drop the case since “the access team is close to getting your account secure and activated again.” He replied that he intended to be in court the next day as he was still unable to get into his account.

Less than half an hour before his hearing was scheduled to start, he received the email he had spent months waiting for: a password reset link to get back into his account. Palena still attended the hearing, though Meta did not. According to court records reviewed by Engadget, Palena told the judge the case had been “tentatively resolved,” though he hasn’t officially dropped the case yet.

While filing a small claims case is comparatively simple, it can still be a minefield, starting with something as seemingly straightforward as which court to file in. Forrest notes that Facebook’s terms of service stipulate that legal cases must be brought in San Mateo County, home of Meta’s headquarters. But, confusingly, the terms of service for Meta accounts state that cases other than small claims must be filed in San Mateo. In spite of the apparent contradiction, some people (like Garza) have had success suing Meta outside of San Mateo.

Each jurisdiction also has different rules for maximum allowable compensation in small claims, what sorts of relief those courts can grant and even whether parties are allowed to have a lawyer present. The low barrier to entry means many plaintiffs are navigating the legal system for the first time without help, and making rookie mistakes along the way.

Shaun Freeman had spent years building up two Instagram accounts, which he describes as similar to TMZ but with “a little more character.” The pages, which had hundreds of thousands of followers, were also a significant source of income for Freeman, who has worked in the entertainment industry under the stage name Young Platinum.

He says his pages had been suspended or disabled in the past, but he was able to get them back through Meta’s appeals process, and once through a complaint to the California Attorney General’s office. But in 2023 he again lost access to both accounts. He says one was disabled and one is inaccessible due to what seems like a technical glitch.

He tried to file appeals and even asked a friend of a friend who worked at Meta to look into what had happened, but was unsuccessful. Apparently out of other options, he filed a small claims case in Nevada in February. A hearing was scheduled for May, but Freeman had trouble figuring out the legal mechanics. “It took me months and months to figure out how to get them served,” Freeman says. He was eventually able to hire a process server and got the necessary signature 10 days before his hearing. But it may have been too late. Court records show the case was dismissed for failure to serve.

Even without operator error, Meta seems content to create hardship for would-be litigants over matters much smaller than the company's more headline-grabbing antitrust and child safety disputes. Based on correspondence reviewed by Engadget, the company maintains a separate "small claims docket" email address to contact would-be litigants.

Ron Gaul, who lives in North Dakota, filed a small claims suit after Meta disabled his account following a wave of what he describes as targeted harassment. The case was eventually dismissed after Meta’s lawyers had it moved to district court, which is permissible for a small claims case under North Dakota law.

Gaul says he couldn’t keep up with the motions filed by Meta’s lawyers, whom he had hoped to avoid by filing in small claims court. “I went to small claims because I couldn't have a lawyer,” he tells Engadget.

Ryan, an Arizona real estate agent who asked to be identified by his first name only, decided to sue Meta in small claims with his partner after their Facebook accounts were disabled in the fall of 2022. They were both admins of several large Facebook Groups and he says their accounts were disabled over a supposed copyright violation.

Before a scheduled hearing, the company reached out. “They started basically trying to bully us,” Ryan says. “They started saying that they have a terms of service [and] they can do whatever they want, they could delete people for any reason.” Much like Gaul, Ryan expected small claims would level the playing field. But according to emails and court records reviewed by Engadget, Meta often deploys its own legal resources as well as outside law firms to respond to these sorts of claims and engage with small claims litigants outside of court. “They put people that still have legal training against these people that are, you know, representing themselves,” he said.

In the end, Meta’s legal team was able to help Ryan get his account back, and he agreed to drop himself from the small claims case. But two months later, his partner had still not gotten back into hers. Meta eventually told her that her account had been permanently deleted and could no longer be restored. The company ultimately offered $3,500 — the maximum amount for a small claims case in Arizona. He says they wanted more, but Meta refused, and they felt they were out of options. Ryan claims they had already lost tens of thousands of dollars in potential sales that they normally sourced from Facebook. “We were prepared to go further, but no lawyer would really take it on without a $15,000 retainer and it wasn’t worth it.”

While it may seem surprising that Meta would give these small claims cases so much attention, Zucker, the Cal State Northridge professor, says that big companies have their own reasons for wanting to avoid court. “I don’t think places like Google or Meta want to have a bunch of judgments against them … because then that becomes a public record and starts floating around,” he says. “So they do take these things seriously.”

Without responding to specific questions about the substance of this story, Meta instead sent Engadget the following statement:

"We know that losing and recovering access to your online accounts can be a frustrating experience. We invest heavily in designing account security systems to help prevent account compromise in the first place, and in educating our users, including by regularly sharing new security features and tips for how people can stay safe and vigilant against potential targeting by hackers. But we also know that bad actors, including scammers, target people across the internet and constantly adapt to evade detection by social media platforms like ours, email and telecom providers, banks and others. To detect malicious activity and help protect people who may have gotten compromised via email phishing, malware or other means, we also constantly improve our detection, enforcement and support systems, in addition to providing channels where people can report account access issues to us, working with law enforcement and taking legal action against malicious groups."

This article originally appeared on Engadget at https://www.engadget.com/how-small-claims-court-became-metas-customer-service-hotline-160224479.html?src=rss

Snap will pay $15 million to settle California lawsuit alleging sexual discrimination

The California Civil Rights Department has revealed that Snap Inc. has agreed to pay $15 million to settle the lawsuit it filed “over alleged discrimination, harassment, and retaliation against women at the company.” California’s civil rights agency started investigating the company behind Snapchat more than three years ago over claims that it discriminated and retaliated against female employees. The agency accused the company of failing to make sure that female employees were paid equally during a period of rapid growth between 2015 and 2022.

Women, especially those in engineering roles, were allegedly discouraged from applying for promotions and lost them to less qualified male colleagues when they did apply. The agency said that they also had to endure unwelcome sexual advances and faced retaliation when they spoke up: female employees were given negative performance reviews, were denied opportunities and, ultimately, were terminated.

"In California, we’re proud of the work of our state’s innovators who are a driving force of our nation’s economy," CRD Director Kevin Kish said in a statement. "We're also proud of the strength of our state’s civil rights laws, which help ensure every worker is protected against discrimination and has an opportunity to thrive. This settlement with Snapchat demonstrates a shared commitment to a California where all workers have a fair chance at the American Dream. Women are entitled to equality in every job, in every workplace, and in every industry."

Snap denies that it has an issue with pay inequality or sexual discrimination. In a statement sent to Politico and Bloomberg, the company says it only decided to settle due to the costs and impact of lengthy litigation. “We care deeply about our commitment to maintain a fair and inclusive environment at Snap, and do not believe we have any ongoing systemic pay equity, discrimination, harassment, or retaliation issues against women. While we disagreed with the California Civil Rights Department’s claims and analyses, we took into consideration the cost and impact of lengthy litigation, and the scope of the CRD’s other settlements, and decided it is in the best interest of the company to resolve these claims and focus on the future,” the company explains.

Under the settlement terms, which still have to be approved by a judge, $14.5 million of the total amount will go towards women who worked as employees at Snap Inc. in California between 2014 and 2024. The company will also be required to have a third-party monitor audit its sexual harassment, retaliation and discrimination compliance.

California's Civil Rights Department was the same agency that sued Activision Blizzard in 2021 and accused the company of fostering a "frat boy" culture that encouraged rampant misogyny and sexual harassment. The agency also found that women in the company were overlooked for promotions and were paid less than their male colleagues. It settled with the video game developer in late 2023 for $54 million, though it had to withdraw its claims that there was widespread sexual harassment at the company. 

This article originally appeared on Engadget at https://www.engadget.com/snap-will-pay-15-million-to-settle-california-lawsuit-alleging-sexual-discrimination-120019788.html?src=rss

Amazon’s affordable pharmacy program RxPass opens up to Medicare users with Prime

Amazon launched RxPass in 2023, giving Prime customers access to generic medications that treat more than 80 common health conditions for $5 a month on top of a Prime subscription. Now, Amazon is expanding the program to Prime members on Medicare, opening eligibility up to an additional 50 million customers, the company wrote in a press release.

As before, members get unlimited access to 60 generic medications, with shipping included — along with 24/7 access to a pharmacist — for a flat $5 monthly fee. Same-day delivery is offered in nine major cities.

If you're a Medicare user who takes at least one medication, you could save up to $70 per year, and even more for two or more medications, according to Amazon Pharmacy VP John Love. The company estimates that if every eligible Prime user signed up for the service, it could save Medicare $2 billion per year and reduce customer out-of-pocket spending. 

"For some of the Medicare population, the mobility feature can be very compelling. If you don't have easy access to a car or easy access to a retail pharmacy, the ability to get meds delivered is compelling," said Love. 

Amazon competes against other pharmacies, including CVS and Walgreens, and rival retailers like Costco. Medications included in RxPass are listed on Amazon’s site, and when searching, you’ll see the RxPass logo next to eligible medications. Amazon also offers discounts of up to 80 percent on generic drugs and 40 percent on brand names.

However, the program may not be cost-effective if you need medications not included in the 60 offered by Amazon, according to Clark.com. RxPass also requires Amazon Prime, which costs $139 a year or $15 per month, on top of the $5 fee.

This article originally appeared on Engadget at https://www.engadget.com/amazons-affordable-pharmacy-program-rxpass-opens-up-to-medicare-users-with-prime-123026092.html?src=rss

The US has sued Adobe over early termination fees and making subscriptions hard to cancel

The US government has sued Adobe and two senior company executives for allegedly deceiving consumers by hiding early termination fees and making them jump through hoops to cancel subscriptions to Adobe products.

The complaint filed by the Department of Justice on Monday accuses Adobe of pushing consumers toward its “annual paid monthly” subscription plan without adequately disclosing that canceling the plan within the first year can incur an early termination fee. The complaint also alleges that Adobe’s early termination fee disclosures were buried in fine print or required consumers to hover over tiny icons to find them.

“Americans are tired of companies hiding the ball during subscription signup and then putting up roadblocks when they try to cancel,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement. “The FTC will continue working to protect Americans from these illegal business practices.” 

Dana Rao, Adobe’s general counsel and chief trust officer, said that the company would fight the FTC in court. In a statement published on the company’s website, Rao said: “Subscription services are convenient, flexible and cost effective to allow users to choose the plan that best fits their needs, timeline and budget. Our priority is to always ensure our customers have a positive experience. We are transparent with the terms and conditions of our subscription agreements and have a simple cancellation process. We will refute the FTC’s claims in court.”

The FTC said that it took action against Adobe after receiving complaints from consumers around the country who said they were not aware of Adobe’s early termination fee. It noted that Adobe continued the practice despite being aware of consumers’ confusion. Consumers who reached out to Adobe’s customer service to cancel their subscriptions encountered further obstacles, like dropped calls and chats and being transferred to multiple representatives, the FTC’s statement adds.

The FTC’s action follows a wave of customer outrage over Adobe’s latest terms of service. Users were concerned that Adobe’s vague language suggested the company could freely use their work to train its generative AI models. In response to the backlash, Adobe announced updates to its terms of service to provide more detail around areas like AI and content ownership.

Update, June 17 2024, 1:39 PM ET: This story has been updated with a statement from Adobe. 

This article originally appeared on Engadget at https://www.engadget.com/the-us-has-sued-adobe-for-early-termination-fees-and-making-subscriptions-hard-to-cancel-165808358.html?src=rss

If Clearview AI scanned your face, you may get equity in the company

Controversial facial recognition company Clearview AI has agreed to an unusual settlement in a class action lawsuit, The New York Times reports. Rather than paying cash, the company would hand a 23 percent stake in itself to the Americans in its database. Without the settlement, Clearview could go bankrupt, according to court documents.

If you live in the US and have ever posted a photo of yourself publicly online, you may be part of the class. The settlement could amount to at least $50 million, according to court documents. It still must be approved by a federal judge.

Clearview AI, which counts billionaire Peter Thiel as a backer, says it has over 30 billion images in its database. Those can be accessed and cross-referenced by thousands of law enforcement agencies, including the FBI and the Department of Homeland Security.

Shortly after its existence came to light, Clearview was hit with lawsuits in Illinois, California, Virginia, New York and elsewhere, which were all consolidated into a class action suit in a federal court in Chicago. The cost of the litigation was said to be draining the company’s reserves, forcing it to seek a creative way to settle the suit.

The relatively small sum divided among the large number of users likely to be in the database means you won’t be receiving a windfall. In any case, a payout would only happen if the company goes public or is acquired, according to the report. Once that occurs, lawyers would take up to 39 percent of the settlement, meaning the pool could shrink to about $30 million. If a third of Americans were in the database (about 110 million people), each would get roughly 27 cents.
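
To make the back-of-envelope arithmetic explicit, here is a quick sketch in Python using the rough figures above (all of them estimates from the report, not final numbers):

    # Rough math using the estimates cited above; nothing here is final.
    settlement = 50_000_000      # minimum estimated settlement value
    lawyers_cut = 0.39           # attorneys may take up to 39 percent
    claimants = 110_000_000      # roughly a third of Americans

    remaining = settlement * (1 - lawyers_cut)  # about $30.5 million
    per_person = remaining / claimants          # about $0.28 each
    print(f"${remaining:,.0f} left after fees, roughly ${per_person:.2f} per claimant")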

That raises the question of whether it would be worth just over a quarter to see one of the creepiest companies of all time go bankrupt. To cite a small litany of the actions taken against it (on top of the US class action):

  • It was sued by the ACLU in 2020 (as part of the settlement, Clearview agreed to permanently halt sales of its biometric database to private companies in the US)

  • Italy slapped a €20 million fine on the company in 2022 and banned it from using images of Italians in its database

  • Privacy groups in Europe filed complaints against it for allegedly breaking privacy laws (2021)

  • The UK’s privacy watchdog slapped it with a £7.55 million fine and ordered it to delete the data of any UK resident

  • The LAPD banned the use of its software in 2020

  • Earlier this year the EU barred untargeted scraping of faces from the web, effectively blocking Clearview's business model in Europe

This article originally appeared on Engadget at https://www.engadget.com/if-clearview-ai-scanned-your-face-you-may-get-equity-in-the-company-120018460.html?src=rss